On Mon, Apr 13, 2009 at 03:56:05PM -0400, John A. Sullivan III wrote:

Not quite. multipathd had a private namespace in 5.2, but it didn't unmount all of the unnecessary mountpoints. This was changed in 5.3 for two reasons:

1. Otherwise, if you unmounted a filesystem that had been mounted before you started multipathd, and then tried to remove the device, you couldn't, since the private namespace still had it open.

2. To catch configurations like yours. In RHEL 5.2, multipathd started up and worked, but if you ever lost access to /usr/local/sbin, multipathd would stop working. By unmounting the filesystems that could potentially disappear (or at least most of them), you can force people to do things in a way that makes multipathd fault tolerant. In RHEL 5.2, multipathd didn't make a private, in-memory copy of your script. It just used the one on the regular filesystem, which is the very thing that the private namespace was trying to avoid.

Sorry for the confusion. You still need to call your script with /bin/bash in your actual device section, just like you originally were. But you also need a dummy device section to cause multipathd to pull that script into the private namespace. In the dummy device section, you need to reference the script directly. This is because multipathd only pulls in commands, not their arguments (even if the argument is a script to run). When I tested this setup before my first email, my multipath.conf devices section looked like this:

devices {
	device {
		vendor "WINSYS"
		product "SF2372"
		path_grouping_policy group_by_prio
		prio_callout "/bin/bash /sbin/mpath_prio_one"
	}
	device {
		vendor "dummy"
		product "dummy"
		prio_callout "/sbin/mpath_prio_one"
	}
}

mpath_prio_one is a bash script that just echoes 1.

-Ben
http://www.redhat.com/archives/dm-devel/2009-April/msg00157.html
One feature often requested for Scala in the last few years has been support for named and default arguments to methods and constructors. Version 2.8 adds that support, and opens up a variety of opportunities for improving APIs and making your code clearer. I'll show you what this looks like at the call site first, and then show you how you can write your own methods and constructors to take advantage of this new feature.

Let's say we have a method that sends an email to somebody. Before Scala 2.8, I might call this method as follows:

sendEmail("jon.pretty@example.com", List("recipient@example.com"), "Test email", b, Nil, Nil, Nil)

This sends an email to recipient@example.com with the subject "Test email" and a body, b (which I'm not going to detail here, but let's just assume it's some text). Probably like the majority of emails you write, the CC and BCC lists are empty, and the email has no attachments. So it's quite a short and concise method call, but it offers no help in reminding me which way round my CC, BCC and attachment lists go, either when I'm writing the code or when I'm reading it back later. The same method call taking advantage of Scala 2.8's named and default arguments might look like this:

sendEmail(from = "jon.pretty@example.com", to = List("recipient@example.com"), subject = "Test email", body = b)

Isn't that much clearer? By naming the arguments, I can happily leave out those which have default values, shuffle their order, and spend less time worrying about the (maybe arbitrary) order in which they were defined.

sendEmail(
  body = b,
  subject = "Test email",
  to = List("recipient@example.com"),
  from = "jon.pretty@example.com",
  attachments = List(file1, file2)
)

Named and default arguments can be used in all parameter lists, class or case class constructors included.
As a bonus, all case classes now come with a copy method, allowing you to replicate your case class instance by specifying (by name, of course) those arguments which should differ in the copy, e.g.:

case class Fruit(name : String, color : String, citrus : Boolean)
val apple = Fruit("Apple", "green", false)
val pear = apple.copy(name = "Pear")
val lime = apple.copy(name = "Lime", citrus = true)

One small point worth noting is that it's not unusual to have identifiers in the scope of your method call with the same names as the method's arguments. Rather than add indirection and have to rename your identifiers, there's no problem at all writing:

val to = ...
val subject = ...
val body = ...
sendEmail(to = to, subject = subject, body = body)

and having it do just what you want it to!

One particularly convenient aspect of the named arguments feature is that all existing Scala method and constructor definitions can already be called in the named-argument style straight out of the box, for example:

List("Hello", "world").mkString(sep = " ", end = "!", start = "")

will give you Hello world!. However, this convenience comes with a trap! Until now, the names of arguments were a somewhat arbitrary choice for library developers, and weren't considered an important part of the API. This has suddenly changed, so that a method call to mkString(sep = " ") will fail to compile if the argument sep is renamed to separator in a later version. Scala 2.9 implements a neat solution to this problem, but while we're waiting for that, be cautious about referring to arguments by name if their names may change in the future.

So, we've already learned that all Scala methods can be called using the new syntax without any change to their definitions, but you still have to supply every parameter unless defaults are provided. Luckily, adding them is very intuitive.
Taking our sendEmail example, I want to specify default values for the subject, cc, bcc and attachments arguments:

def sendEmail(
  to : List[String],
  from : String,
  subject : String = "No subject",
  cc : List[String] = Nil,
  bcc : List[String] = Nil,
  attachments : List[File] = Nil
) = ...

It's really as simple as writing = default value after each parameter you want to have a default! The default values needn't be constant literals either; you can call methods or put blocks of code. You can even refer to antecedent arguments, provided they are in a different parameter list. Here's a more complex example with a side effect in a default parameter. Needless to say, default parameters are evaluated lazily!

def fibonacci(a : Int = 0)(b : Int = {
  log("Using default!")
  a + 1
}) = ...

Finally, it's possible to provide default values for implicit arguments too, to be used when no implicit value of that type is within scope, thus ensuring that a call to such a method will never fail to compile due to an unresolved implicit, e.g.:

def log(msg : => String)(implicit logger : PrintStream = System.out) = logger.println(msg)

Hopefully this has given you a good overview of the possibilities that Scala 2.8's named and default arguments open up! There's more detail available in the original Named and Default Arguments SID by Lukas Rytz, who very kindly helped with this article, and in Section 6.6.1 of the Scala Language Specification.

Have a question or opinion about named and default arguments in Scala? Discuss this article in the Articles Forum topic, What's New in Scala 2.8: Named and Default Parameters.

The Scala programming language website is at:
The Scala 2.8 release notes are at:
The Scaladoc collections API is at:

Jon Pretty is an experienced Scala developer, consultant and evangelist.
Having launched the very first commercial Scala applications in 2005, Jon has been involved with Scala for the last six years and has overseen numerous successful Scala deployments to UK Government and commercial enterprises. Jon currently splits his time between working with the Scala Solutions team in Switzerland and Scala Technology in the UK.
http://www.artima.com/scalazine/articles/named_and_default_parameters_in_scalaP.html
Font(u32 Color, u32 Size, const void *MemFont, u32 MemFont_size, Minimum *min);

#include "Andale_ttf.h" // This file will be created from the Makefile;
                        // just put your Andale.ttf inside the "data" folder.

int main(){
	NoRSX gfx;
	Font A(COLOR_WHITE, 40, Andale_ttf, Andale_ttf_size, &gfx);
	// This will allow you to print white text on the screen
	// with the Andale TTF at a size of 40 pixels.
	// To print any kind of text, I've added a printf function:
	A.Printf(100,300,"This is an example %i", 10);
	// This prints the text "This is an example 10"
	// at position (100,300) on the screen.
}

Font A(COLOR_WHITE, 40, Andale_ttf, Andale_ttf_size, &gfx);
Font A(COLOR_WHITE, 40, "/dev_hdd0/game/NORSX0000/Andale.ttf", &gfx);

NoRSX *GFX = new NoRSX(RESOLUTION_1280x720);
Font F2("data/Sans.ttf", GFX);

IMG.LoadPNG("data/PsDev.png", &png);
u32 imgX = (GFX->width/2)-(png.width/2), imgY = (GFX->height/2)-(png.height/2);

I have just updated both NoRSX versions (PS3 and PC), have fun!

PS3 changelog 0.2.2:
PC changelog 0.0.5:

How to use: THIS LIB IS JUST FOR PSL1GHT V2. This doesn't work on v1, but it can be ported easily.

More PlayStation 3 News...
http://www.ps3news.com/ps3-hacks-jailbreak/norsx-ps3-2d-graphic-psl1ght-sdk-library-updated-ttf-support/
Educator's Guide

In this section:
- Overview
- PyCharm Edu basics
- Prerequisites
- Enabling the Course Creator features
- Creating a course
- Creating the first lesson
- Writing a task text
- Writing an exercise code for a task
- Creating answer placeholders
- Previewing a task
- Executing your exercise code
- Writing tests for a task
- Wrapping everything
- Creating the course archive to share it with students
- Quick creation of educational course
- From the student's viewpoint...
- Publishing a course to Stepik

Overview

Prerequisites

Make sure that the following prerequisites are met:
- You are working with the latest version of PyCharm Edu.
- You have a Python interpreter properly installed on your computer. Refer to the section Configuring Python Interpreter.

Enabling the Course Creator features

This step is mandatory! Unless the Course Creator features are enabled, you cannot create a course. Course Creator features are disabled by default. You can enable them in one of the following ways:
- When you first start PyCharm Edu, you see the following image: Choose the option "teacher", and then proceed with creating the project.
- If you already have an open project, select the command Enable Course Creator Features on the menu. The commands that allow you to create courses appear.

Creating a course

You can create a course in two possible ways: either from the Welcome Screen by choosing the Create New Course command, or by choosing File | Create New Course on the main menu. Specify the course settings in the Create Project dialog. As you can see in the Project view, PyCharm Edu has created some infrastructure. There is one top-level node in the Project tool window: PyCharmTutorialCourse. If you expand it, the nested elements become visible.
- Under the number 1 (PyCharmTutorialCourse), you see the files and folders pertaining to your new project. As you can see, there is not so much so far…
- The folder lesson1, denoted with the icon .
allow you to check the task execution ( ), move between tasks ( ), refresh a task ( ), view hints ( ), and edit tasks ( ). Note that these actions are available to the students.

Select Project Type dialog: refer to the section Configuring Python Interpreter for details.

...and press Shift+F6. Note that other refactorings work for the lessons and tasks too: Copy, Safe Delete.

Next, let's add an image file that should be read. PyCharm Edu makes importing such a file quite easy - just drag it to the Project tool window, and then specify the target directory in the Move dialog. This image file does not belong to the Read Images task - so, let's add it. To do so, right-click the image file, and choose the corresponding command.

Writing a task text

Now it's time to write a task description. Go to the Task Description tool window and click . You see that HTML markup appears:

<html>
Write your task text here.
<br>
</html>

(Note that you can also use Markdown to write your description; the language is configured on the page Tools | Education of the Settings/Preferences dialog.)

Instead of the existing text, type the following:

<html>
Use <b>imread</b> function to load the PyCharm Edu logo and play with it a little bit.
<br>
</html>

In the read_images.py file, PyCharm Edu shows the Add Answer Placeholder dialog box.
- Use this dialog box to specify the text that will replace the selected fragment in the students' project. If you want to show a prompt to your students, or theoretical help for a specific answer placeholder, just type the hint text. If you want to add more hints, click .
- Click OK when ready.

That's it! The task with a question is ready. If you now open the context menu of the answer placeholder in question, you will notice three more menu items: Edit Answer Placeholder, Delete Answer Placeholder and Delete All Answer Placeholders, which are quite self-explanatory.
Previewing a task

You would probably like to see how your task will be viewed by your students. To do that, right-click the task file in the Project tool window, and choose the preview command on the context menu. In the example we are working on, right-click the file read_images.py: PyCharm Edu immediately opens the file read_images.py in the editor, as it will be seen by the students.

Writing tests for a task

The test checks the following:

task_file = import_task_file()
if task_file.image_name == "PyCharm.png":
    passed()
else:
    failed("PyCharm logo filename is incorrect")

if __name__ == '__main__':
    run_common_tests()
    test_answer_placeholders()

Check your tests and exercise code

OK, let's try to execute the exercise code and tests for our example.

- Open the file write_image.py for editing and enter the following code:

from skimage import io
from skimage import data
coffee = data.coffee()
filename = "coffee.png"
io.imsave(filename, coffee)

- In the file write_image.py, create the task window. To do that, select the piece of code filename, coffee and choose Add Answer Placeholder from the context menu of the selection.
- In the Add Answer Placeholder dialog, write the text that will be shown to the students, and the hint.
- Add the image file coffee-answer.png to the root of the project.
- Run the test. To do that, click the run icon on the left.
- In the Project tool window, select the lesson Advanced, and press Alt+Insert again to create a new task named Let's swirl PyCharm.
- Rename the file task.py to swirl_pycharm.py (Shift+F6), open it for editing (F4) and type the following code:

from skimage import io
from skimage.transform import swirl
image_name = "PyCharm…

- Then, add the answer placeholder to the code rotation=0, strength=20, radius=120, order=2 of the statement swirled2 = swirl(img, rotation=0, strength=20, radius=120, order=2), with the text "Choose your parameters".
- Add the image PyCharm.png to the root of the task Let's swirl PyCharm.
Write the task text in the Task Description tool window ( ):

<html>
Use <b>imread</b> function to load the PyCharm logo and play with it a little bit.
<br>
</html>

Create a .txt file with the following text: Hope you enjoyed our tutorial! Create your own courses and have fun!

Creating the course archive to share it with students

OK, your first course is ready. What's next?
- Right-click anywhere in the Project tool window and choose Course Creator | Generate Course Archive.
- Type the archive name and location (or accept the defaults) in the dialog box that opens.

After clicking OK, PyCharm Edu notifies you that the course has been created successfully. View the archive file PyCharmTutorialCourse.zip with the actual course archive in the Project tool window. Students can use PyCharmTutorialCourse.zip to go through your course!

Quick creation of an educational course

PyCharm Edu shows the login dialog and saves these credentials on the Education page of the Settings/Preferences dialog. Note that your course is private by default. To make it public, you have to clear the checkbox Private course (invite learners by private invitation link) in the course settings.

That's it, folks... Congrats! You've created your first educational course.
https://www.jetbrains.com/help/pycharm-edu/2017.3/educator-s-guide.html
Multiple Developers, One WebLogic Instance

Kory Lasker (Greenhorn, Joined: Oct 17, 2002, Posts: 4), posted Oct 17, 2002 12:45:

How do you guys deploy applications into a WebLogic 6.1/7.0 environment with an admin server and multiple managed servers without running into namespace problems? For example, let's say I have an EJB called MyEJB. When I deploy this into WL on one of the managed server instances, I can also target MyEJB to other managed servers as well. Obviously, if two developers are trying to modify the same EJB they're going to deploy over each other's instance. Back in the 5.1 days it was easy enough just to create multiple, completely separate instances. But, with the larger footprint of a WL Domain, it becomes very difficult to justify creating multiple domains for each developer. Any ideas on how a bunch of developers should organize a WL Server?

Thanks,
Kory

Dave Landers (Ranch Hand, Joined: Jul 24, 2002, Posts: 401), posted Oct 17, 2002 15:45:

Around here where I work, we don't share servers. Just run what you need on your desktop/dev machine and you can do what you want. If I need to "bounce" the server for some reason, I can do it without messing someone else up. And if I need to test something in a cluster, I'll just start up a cluster on my desktop (using several ports). Or I can "check out" machines from the lab if I need something more powerful or if I need a networked cluster. To help with this, we have some basic domain configurations (config.xml, etc.) checked into source control and an ant-based build that "localizes" them (if necessary) for individual dev environments (machine names, databases, etc.). But of course, at some time you have to get all that work from individual developers together in one place.
So individual developers will check in their stuff to source control. Developers are also responsible for keeping their local work area up to date and built with a reasonably recent version of the whole project (check out others' stuff). Of course we have build scripts and regression (checkin) tests to help with this.

To check the integration, we have several things. One, we have an automated service that checks out, builds, and runs a simple set of tests every few hours. We also have a daily build that runs a more extensive set of regression tests. And then every few weeks we produce a kit with an installer that represents the "latest and greatest" from dev and hand this off to the QA group, where it gets more abuse.

This doesn't work for everyone, so if you have to have a single domain for some reason, you are just going to have to cooperate with each other. If you establish some sort of naming scheme and trust everyone to follow it, that should work - for example, maybe you should never deploy "MyEjb" but you should deploy "Kory_Lasker_MyEjb" and use the "Kory_Lasker_ConnectionPool", etc. Group (rather than individual) resources would get some name after the group.

Chris Mathews (Ranch Hand, Joined: Jul 18, 2001, Posts: 2712), posted Oct 18, 2002 07:02:

I agree with Dave. Making an entire team of developers use a single WebLogic instance is just handicapping their productivity. This is the reason WebLogic sells single-connection developer licenses (though it is rumored dev licenses will be free in the future).

Kory Lasker (Greenhorn), posted Oct 18, 2002 10:20:

Thanks, Chris and Dave. I took an ever-so-slightly different approach by just creating a number of different standalone WL domains on the same machine. There really is no need to use managed domains for development in our case.
This grants each developer a separate instance that can be restarted at will, yet allows me to simply configure all common resources to be available from this one machine rather than 5 or 6. In addition, I have no licensing problems to work around. The ant build and deployment scripts are configured to adapt to each developer's environment, with an eye towards a shared build in a test environment.

Thanks for the inspiration, guys!
Kory
http://www.coderanch.com/t/65815/BEA-Weblogic/Multiple-Developers-WebLogic-Instance
What's New in DFS Namespaces and DFS Replication in Windows Server 2012

Published: July 3, 2013
Updated: July 3, 2013
Applies To: Windows Server 2012

This topic describes the features that were added to DFS Namespaces (DFS or DFS-N) and DFS Replication (DFSR or DFS-R) in Windows Server 2012. For information about the features that were added in Windows Server 2012 R2, see What's New in DFS Namespaces and DFS Replication in Windows Server 2012 R2.

DFS Namespaces and DFS Replication are role services in the File and Storage Services role. DFS Namespaces enables you to group shared folders that are located on different servers into one or more logically structured namespaces. DFS Replication enables you to efficiently replicate folders (including those referred to by a DFS namespace path) across multiple servers and sites.

The following table describes the changes in DFS Namespaces and DFS Replication in Windows Server 2012; among them is support for the new reparse point tag IO_REPARSE_TAG_DEDUP. [Table not preserved in this extraction.]
http://technet.microsoft.com/en-us/library/dn270370(d=printer).aspx
Fast, typo-tolerant, fuzzy search engine for building delightful search experiences ⚡ 🔍

Typesense is a fast, typo-tolerant search engine for building delightful search experiences. ✨

Here are a couple of live demos that show Typesense in action on large datasets. 🗣️ 🎥 If you prefer watching videos, here's one where we introduce Typesense and show a walk-through.

Don't see a feature on this list? Search our issue tracker if someone has already requested it and upvote it, or open a new issue if not. We prioritize our roadmap based on user feedback, so we'd love to hear from you.

Typesense is used by a range of users across different industries. We've only recently started documenting who's using it in our Showcase. If you'd like to be included in the list, please feel free to edit SHOWCASE.md and send us a PR.

Option 1: You can download the binary packages that we publish for Linux (x86-64) and Mac.
Option 2: You can also run Typesense from our official Docker image.
Option 3: Spin up a managed cluster with Typesense Cloud.

Here's a quick example showcasing how you can create a collection, index a document and search it on Typesense. Let's begin by starting the Typesense server via Docker:

docker run -p 8108:8108 -v/tmp/data:/data typesense/typesense:0.19.0 --data-dir /data --api-key=Hu52dwsas2AdxdE

We have API clients in a couple of languages, but let's use the Python client for this example.
Install the Python client for Typesense:

pip install typesense

We can now initialize the client and create a companies collection:

import typesense

client = typesense.Client({
  'api_key': 'Hu52dwsas2AdxdE',
  'nodes': [{
    'host': 'localhost',
    'port': '8108',
    'protocol': 'http'
  }],
  'connection_timeout_seconds': 2
})

create_response = client.collections.create({
  "name": "companies",
  "fields": [
    {"name": "company_name", "type": "string"},
    {"name": "num_employees", "type": "int32"},
    {"name": "country", "type": "string", "facet": True}
  ],
  "default_sorting_field": "num_employees"
})

Now, let's add a document to the collection we just created:

document = {
  "id": "124",
  "company_name": "Stark Industries",
  "num_employees": 5215,
  "country": "USA"
}

client.collections['companies'].documents.create(document)

Finally, let's search for the document we just indexed:

search_parameters = {
  'q': 'stork',
  'query_by': 'company_name',
  'filter_by': 'num_employees:>100',
  'sort_by': 'num_employees:desc'
}

client.collections['companies'].documents.search(search_parameters)

Did you notice the typo in the query text? No big deal. Typesense handles typographic errors out-of-the-box!

A step-by-step walk-through is available on our website. This will guide you through the process of starting up a Typesense server, indexing data in it and querying the data set. Here's our official API documentation, available on our website. If you notice any issues with the documentation or walk-through, please let us know or send us a PR.

While you can definitely use cURL to interact with Typesense Server directly, we offer official API clients to simplify using Typesense from your language of choice. The API clients come built in with a smart retry strategy to ensure that API calls made via them are resilient, especially in an HA setup. If we don't offer an API client in your language, you can still use any popular HTTP client library to access Typesense's APIs directly.
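The retry idea mentioned above can be sketched in a few lines. This is only an illustration of the general pattern, not the actual client implementation — the real clients also rotate between nodes in an HA setup, and the function name and signature here are invented for the example:

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Call fn(), retrying up to `attempts` times on any exception.
    Re-raises the last error if every attempt fails."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()  # success: return the result immediately
        except Exception as exc:
            last_error = exc
            time.sleep(delay)  # back off before the next attempt
    raise last_error
```

A client method would then wrap each HTTP call, e.g. `with_retries(lambda: session.get(url))`.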
Here are some community-contributed clients and integrations: We welcome community contributions to add more official client libraries and integrations. Please reach out to us at [email protected] or open an issue on GitHub to collaborate with us on the architecture. 🙏

You can use our InstantSearch.js adapter to quickly build powerful search experiences, complete with filtering, sorting, pagination and more.

We'd love to benchmark with larger datasets, if we can find large ones in the public domain. If you have any suggestions for structured datasets that are open, please let us know by opening an issue. We'd also be delighted if you're able to share benchmarks from your own large datasets. Please send us a PR!

Elasticsearch is a large piece of software that takes a non-trivial amount of effort to set up, administer, scale and fine-tune. It offers you a few thousand configuration parameters to get to your ideal configuration. So it's better suited for large teams who have the bandwidth to get it production-ready, regularly monitor it and scale it, especially when they have a need to store billions of documents and petabytes of data (e.g. logs).

Typesense is built specifically for decreasing the "time to market" for a delightful search experience. It is a lightweight yet powerful and scalable alternative that focuses on developer happiness and experience, with a clean, well-documented API, clear semantics and smart defaults, so it just works well out of the box without you having to turn many knobs.

Elasticsearch also runs on the JVM, which by itself can be quite an effort to tune to run optimally. Typesense, on the other hand, is a single lightweight self-contained native binary, so it's simple to set up and operate.

See a side-by-side feature comparison here.

Algolia is a proprietary, hosted, search-as-a-service product that works well when cost is not an issue.
From our experience, fast-growing sites and apps quickly run into search and indexing limits, accompanied by expensive plan upgrades as they scale. Typesense, on the other hand, is an open-source product that you can run on your own infrastructure, or use our managed SaaS offering, Typesense Cloud. The open source version is free to use (besides, of course, your own infra costs). With Typesense Cloud we do not charge by records or search operations. Instead, you get a dedicated cluster and you can throw as much data and traffic at it as it can handle. You only pay a fixed hourly cost and bandwidth charges for it, depending on the configuration you choose, similar to most modern cloud platforms.

From a product perspective, Typesense is closer in spirit to Algolia than Elasticsearch. However, we've addressed some important limitations with Algolia: Algolia requires separate indices for each sort order, which count towards your plan limits. Most of the index settings, like fields to search, fields to facet, fields to group by, ranking settings, etc., are defined upfront when the index is created, vs being able to set them on the fly at query time. With Typesense, these settings can be configured at search time via query parameters, which makes it very flexible and unlocks new use cases. Typesense is also able to give you sorted results with a single index, vs having to create multiple ones. This helps reduce memory consumption.

Algolia offers the following features that Typesense does not have currently: geospatial searches, personalization and server-based search analytics. With Typesense, we intend to bridge this gap, but in the meantime, please let us know if any of these are a showstopper for your use case by creating a feature request in our issue tracker.

See a side-by-side feature comparison here.

A fresh Typesense server will consume about 30 MB of memory. As you start indexing documents, the memory use will increase correspondingly.
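To make the query-time flexibility concrete, here's a hedged sketch in Python. The parameter names follow the search call shown in the quick start above, and the `companies` collection is the one created there; the point is that opposite sort orders need only a changed query parameter, not a duplicate index:

```python
# One collection, two sort orders -- chosen per request at query time.
largest_first = {
    'q': 'stark',
    'query_by': 'company_name',
    'sort_by': 'num_employees:desc',
}

# Same query against the same collection, just a different sort_by:
smallest_first = dict(largest_first, sort_by='num_employees:asc')

# Against a live server you would then run (not executed here):
# client.collections['companies'].documents.search(largest_first)
# client.collections['companies'].documents.search(smallest_first)
```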
How much it increases depends on the number and type of fields you index. We've strived to keep the in-memory data structures lean. To give you a rough idea: when 1 million Hacker News titles are indexed along with their points, Typesense consumes 165 MB of memory. The same size of that data on disk in JSON format is 88 MB. If you have any numbers from your own datasets that we can add to this section, please send us a PR! From our experience companies are generally concerned when libraries they use are GPL licensed, since library code is directly integrated into their code and will lead to derivative work and trigger GPL compliance. However, Typesense Server is server software and we expect users to typically run it as a separate daemon, and not integrate it with their own code. GPL covers and allows for this use case generously (eg: Linux is GPL licensed). Now, AGPL is what makes server software accessed over a network result in derivative work and not GPL. And for that reason we’ve opted to not use AGPL for Typesense. Now, if someone makes modifications to Typesense server, GPL actually allows you to still keep the modifications to yourself as long as you don't distribute the modified code. So a company can for example modify Typesense server and run the modified code internally and still not have to open source their modifications, as long as they make the modified code available to everyone who has access to the modified software. Now, if someone makes modifications to Typesense server and distributes the modifications, that's where GPL kicks in. Given that we’ve published our work to the community, we'd like for others' modifications to also be made open to the community in the spirit of open source. We use GPL for this purpose. Other licenses would allow our open source work to be modified, made closed source and distributed, which we want to avoid with Typesense for the project’s long term sustainability. 
Here's more background on why we chose GPL, as described by Discourse. Many of the points mentioned there resonate with us.

Now, all of the above only apply to Typesense Server. Our client libraries are indeed meant to be integrated into our users' code, and so they use the Apache license. So in summary, AGPL is what is usually problematic for server software, and we've opted not to use it. We believe GPL for Typesense Server captures the essence of what we want for this open source project. GPL has a long history of successfully being used by popular open source projects. Our libraries are still Apache licensed. If you have specifics that prevent you from using Typesense due to a licensing issue, we're happy to explore this topic further with you. Please reach out to us. 👋

🌐 New: If you have general questions about Typesense, want to say hello or just follow along, we'd like to invite you to join our Slack Community. We also do virtual office hours every Friday. Reserve a time slot here.

If you run into any problems or issues, please create a GitHub issue and we'll try our best to help. We strive to provide good support through our issue trackers on GitHub. However, if you'd like to receive private and prioritized support, we do offer paid support options. Please reach out to us at [email protected] to sign up.

We are a lean team on a mission to democratize search and we'll take all the help we can get! If you'd like to get involved, here's information on where we could use your help: Contributing.md

If you'd like to get updates when we release new versions, click on the "Watch" button on the top and select "Releases only". GitHub will then send you notifications along with a changelog with each new release. We also post updates to our Twitter account about releases and additional topics related to Typesense. Follow us here: @typesense. 👋

🌐 New: We'll also post updates on our Slack Community.
Building with Docker

The docker build script takes care of all required dependencies, so it's the easiest way to build Typesense:

TYPESENSE_VERSION=nightly ./docker-build.sh --build-deploy-image --create-binary [--clean] [--depclean]

Building on your machine

Typesense requires the following dependencies:

./build.sh --create-binary [--clean] [--depclean]

The first build will take some time since other third-party libraries are pulled and built as part of the build process.

© 2016-2021 Typesense Inc.
https://xscode.com/typesense/typesense
from __future__ import print_function
import random

list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
        'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', ' ']

for n in range(70):
    random.shuffle(list)
    for n in list:
        print(n, end='')

Benedikt Bramböck said: There is also Typable by Ondrej Jób.

Miguel Sousa constructed this one. If you click on Options, you can even change scripts.

I've coded a somewhat related CLI script that you feed UFO and TXT files; it outputs strings of a specific length, ideal for finding those nice specimen words of different weights with the same font size and length. Aside from reading the available glyphs from the UFO and filtering the possible words by that, it also has options for excluding/including letters and letter combinations.
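For reference, here is a hypothetical Python 3 take on the snippet above, with the shadowed built-in `list` renamed and the output built as lines; the function name is my own invention, not from the original post:

```python
import random
import string

def shuffled_lines(count):
    """Return `count` lines, each a random permutation of a-z plus a space."""
    alphabet = list(string.ascii_lowercase) + [' ']
    lines = []
    for _ in range(count):
        random.shuffle(alphabet)
        lines.append(''.join(alphabet))
    return lines

print('\n'.join(shuffled_lines(70)))
```

Each line is a permutation of the same 27 characters, so every glyph appears exactly once per line, which is handy for dummy specimen text.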
http://typedrawers.com/discussion/comment/35013
IRC log of svg on 2009-11-23
Timestamps are in UTC.
20:01:26 [RRSAgent] RRSAgent has joined #svg
20:01:26 [RRSAgent] logging to
20:01:28 [trackbot] RRSAgent, make logs public
20:01:28 [Zakim] Zakim has joined #svg
20:01:30 [trackbot] Zakim, this will be GA_SVGWG
20:01:30 [Zakim] ok, trackbot; I see GA_SVGWG()3:00PM scheduled to start now
20:01:31 [trackbot] Meeting: SVG Working Group Teleconference
20:01:31 [trackbot] Date: 23 November 2009
20:02:00 [Zakim] GA_SVGWG()3:00PM has now started
20:02:06 [Zakim] +??P2
20:02:13 [ed] Zakim, ?? is me
20:02:13 [Zakim] +ed; got it
20:03:21 [Zakim] +??P5
20:03:32 [Zakim] +Shepazu
20:03:34 [jwatt] Zakim, ??P5 is me
20:03:34 [Zakim] +jwatt; got it
20:03:54 [Zakim] +Shepazu.a
20:04:10 [ChrisL] ChrisL has joined #svg
20:04:16 [shepazu] \me zakim, drop shepazu
20:04:27 [shepazu] zakim, drop shepazu
20:04:27 [Zakim] Shepazu is being disconnected
20:04:28 [Zakim] -Shepazu
20:04:32 [jwatt] Zakim, who's here?
20:04:32 [Zakim] On the phone I see ed, jwatt, Shepazu.a
20:04:33 [Zakim] On IRC I see ChrisL, Zakim, RRSAgent, shepazu, trackbot, ed, jwatt, karl, anthony, ed_work, eseidelDesk
20:06:16 [Zakim] +Chris_Lilley
20:06:51 [ChrisL] zakim, who is noisy?
20:07:03 [Zakim] ChrisL, listening for 10 seconds I heard sound from the following: jwatt (20%), Chris_Lilley (41%)
20:07:49 [shepazu] scribenick: shepazu
20:08:04 [shepazu] chair: ed
20:08:22 [shepazu] Topic: SVG 1.1 2nd Ed
20:08:44 [ed]
20:09:03 [ed] 12 open issues, 22 open actions
20:10:52 [shepazu] issue-2013?
20:10:52 [trackbot] ISSUE-2013 -- Percentages in clipPath/pattern/filter/mask unintuitive -- OPEN
20:10:52 [trackbot]
20:10:52 [ChrisL] issue-2013?
20:10:52 [trackbot] ISSUE-2013 -- Percentages in clipPath/pattern/filter/mask unintuitive -- OPEN
20:10:53 [trackbot]
20:11:55 [ed] moved to SVG Core 2.0
20:11:58 [shepazu] Resolution: we will move this to SVG 2.0
20:14:01 [shepazu] issue-2017?
20:14:01 [trackbot] ISSUE-2017 -- Find sane values for getSubStringLength and selectSubString -- OPEN
20:14:01 [trackbot]
20:14:47 [ed] ACTION-2325?
20:14:47 [trackbot] ACTION-2325 -- Doug Schepers to propose wording and examples for ISSUE-2107 -- due 2008-10-30 -- CLOSED
20:14:47 [trackbot]
20:16:01 [ed]
20:16:35 [ed]
20:18:13 [Zakim] +[IPcaller]
20:18:35 [anthony] Zakim, [IP is me
20:18:35 [Zakim] +anthony; got it
20:19:32 [shepazu] ed: heycam didn't see a strong argument either way
20:19:53 [shepazu] ... what should we do with this? close it or take it up again?
20:21:58 [shepazu] jwatt: acid3 depends on this behavior?
20:22:00 [shepazu] ed: it did at one time
20:22:03 [shepazu] ... we did change the behavior in the spec for the better
20:22:10 [ed]
20:23:20 [shepazu] shepazu: we might look at this more closely in SVG 2.0
20:23:24 [shepazu] ed: right
20:23:24 [ChrisL] i think its closed, for 1.1
20:23:40 [shepazu] jwatt: the current errata makes sense to me
20:23:54 [shepazu] resolution: close the issue
20:24:39 [shepazu] issue-2071?
20:24:40 [trackbot] ISSUE-2071 -- potential security hole involving pointer-events, filters, foreignObject, cross-origin IFRAMEs, and elementFromPoint -- OPEN
20:24:40 [trackbot]
20:24:55 [shepazu] resolution: move to SVG 2.0
20:25:03 [shepazu] issue-2113?
20:25:03 [trackbot] ISSUE-2113 -- animate-elem-35 -- OPEN
20:25:03 [trackbot]
20:25:13 [ed] ISSUE-2213?
20:25:13 [trackbot] ISSUE-2213 -- Some issues in the definition of suspendRedraw/unsuspendRedraw/forceRedraw -- OPEN
20:25:13 [trackbot]
20:25:47 [shepazu] resolution: defer to SVG 2.0
20:26:21 [shepazu] issue-2217?
20:26:21 [trackbot] ISSUE-2217 -- How units are resolved on an SVGLength is not defined -- RAISED
20:26:21 [trackbot]
20:27:34 [shepazu] Resolution: move to SVG 2.0
20:27:45 [shepazu] issue-2219?
20:27:45 [trackbot] ISSUE-2219 -- Missing test coverage for SVG 1.1 properties -- RAISED
20:27:45 [trackbot]
20:28:37 [shepazu] jwatt: we should put this off until we decide on the new test format
20:29:27 [Zakim] -Chris_Lilley
20:30:02 [shepazu] Resolution: resolve in SVG 2.0
20:30:40 [Zakim] +Chris_Lilley
20:30:56 [shepazu] issue-2259?
20:30:56 [trackbot] ISSUE-2259 -- Inconsistent use of <uri> symbol -- RAISED
20:30:56 [trackbot]
20:32:03 [ed]
20:32:11 [shepazu] ed: didn't we resolve this in second edition?
20:32:37 [shepazu] ChrisL: basically, this is rolling in changes we made in SVGT1.2
20:33:09 [shepazu] ChrisL: it should be easy to do
20:34:08 [shepazu] Action: ChrisL to make funcURI consistent, and update tests
20:34:09 [trackbot] Created ACTION-2697 - Make funcURI consistent, and update tests [on Chris Lilley - due 2009-11-30].
20:35:18 [ed]
20:35:48 [ChrisL] issue-2263?
20:35:48 [trackbot] ISSUE-2263 -- The attributes on the SVGViewSpec interface are underspecified -- RAISED
20:35:48 [trackbot]
20:38:22 [shepazu] shepazu: we will have to reexamine this in the context of the new SVG DOM API, anyway
20:38:31 [shepazu] Resolution: defer to SVG 2.0
20:38:40 [shepazu] issue-2294?
20:38:40 [trackbot] ISSUE-2294 -- Adding an animated length attribute into a baseval length list -- RAISED
20:38:40 [trackbot]
20:39:56 [shepazu] jwatt: I was going to write an email about this...
20:40:25 [shepazu] ... maybe we need some custom error about removing items from the original list (readonly)
20:40:47 [shepazu] ... maybe we should have a "copy" method?
20:41:04 [shepazu] ... let's address this in the new DOM API
20:41:15 [shepazu] Resolution: move to SVG 2.0
20:41:22 [shepazu] issue-2299?
20:41:22 [trackbot] ISSUE-2299 -- Text on a path layout rules unclear wrt startpoint-on-the-path and text-anchor -- RAISED
20:41:22 [trackbot]
20:42:13 [shepazu] ChrisL: I don't think they are contradictory, they seem to be saying the same thing in different ways
20:42:37 [shepazu] ... it's using "start point" for 2 different things...
20:43:03 [shepazu] ... it's talking about shifting the initial start point
20:43:37 [shepazu] ed: it's ambiguous enough that implementations all do different things
20:43:55 [shepazu] ... but I'm not sure it's critical for SVG 1.1 2nd ed.
20:47:33 [shepazu] shepazu: I want this fixed because it's important... could we start a 3rd edition errata in addition to SVG 2.0?
20:47:49 [shepazu] ChrisL: we could do... we need to test all of this better, too
20:49:43 [shepazu] ed: one way to start off easy is to have a straight line as the textPath for tests that describe the start point
20:49:48 [shepazu] ChrisL: good idea
20:51:22 [shepazu] resolution: keep as SVG 1.1 3rd edition errata, but duplicate for SVG 2.0, to make sure it's addressed in both
20:52:21 [shepazu] issue-2301?
20:52:21 [trackbot] ISSUE-2301 -- Text on a path layout rules unclear wrt startpoint-on-the-path and text-anchor (svg2) -- RAISED
20:52:21 [trackbot]
20:53:18 [shepazu] ed: 23 pending actions
20:53:29 [shepazu] s23 pending actions/23 pending actions left
20:53:37 [shepazu] s/23 pending actions/23 pending actions left
20:53:58 [shepazu] s/23 pending actions/23 pending actions left/g
20:54:14 [shepazu] action-2077?
20:54:14 [trackbot] ACTION-2077 -- Erik Dahlström to test implementations for percentage values in clipPath, etc. -- due 2008-07-03 -- OPEN
20:54:14 [trackbot]
20:54:44 [shepazu] action-2203?
20:54:44 [trackbot] ACTION-2203 -- Doug Schepers to add to the 1.1 Full errata that the initial value for the root overflow property is scroll rather than hidden -- due 2008-09-30 -- OPEN
20:54:44 [trackbot]
20:55:34 [shepazu] action-2404?
20:55:35 [trackbot] ACTION-2404 -- Doug Schepers to add errata item for root overflow -- due 2009-01-22 -- CLOSED
20:55:35 [trackbot]
20:57:59 [shepazu] Topic: list activity
20:58:50 [shepazu] ed: discussion about transforms, getIntersectionList, DOM constructors, image clarification, z-index
20:59:35 [shepazu] ChrisL: I wonder if alex addressed this to their satisfaction?
20:59:46 [shepazu] ... a good example might help
21:00:44 [ChrisL] suppose a filter brings in a greyscale image with feimage, then we do an rgb fecomponenttransfer. that should work
21:00:50 [ChrisL] it wont give an error
21:01:07 [ChrisL] so that is what "as it its promoted to RGBA" means
21:01:14 [shepazu] ChrisL: when we say, "implement as if...", then things that fall out of the model still have to work
21:02:14 [shepazu] ChrisL: action to respond to image clarification email with concrete example
21:02:35 [shepazu] Action: ChrisL to respond to image clarification email with concrete example
21:02:35 [trackbot] Created ACTION-2698 - Respond to image clarification email with concrete example [on Chris Lilley - due 2009-11-30].
21:03:31 [shepazu] ed: anthony addressed the transforms issues
21:04:31 [shepazu] ed: Dr. Olaf pointed to his older email... did we address this?
21:04:36 [shepazu] anthony: not sure
21:06:11 [shepazu] ChrisL: Dr. Olaf raised a good point, CSS didn't consider animation when they specified angles, which need to be normalized, and since they now have animation this affects them too... we should raise this with the CSS WG
21:08:29 [shepazu] anthony: I'll take a crack at replying to Dr. Olaf
21:11:12 [shepazu] Topic: DOM constructors
21:11:34 [anthony] scribe: anthony
21:11:58 [anthony] DS: Just started off with basically describing
21:12:05 [anthony] ... what we had proposed
21:12:14 [anthony] ... because he didn't read the proposal page
21:12:25 [anthony] ... just ready the 'what sort of problems we have' page
21:13:07 [anthony] ... he also proposed that we use the innerHTML method for larger document fragments
21:13:40 [anthony] ... and Brois said that innerHTML has been optimised because browsers have to parse already
21:13:47 [anthony] ... so that's one thing they optermise for
21:14:40 [anthony] ... Boris seemed to be have mixed responses about sending in a property tag for element constructors would help the performance
21:15:00 [shepazu] s/property tag/property bag object/
21:16:03 [anthony] ... I definitely under the impression from taking to implementers that setting attributes individually was a huge performance hit
21:16:53 [shepazu] ChrisL: if it's done right, it should be a performance gain, and is never a performance hit
21:16:57 [anthony] CL: It can be an atomic operation
21:17:07 [shepazu] shepazu: and it's much nicer for authoring
21:17:07 [anthony] DS: It's much nicer for authors
21:18:25 [shepazu] shepazu: Jeff may not have considered the issue of the namespace of an element when he criticized using createElement as an element method
21:19:31 [shepazu] shepazu: jwatt, what's your impression?
21:20:11 [shepazu] jwatt: boris seemed to refute some of the performance points that Jeff pointed to
21:20:33 [shepazu] ... but it's about ease of authoring, not so much performance
21:21:09 [shepazu] ... boris says it's hard to predict performance
21:21:14 [ed]
21:21:58 [shepazu] ed: have we verified that innerHTML would work with SVG?
21:21:59 [shepazu] ... I'm reading it, and it doesn't seem great for SVG
21:22:38 [shepazu] jwatt: I can test it, don't know offhand
21:23:06 [shepazu] action: jwatt to test innerHTML for applicability to SVG
21:23:06 [trackbot] Created ACTION-2699 - to test innerHTML for applicability to SVG [on Jonathan Watt - due 2009-11-30].
21:24:54 [shepazu] shepazu: speaking of this, what's the standard way of doing ASV's printNode?
21:25:08 [shepazu] jwatt: I have a wrapper
21:25:14 [jwatt]
21:25:41 [shepazu] ed: maybe for innerHTML, it needs to be put into the foreign-content mode...
21:26:01 [shepazu] ... otherwise you have to wrap everything in <svg>
21:28:13 [ChrisL] Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.3a1pre) Gecko/20091122 Minefield/3.7a1pre
21:28:14 [ed] <html><svg id="foo">...</svg><script>$("foo").innerHTML="<rect width='100' height='100' fill='green'/></html>
21:29:58 [shepazu] ed: anything that conflicts with HTML element names might cause a problem
21:33:20 [ed] so test <font>, <video>, <audio>, <textArea>... any others?
21:34:09 [shepazu] jwatt: about the F2F... I'm probably not coming
21:34:39 [shepazu] ... mozilla thinks 4 meetings a year is too frequent
21:35:06 [shepazu] shepazu: actually, I think it's 3 meetings a year now
21:37:28 [shepazu] shepazu: maybe we should examine who would be able to attend
21:37:51 [shepazu] ed: what about relocating it?
21:46:33 [shepazu] trackbot, end telcon
21:46:33 [trackbot] Zakim, list attendees
21:46:33 [Zakim] As of this point the attendees have been ed, Shepazu, jwatt, Chris_Lilley, [IPcaller], anthony
21:46:34 [trackbot] RRSAgent, please draft minutes
21:46:34 [RRSAgent] I have made the request to generate trackbot
21:46:35 [trackbot] RRSAgent, bye
21:46:35 [RRSAgent] I see 3 open action items saved in :
21:46:35 [RRSAgent] ACTION: ChrisL to make funcURI consistent, and update tests [1]
21:46:35 [RRSAgent] recorded in
21:46:35 [RRSAgent] ACTION: ChrisL to respond to image clarification email with concrete example [2]
21:46:35 [RRSAgent] recorded in
21:46:35 [RRSAgent] ACTION: jwatt to test innerHTML for applicability to SVG [3]
21:46:35 [RRSAgent] recorded in
21:46:36 [Zakim] -Chris_Lilley
21:46:40 [Zakim] -ed
21:46:44 [Zakim] -jwatt
http://www.w3.org/2009/11/23-svg-irc
Below I will paste my code. My problem is that when I run this, I get to input an int, and if it is correct it will output that I was correct, but after that it outputs something that I don't understand, which is "hack2@1f24bbbf". Any help will be greatly appreciated, as I'm just beginning to learn Java.

import java.util.Scanner;

public class hack2 {

    public hack2 mathequation1;
    int answer = 10;
    int numc;
    int counter = 0;
    Scanner job = new Scanner(System.in);

    {
        while (counter != answer) {
            numc = job.nextInt();
            if (numc != answer) {
                System.out.println("You are wrong");
                counter = numc;
            } else {
                System.out.println("You are correct");
            }
            break;
        }
    }
}
http://www.roseindia.net/answers/viewqa/Java-Beginners/19303-While-loop-break-causing-unwanted-output.html
Many of you will use the SD card to store the sound samples and image data needed by your E100 program. The best way to store this data on the SD card is as a binary data file. This web page describes how to create and manipulate such data files.

You will probably want to input sound and image data as 32-bit signed numbers. Files that store numbers directly like this are called binary files. You are probably more accustomed to plain-text files, which store ASCII representations of numbers.

To understand the difference between binary data and ASCII representations of numbers, consider two ways to write the number 12 to a file. You could write 12 to a file as two characters '1' and '2'. Each of these two characters would be written by its ASCII value. '1' is represented in ASCII by the number 49, and '2' is represented in ASCII by the number 50. To read this, your E100 program would have to read two numbers (49 and 50), translate each number from ASCII to a decimal digit, then compute the number that those two characters represented. This is unnecessarily cumbersome.

Instead of writing the number 12 as two characters (each represented by a number), you could simply write the number 12 as binary data to a binary file. Then your E100 program could read the single number 12 from the SD card directly into a variable.

You will not be able to view or edit binary files in a normal text editor. Text editors are meant to view plain-text files, so they interpret each number as an ASCII representation of a character. For example, if you used an editor to view a binary file that contained the number 12, your editor would try to display the character represented by ASCII value 12 (which is the line feed character and is unprintable). Instead, you will need to view the file with a program that is intended to handle binary data, such as hexdump or od.
For example, you could view (in decimal) each 4-byte word of data.bin by running:

od -Ad -t d4 data.bin

You will probably need to manipulate binary data files, e.g. to combine sound samples or to add header information such as the number of samples. To manipulate binary data, you will probably need to write your own programs, e.g., in C++ or MATLAB.

In C++, you can read and write binary data through the fstream interface's read and write methods. For example, the following program writes the 32-bit number 12 as binary data to a file.

#include <fstream>
#include <cstdint>
using namespace std;

int main() {
    ofstream file;
    int32_t number = 12;
    file.open("datafile", ios::out | ios::binary);
    file.write( (char *) &number, sizeof(number) );
    return 0;
}

In MATLAB, you can use the fread and fwrite functions to read and write binary data (remember that this is 32-bit signed data, so use the 'int32' precision). For example, the following program writes the 32-bit number 12 as binary data to a file.

number = 12;
fid = fopen('datafile', 'w');
fwrite(fid, number, 'int32');
fclose(fid);

The Unix utility cat can concatenate multiple files into a single file. E.g., to concatenate three files file1.bin, file2.bin, and file3.bin into a single, combined file all.bin, run the following command:

cat file1.bin file2.bin file3.bin > all.bin
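If you prefer Python for these manipulation scripts, the standard struct module can produce the same kind of 4-byte records. This sketch is an illustration only (the original page shows C++ and MATLAB), and it assumes little-endian byte order; check which byte order your target expects:

```python
import struct

# Write the 32-bit signed number 12 as 4 bytes of binary data
# ('<i' means little-endian 32-bit signed int), then read it back.
with open('datafile', 'wb') as f:
    f.write(struct.pack('<i', 12))

with open('datafile', 'rb') as f:
    (number,) = struct.unpack('<i', f.read(4))

print(number)  # prints 12
```

struct.pack/unpack also handle whole arrays of samples at once, e.g. struct.pack('<3i', 1, 2, 3) for three consecutive words.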
https://web.eecs.umich.edu/~pmchen/engr100/binary_data.html
C Run-Time Libraries

The following table lists the release versions of the C run-time library files, along with their associated compiler options and environment variables.

Prior to Visual C++ 4.2, the C run-time libraries contained the iostream library functions. In Visual C++ 4.2 and later, the old iostream library functions have been removed from LIBC.LIB, LIBCMT.LIB, and MSVCRT.LIB and placed in their own libraries: LIBCI.LIB, LIBCIMT.LIB, and MSVCIRT.LIB. The new iostream functions, as well as many other new functions, exist in the Standard C++ libraries: LIBCP.LIB, LIBCPMT.LIB, and MSVCPRT.LIB.

The Standard C++ library and the old iostream library are incompatible, and only one of them can be linked with your project. See Port to the Standard C++ Library for details:

- If you include a Standard C++ library header in your code, a Standard C++ library will be linked in automatically by Visual C++ at compile time. For example:

  #include <ios>

- If you include an old iostream library header, an old iostream library will be linked in automatically by Visual C++ at compile time. For example:

  #include <ios.h>

Note that headers from the Standard C++ library and the old iostream library cannot be mixed.

When you specify one of the associated compiler options, the corresponding environment variables are automatically defined. To build a debug version of your application, the _DEBUG flag must be defined and the application must be linked with a debug version of one of these libraries. For more information about using the debug versions of the library files, see C Run-Time Debug Libraries.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-6.0/aa272081(v=vs.60)?redirectedfrom=MSDN
Working with File-like Objects¶

The first argument passed to a FileType or Metadata constructor can either be a file name or a file-like object, such as BytesIO, and mutagen will figure out what to do.

MP3("myfile.mp3")
MP3(myfileobj)

If for some reason the automatic type detection fails, it's possible to pass them using a named argument, which skips the type guessing.

MP3(filename="myfile.mp3")
MP3(fileobj=myfileobj)

Mutagen expects the file offset to be at 0 for all file objects passed to it.

The file-like object has to implement the following interface (it's a limited subset of real buffered file objects and StringIO/BytesIO):

class IOInterface(object):
    """This is the interface mutagen expects from custom file-like objects.

    For loading, read(), tell() and seek() have to be implemented.
    "name" is optional.

    For saving/deleting, write(), flush() and truncate() have to be
    implemented in addition. fileno() is optional.
    """

    # For loading

    def tell(self):
        """Returns the current offset as int. Always >= 0.

        Raises IOError in case fetching the position is for some reason
        not possible.
        """

        raise NotImplementedError

    def read(self, size=-1):
        """Returns 'size' amount of bytes or less if there is no more data.
        If no size is given all data is returned. size can be >= 0.

        Raises IOError in case reading failed while data was available.
        """

        raise NotImplementedError

    def seek(self, offset, whence=0):
        """Move to a new offset either relative or absolute. whence=0 is
        absolute, whence=1 is relative, whence=2 is relative to the end.

        Any relative or absolute seek operation which would result in a
        negative position is undefined and that case can be ignored
        in the implementation.

        Any seek operation which moves the position after the stream
        should succeed. tell() should report that position and read()
        should return an empty bytes object.

        Returns Nothing.

        Raises IOError in case the seek operation wasn't possible.
        """

        raise NotImplementedError

    # For loading, but optional

    @property
    def name(self):
        """Should return text. For example the file name.

        If not available the attribute can be missing or can return an
        empty string. Will be used for error messages and type detection.
        """

        raise NotImplementedError

    # For writing

    def write(self, data):
        """Write data to the file.

        Returns Nothing.

        Raises IOError.
        """

        raise NotImplementedError

    def truncate(self, size=None):
        """Truncate to the current position or size if size is given.
        The current position or given size will never be larger than
        the file size.

        This has to flush write buffers in case writing is buffered.

        Returns Nothing.

        Raises IOError.
        """

        raise NotImplementedError

    def flush(self):
        """Flush the write buffer.

        Returns Nothing.

        Raises IOError.
        """

        raise NotImplementedError

    # For writing, but optional

    def fileno(self):
        """Returns the file descriptor (int) or raises IOError if there
        is none. Will be used for mmap if available.
        """

        raise NotImplementedError

Gio Example Implementation¶

The following implements a file-like object using PyGObject and Gio. It depends on the giofile Python library.

import mutagen
import giofile
from gi.repository import Gio

gio_file = Gio.File.new_for_uri("")
cancellable = Gio.Cancellable.new()

with giofile.open(gio_file, "rb", cancellable=cancellable) as gfile:
    print(mutagen.File(gfile).pprint())

$ python example.py
Ogg Opus, 228.11 seconds (audio/ogg)
ENCODER=opusenc from opus-tools 0.1.5
artist=Ehren Starks
title=Paper Lights
album=Lines Build Walls
date=2005-09-05
copyright=Copyright 2005 Ehren Starks
license=
organization=magnatune.com
https://mutagen.readthedocs.io/en/latest/user/filelike.html
Compare two dictionaries to find the added and modified entries

#include <sys/strm.h>

strm_dict_t* strm_dict_compare(strm_dict_t *newdict, strm_dict_t const *olddict)

This function compares two dictionaries to find the added and modified entries. It creates a replica of the one specified by newdict, but removes all entries that have matching keys and values in olddict. In other words, if olddict is the older version of some dictionary and newdict is the newer version, the resulting dictionary contains all entries that were added or modified, but not the ones that were left alone or deleted.

Note that the function destroys the newdict handle, even on failure. If the same handle is passed in for both arguments, the function destroys it and returns a new handle to an empty dictionary. Otherwise, the olddict handle is preserved.

Returns:

On success, a handle to the dictionary containing the result of the comparison. On failure, a null pointer.
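As an illustration only (this is not QNX code, and the function name and sample keys are invented), the comparison semantics can be modeled in plain Python: entries of the new dictionary whose key/value pair also appears in the old one are dropped, leaving only additions and modifications:

```python
def dict_compare(newdict, olddict):
    # Keep entries of newdict that are absent from olddict or whose
    # value changed; deletions are not reported, mirroring the
    # behaviour described for strm_dict_compare().
    return {k: v for k, v in newdict.items()
            if olddict.get(k) != v}

old = {"bitrate": "128", "codec": "aac", "title": "x"}
new = {"bitrate": "256", "codec": "aac", "artist": "y"}
print(dict_compare(new, old))  # → {'bitrate': '256', 'artist': 'y'}
```

Unlike the C function, this sketch does not consume its arguments; the handle-destruction behaviour is specific to the QNX API.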
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.libstrm/topic/strm_dict_compare.html
GraphQL Modules — Feature based GraphQL Modules at scale

Today we are happy to announce that we are open sourcing a framework we've been using for the past couple of months in production — GraphQL Modules!

Yet another framework? Well, kind of. GraphQL Modules is a set of extra libraries, structures and guidelines around the amazing Apollo Server 2.0. You can and should use them as completely separate packages — each one is good for different use cases — but all together they represent our current philosophy of building large scale GraphQL servers. We would love to get feedback from the Apollo team, and if they wish to use those ideas and integrate them into Apollo Server we would love to contribute. That's why we've developed it as a set of independent tools under a single monorepo.

The basic concept behind GraphQL Modules is to separate your GraphQL server into smaller, reusable, feature based parts.

A basic and initial implementation of a GraphQL server usually includes:

A more advanced implementation will usually use a context to inject things like data models, data sources, fetchers, etc, like Apollo Server 2.0 provides us with:

Usually, for simple use cases, the example above will do. But as applications grow, their code and schematic relationships become bigger and more complex, which can make schema maintenance a hard and agonizing thing to work with.

Some of the more old school MVC frameworks add a few more layers after the resolvers layer, but most of them just implement separation based on technical layers: controllers, models, etc.

We argue that there is a better approach to writing your GraphQL schema and implementation. We believe you should separate your GraphQL schema by modules, or features, and include anything related to a specific part of the app under a "module" — which is just a simple directory. Each one of the GraphQL Modules libraries will help you in the gradual process of doing that.
The modules are defined by their GraphQL schema — so we've taken the "GraphQL First" approach led by Apollo and combined it with classic modularization tooling to create new ways of writing GraphQL servers!

The GraphQL Modules toolset has tools to help with:

- Schema Separation — declare your GraphQL schema in smaller pieces, which you can later move and reuse.
- Tools designed to create independent modules — each module is completely independent and testable, and can be shared with other applications or even open sourced if needed.
- Resolvers Composition — with GraphQL Modules you can write your resolver as you wish, and let the app that hosts the module wrap the resolvers and extend them. It's implemented with a basic middleware API, but with more flexibility. That means that you can, for example, implement your entire module without knowing about the app's authentication process, and assume that currentUser will be injected for you by the app.
- A clear, gradual path from very simple and fast single-file modules, to scalable multi-file, multi-team, multi-repo, multi-server modules.
- A scalable structure for your GraphQL servers — managing multiple teams and features, multiple microservices and servers, and more advanced tools, which you can choose to include when your schema gets into massive scale:
- Communication Bridge — we also let you send custom messages with payload between modules — that means you can even run modules in different microservices and easily interact between them.
- Dependency Injection — implement your resolvers, and later, only when you see fit, improve their implementation by gradually introducing dependency injection. It also includes a richer toolset around testing and mocking.
So when the User module is loaded, the type User is created, and when the Chat module is loaded, the type User is extended with more fields.

import { GraphQLModule } from '@graphql-modules/core';
import { UserModule } from './user-module';
import { ChatModule } from './chat-module';

export const appModule = new GraphQLModule({
  imports: [UserModule, ChatModule],
});

import { GraphQLModule } from '@graphql-modules/core';
import gql from 'graphql-tag';

export const ChatModule = new GraphQLModule({
  typeDefs: gql`
    # Query declared again, adding only the part of the schema that is relevant
    type Query {
      myChats: [Chat]
    }

    # User declared again - extends any other "User" type loaded into the appModule
    type User {
      chats: [Chat]
    }

    type Chat {
      id: ID!
      users: [User]
      messages: [ChatMessage]
    }

    type ChatMessage {
      id: ID!
      content: String!
      user: User!
    }
  `,
  resolvers: {
    Query: {
      myChats: (root, args, { getChats, currentUser }) => getChats(currentUser),
    },
    User: {
      // This module implements only the part of `User` it adds
      chats: (user, args, { getChats }) => getChats(user),
    },
  },
});

import { appModule } from './modules/app';
import { ApolloServer } from 'apollo-server';

const { schema, context } = appModule;

const server = new ApolloServer({
  schema,
  context,
  introspection: true,
});

server.listen();

import { GraphQLModule } from '@graphql-modules/core';
import gql from 'graphql-tag';

export const UserModule = new GraphQLModule({
  typeDefs: gql`
    type Query {
      me: User
    }

    # This is a basic User, with just the basics of a user object
    type User {
      id: ID!
      username: String!
      email: String!
    }
  `,
  resolvers: {
    Query: {
      me: (root, args, { currentUser }) => currentUser,
    },
    User: {
      id: user => user._id,
      username: user => user.username,
      email: user => user.email.address,
    },
  },
});

You can and should adopt GraphQL Modules part by part, and you can try it now with your existing GraphQL server.

What does a "module" contain?
- Schema (types declaration) — each module can define its own Schema, and can extend other schema types (without explicitly providing them). - Thin resolvers implementation — each module can implement its own resolvers, resulting in thin resolvers instead of giant files. - Providers — each module can have its own Providers, which are just classes/values/functions that you can use from your resolvers. Modules can load and use providers from other modules. - Configuration — each module can declare a strongly-typed config object, which the consuming app can provide it with. - Dependencies — modules can be dependent on other modules (by its name or its GraphQLModuleinstance, so you can easily create an ambiguous dependency that later could be changed). GraphQL Modules libraries GraphQL Modules is built as a toolkit, with the following tools, which you should individually and gradually adopt: @graphql-modules/epoxy - That will probably be the first tool you want to introduce into your server. The first step into organizing your server in a feature based structure - Epoxy is a small util that manages the schema merging. it allow you to merge everything in your schema, starting from types to enums, unions, directives and so on. - This is an important feature of GraphQL Modules — you can use it to separate your GraphQL types to smaller pieces and later on combine them into a single type. - We took the inspiration from merge-graphql-schemas, and added some features on top of it to allow custom merging rules to make it easier to separate your schema. @graphql-modules/core - Resolvers Composition — manages the app’s resolvers wrapping - Context building — each module can inject custom properties to the schema, and other modules can use it (for example, auth module can inject the current user, and other modules can use it) - Dependency injection and module dependencies management — when you start, there is no need of using DI is your server. 
But when your server gets big enough, with a large number of modules that depend on each other, only then does DI become a very helpful thing that actually simplifies your code a lot. USE ONLY WHEN NECESSARY ;)

You can find more tooling at your disposal:

- @graphql-modules/sonar — a small util that helps you find GraphQL schema and resolver files and include them.
- @graphql-modules/logger — a small logger, based on winston 3, which you can easily use in your app.

First thing, don't go all in! Start by simply moving your code into feature-based folders and structures with your existing tools. Then head over and check out our tools, and use them only when you see that they solve a real problem for you (for us they have)!

Also check out the repo's README and a number of example apps.

You probably have many questions — how does this compare to other tools, how do you use these libraries with X, and so on. We will publish a series of blog posts in the coming weeks that dive deep into each of the design decisions made here, and we want to hear your thoughts and questions — please comment here or on the GitHub repository!

Going to GraphQL Summit? I will be there and would love to get your questions and feedback on behalf of our team.

All these tools were built by a passionate group of individual open source developers, otherwise known as The Guild.

Below is a section of deeper-dive thoughts that we will publish separate posts about in the coming weeks:

Core concepts and deep dive

Modularizing a schema

Everyone is talking about Schema Stitching and GraphQL Bindings — where do those fit into the picture?

Schema stitching is an amazing ability and concept, which helps you merge separate GraphQL servers into a single endpoint and opens up a lot of exciting use cases. But with all the excitement, we've missed something much more basic — sometimes we still want to work on a single logical server, and we just want to separate the code according to features.
We want to be able to do most of the merging work at build time, and only if really necessary, do the rest of the merging at runtime as a last resort. We want to split the code between separate teams and even create reusable modules that define their external APIs with a GraphQL schema. Those modules can be npm packages, microservices or just separate folders inside a single server.

Separating your schema into smaller parts is easier when you are dealing with typeDefs and resolvers — it's more readable and easier to understand. We also wanted to allow developers to extend only specific types, without creating an entire schema. With a GraphQL schema, you have to specify at least one field under the Query type, which is something we did not want to enforce on our users. We see our approach as complementary to Schema Stitching, and it works together with it.

Feature-based implementation

One of the most important things in GraphQL Modules' approach is the feature-based implementation. Nowadays, most frameworks separate layers based on their role — controllers, data access and so on. GraphQL Modules takes a different approach — separate into modules based on your server's features, and let each module manage its own layers within its implementation.

It's easier to think about apps in a modular way. For example: your awesome app needs basic authentication, user management, user profiles, user galleries and a chat. Each one of these could be a module that implements its own GraphQL schema and its own logic, and it could depend on other modules to provide some of the logic.

Here's a simple example of a GraphQL schema as we described:

But if we think of apps in terms of features and then separate the schema by module, the module separation will look like so:

This way, each module declares only the part of the schema that it contributes, and the complete schema is a representation of all the merged type definitions.
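To make the merging idea concrete, here is a toy sketch. This is NOT Epoxy's actual implementation: types are modeled as plain objects mapping type names to field maps, purely for illustration, and the module contents are invented to mirror the User/Chat example above.

```javascript
// Toy sketch of feature-based type merging: each module contributes a
// partial definition of a type, and merging unions the fields per type name.
function mergeTypeDefs(...moduleTypes) {
  const merged = {};
  for (const types of moduleTypes) {
    for (const [typeName, fields] of Object.entries(types)) {
      // Union the fields contributed by each module for the same type name
      merged[typeName] = { ...(merged[typeName] || {}), ...fields };
    }
  }
  return merged;
}

// The User module declares the basics of a user
const userModuleTypes = {
  Query: { me: 'User' },
  User: { id: 'ID!', username: 'String!', email: 'String!' },
};

// The Chat module re-declares User, adding only what it needs
const chatModuleTypes = {
  Query: { myChats: '[Chat]' },
  User: { chats: '[Chat]' },
  Chat: { id: 'ID!', users: '[User]', messages: '[ChatMessage]' },
};

const schema = mergeTypeDefs(userModuleTypes, chatModuleTypes);
// schema.User now contains id, username, email AND chats
```

The real merge operates on GraphQL AST nodes and also handles enums, unions and directives, but the principle is the same: the app sees the union of what every module contributes.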
Modules can also depend on, import, extend and customize the contents of other modules (for example, the User module comes with Auth inside it).

The result, of course, will be the same, because we are merging the schema into a single one, but the codebase will be much better organized, and each module will have its own logic.

Reusability of backend modules

Now that we understand the power of feature-based implementation, it's easier to grasp the idea behind code reusability. If we could implement the schema and the core of the Auth and User modules as "plug-and-play", we would later be able to import them into other projects with very minor changes (using configuration, dependency injection, or module composition).

How could we reuse complete modules that hold part of a schema? For example, let's take a User type. Most User type schemas will contain id and username fields. The Mutation type will have login, and the Query will have a user field to query for a specific user. We can reuse this type declaration. The actual implementation might differ between apps, according to the authentication provider, database and so on, but we can still implement the business logic in a simple resolver, use dependency injection, and ask the app that's using the module to provide the actual authentication function (of course, with a complete TypeScript interface so we'll know that we need to provide it ;) ).

Let's take it one step further. If we would like to add a profile picture to a user, we can add a new module named UserProfile and re-declare the User and Mutation types again:

```graphql
type User {
  profilePicture: String
}

type Mutation {
  uploadProfilePicture(image: File!): User
}
```

This way, GraphQL Modules will merge the fields from this User type into the complete User type, and this module will only extend the User type and Mutation type with the required actions.

So let's say that we have the schema — how can we make this module generic and reuse it?
This is how you declare this module:

```ts
import { GraphQLModule, ModuleContext } from '@graphql-modules/core';
import gql from 'graphql-tag';
import { Readable } from 'stream';
import { UserModule, User } from '../user';
import { Users } from '../user/users.provider';

export interface IUserProfileModuleConfig {
  profilePictureField?: string;
  uploadProfilePicture: (stream: Readable) => Promise<string>;
}

export const UserProfileModule = new GraphQLModule<IUserProfileModuleConfig>({
  imports: [UserModule],
  typeDefs: gql`
    type User {
      profilePicture: String
    }

    type Mutation {
      uploadProfilePicture(image: File!): User
    }
  `,
  resolvers: (config) => ({
    User: {
      profilePicture: (user: User, args: never, context: ModuleContext) => {
        const fieldName = config.profilePictureField || 'profilePic';
        return user[fieldName] || null;
      },
    },
    Mutation: {
      uploadProfilePicture: async (
        root: never,
        { image }: { image: any },
        { injector, currentUser }: ModuleContext
      ) => {
        // Get the file stream from the uploaded image
        const { stream } = await image;
        // Get the external method for uploading files, provided by the app as config
        const imageUrl = await config.uploadProfilePicture(stream);
        // Get the field name
        const fieldName = config.profilePictureField || 'profilePic';
        // Ask the injector for the "Users" token - we are assuming that the user
        // module exposes it for us - then update the user with the uploaded url.
        injector.get(Users).updateUser(currentUser, { [fieldName]: imageUrl });
        // Return the current user; we can assume that currentUser will be in the
        // context because of resolvers composition - we will explain it later.
        return currentUser;
      },
    },
  }),
});
```

We declare a config object, and the app will provide it for us, so we can later replace it with different upload logic.

Scaling the codebase

Now that we've broken our app into individual modules, we can scale each module individually as our codebase grows. What do I mean by scaling a codebase? Let's say we start to have code parts we want to share between different modules.
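The config-driven pattern can be sketched with plain functions rather than the real GraphQLModule API; the names here are invented for illustration, but the shape is the same: the app injects the storage-specific details, and the module supplies the generic resolver logic.

```javascript
// Sketch of a config-driven module factory (not the real GraphQLModule API):
// the consuming app decides which field holds the picture URL, and the
// module's resolver logic stays completely generic.
function createUserProfileResolvers(config) {
  const fieldName = config.profilePictureField || 'profilePic';
  return {
    User: {
      profilePicture: (user) => user[fieldName] || null,
    },
  };
}

// App A stores the picture under "avatarUrl"
const appA = createUserProfileResolvers({ profilePictureField: 'avatarUrl' });

// App B just uses the default field name
const appB = createUserProfileResolvers({});

appA.User.profilePicture({ avatarUrl: 'a.png' });  // 'a.png'
appB.User.profilePicture({ profilePic: 'b.png' }); // 'b.png'
```

The same module code serves both apps; only the config differs, which is exactly what makes the module reusable.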
The current way of doing this in the existing GraphQL world is through the GraphQL context. This approach has proven itself to work, but at some point it becomes a big hassle to maintain, because the GraphQL context is an object that any part of the app can modify, edit and extend, and it can become really big pretty quickly.

GraphQL Modules lets each module extend and inject fields into the `context` object, but this is something you should use with caution: I recommend that the `context` contain actual contextual data — such as global configuration, environment, the current user and so on. GraphQL Modules adds only one field to the context, called `injector`, which is the bridge that lets you access your `GraphQLApp` and the application `Injector`, and lets you fetch your module's config and providers.

Modules can be a simple directory in a project or in a monorepo, or a published npm module — you have the power to choose how to manage your codebase according to your needs and preferences.

Dependency Injection

GraphQL Modules' dependency injection is inspired by .NET and Java's dependency injection, which has proven itself to work pretty well over the years. That being said, there were some issues with .NET and Java's APIs, which we've tried to list and work through. We ran into some pretty interesting conclusions.

We've learned that it's not something that should be forced. Dependency injection makes sense in some specific use cases, and you should use it only when it's necessary and when it helps you move faster. So as we scale up, this concept should come more and more in handy — we can simplify things, maintain our code with ease and manage our teams' contributions!

Having GraphQL Modules deployed across all of our enterprise customers while also being used in our smaller applications led us to believe that we've found the optimal point of when you should use the concept of dependency injection, and when not.
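The injector idea can be illustrated with a minimal sketch. This is far simpler than the real implementation (no scopes, no decorators), and the tokens and providers are invented, but it shows the core mechanics: providers are registered under a token, constructed lazily, and cached.

```javascript
// Minimal injector sketch: factories receive the injector itself so they
// can resolve their own dependencies, and each instance is cached so every
// consumer gets the same one.
class MiniInjector {
  constructor() {
    this.factories = new Map();
    this.instances = new Map();
  }
  provide(token, factory) {
    this.factories.set(token, factory);
  }
  get(token) {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No provider for ${String(token)}`);
      this.instances.set(token, factory(this));
    }
    return this.instances.get(token);
  }
}

const injector = new MiniInjector();
injector.provide('Config', () => ({ uploadDir: '/tmp/uploads' }));
injector.provide('Users', (inj) => ({
  config: inj.get('Config'),
  updateUser: (user, patch) => Object.assign(user, patch),
}));

// A resolver would then do something like:
//   injector.get('Users').updateUser(currentUser, { profilePic: url });
```

Because resolvers only know the token, a test can register a mock provider under the same token and the resolver never notices the difference.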
We’ve also came with the optimal API for dependency injection. It’s extremely easy to understand, and use. After a long research of the existing dependency injection solutions for JavaScript, we’ve decided to implement a simple Injector, that supports the needs of GraphQL-Modules ecosystem, and support circular dependencies and more. We’ve simplified the Dependency Injection API and exposed to you only the important parts, that we believe that are necessary for a GraphQL server development. Authentication Check out the related blog post we wrote about it: /blog/graphql-modules-auth Testing and mocking On our Enterprise applications, when we started using dependency injection, we no longer had to manage instances and bridge them together. We gained an abstraction that allowed us to test things easier and mock all http requests. Yes, mocking. DI really shines here. Thanks to mocking we can simulate many scenarios and check the backend against them. And when your codebase grows, you need to start thinking about managing dependencies between modules and how to avoid things like circular dependencies — unless you use DI which solves that problem for you. With the power of dependency injection, you can easily create a loose connection between modules, and base this connection on a token and on a TypeScript interface. It also means that testing is much easier — you can take your class/function and test it as an independent unit, and mock its dependencies easily. Summary We see GraphQL Modules as the framework that finally being built from the ground up on the new and exciting capabilities of GraphQL and Apollo, while combining it in the right way with good old software best practices for scale like modularizations, strong typings and dependency injection. Now go and try it out
https://the-guild.dev/blog/graphql-modules
Arduino Forum :: Members :: Matchlighter :: Show Posts

1. Development / Other Software Development / Re: Library "... does not define a type" on: November 28, 2013, 02:15:49 pm

I've done circular deps before and been just fine, but removing them fixes this... C++...

2. Development / Other Software Development / Re: Library "... does not define a type" on: November 28, 2013, 03:20:09 am

Thanks for the reply, but I don't think that's it; the errors aren't the same, and, as I stated, line 59 throws no errors, even with 58 commented. Why would Time work and Date not?

3. Development / Other Software Development / Library "... does not define a type" on: November 27, 2013, 05:59:41 pm

I've been working on my own time and date library recently. I am just about finished, but I've hit a snag that I just can't figure out. I'm getting a "... does not define a type" and I, obviously, have no idea why. Here's a link to the project GitHub, to keep the post small:

The error is occurring on line 58 of DateTime.h, saying that "'Date' does not define a type". Line 59 does not have a problem, even though Time.h and Date.h are the same syntactically. What am I doing wrong?

Thanks!
ML

4. Using Arduino / Networking, Protocols, and Devices / Re: WiFly Solid Green Block on: June 29, 2013, 04:27:47 pm

Yup, forgot to do that, sorry. See OP.

5. Using Arduino / Networking, Protocols, and Devices / WiFly Solid Green Block on: June 29, 2013, 02:07:59 pm

I am having an issue with my project involving my WiFly. After a little while, the green light stops blinking and just stays on (which apparently means that it has an active TCP connection). When this happens, I have to reset the Arduino in order to perform any more communications with the WiFly.
I discovered an old post ( ), but restarting the browser doesn't help (as it did in his/her case), and he/she doesn't see fit to share the final solution.

I did find one thing that is interesting: when I open DD-WRT on my router, I find that the WiFly has an active UDP connection to 255.255.255.255 on port 55555 that has a status of 'UNREPLIED' and a renewing timeout. EDIT: However, this appears to be open even when things are working.

(Hopefully) needless to say, I have Googled the problem, but haven't found anything. Has anybody encountered this before and found the solution?

EDIT: Here's the code. I've cleaned it up a little for the post, but all the WiFly stuff is present.

Code:

```cpp
#include <Roomba.h>
#include <SPI.h>
#include "WiFly.h"
#include "TwitterWiFly.h"
#include "Credentials.h"

Twitter twitter(twitter_oAuth);
Server server(80);

#define MinutesBetweenConnectionTries 5 // Time to wait between connection re-attempts, in minutes

void setup() {
  WiFly.begin(); // Initialize the WiFly
  while (!TryWiFly()) {
    delay(MinutesBetweenConnectionTries * 60000); // If the connection failed, wait the defined minutes and try again
  }
  randomSeed(analogRead(0)); // Make things really random
  server.begin();
  pinMode(8, OUTPUT);
}

uint8_t buf[52];

void loop() {
  bool ret = roomba.getSensors(3, buf, 10);
  //roomba.pollSensors(buf, 52);
  if (ret) { // ret is true when the checksum is correct
    //23,24/25,26
    battery_Current_mAh = buf[7] + 256 * buf[6];
    battery_Total_mAh = buf[9] + 256 * buf[8] | 0b00000001;
    //if (battery_Total_mAh == 0) {
    //  battery_Total_mAh = 1;
    //}; // Don't want to try and divide by zero.
    battery_percent = battery_Current_mAh * 100 / battery_Total_mAh;
  }
  CheckWebServerClients(); // Check for clients to the Webserver
  delay(10); // No need to go faster. Roomba only checks its sensors every 15ms. Going faster will only slow the Roomba down.
}

void CheckWebServerClients() { // Mostly sample code here
  Client client = server.available();
  if (client) {
    // an http request ends with a blank line
    boolean current_line_is_blank = true;
    String requestString = String("");
    while (client.connected()) {
      if (client.available()) {
        char c = client.read();
        if (requestString.length() < 30) { // read char by char HTTP request
          requestString.concat(c);
        } // store characters to string
        // if we've gotten to the end of the line (received a newline
        // character) and the line is blank, the http request has ended,
        // so we can send a reply
        if (c == '\n' && current_line_is_blank) {
          if (requestString.indexOf("/SeekDock") > 0) {
            roomba.coverAndDock();
          }
          if (requestString.indexOf("/BeginClean") > 0) {
            roomba.cover();
          }
          // send a standard http response header
          client.println("HTTP/1.1 200 OK");
          client.println("Content-Type: text/html");
          client.println();
          client.print("<a style=\"color:rgb(");
          long PercentToByte = map(battery_percent, 0, 100, 0, 255); // Fancy bit of code to fade the text color like the Roomba's power LED
          client.print(255 - PercentToByte);
          client.print(",");
          client.print(PercentToByte);
          client.print(",0)\">");
          client.print("Battery mAh is ");
          client.print(battery_Current_mAh);
          client.print(" of ");
          client.print(battery_Total_mAh);
          client.print(" (");
          client.print(battery_percent);
          client.print("%)");
          //client.println("<br />");
          client.print("</a>");
          client.println("<br />");
          client.print("Charging State: ");
          client.print(ChargeTypes[chargingState]);
          client.println("<br />");
          client.println("<a href=\"/BeginClean\"> Clean </a>");
          client.println("<a href=\"/SeekDock\"> Dock </a>");
          break;
        }
        if (c == '\n') {
          // we're starting a new line
          current_line_is_blank = true;
        } else if (c != '\r') {
          // we've gotten a character on the current line
          current_line_is_blank = false;
        }
      }
    }
    // give the web browser time to receive the data
    delay(100);
    client.stop();
  }
}
```

6. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: November
01, 2012, 05:00:32 am

Quote from: fungus on October 15, 2012, 01:46:13
OK, I've requested a couple today... let's see what happens.

Quote from: JoeN on October 15, 2012, 11:09:19 am
Yep, controlling it is quite tricky. The TLC library is quite well done though and takes all the work out of it. I've done it with just three pins on an ATtiny85:

Did you receive the samples? How'd it go?

7. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 15, 2012, 12:29:05

as well because it seems to me that you can't load dot correction data or change the source for dot correction data with these just tied to high and low respectively like the example in Arduino Playground. Arduino Playground says tie DCPRG to high, which means take dot correction information from the DC register, but tie VPRG low, which means you cannot program the DC register. But the DC register is not guaranteed to be set to any particular value at startup ("The values in the input shift register, DC register and GS register are unknown just after power on."). How that even works I have no idea. The MAX7219 is a breeze to use compared to the TLC5940, but no PWM, which makes me sad. This guy controls the extra two lines in his example and uses it to explicitly set GS data:

If I remember correctly, the DC register/Dot Correction (DC?) is a different method of limiting current, so I should make sure to set that before using the chip? I suppose I will do as you suggested with more samples. I'll stress test one at 5V for a little longer this time as well - I never had problems with 5V until after the chip burned out, but maybe I didn't go long enough. Be nice if there was a MAX7219 for PWM. I have never used it, but I don't know how anything can be as bad of an experience as the TLC5940...
8. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 14, 2012, 03:38:32 pm

Quote from: James C4S on October 14, 2012, 12:38:33 pm
Another possibility is that the chips are getting ESD damage.

From where? They work fine for a minute. And then spontaneously (or so it seems) stop working. I don't touch the chip or move the bar at all from the time I plug it in to the time that it stops working. And, I could be wrong, but it seems unlikely that a "3rd party" source could knock out two chips.

9. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 13, 2012, 05:55:36 pm

Quote from: James C4S on October 13, 2012, 05:14:59 pm
Where did you buy/get the chips from?

Sampled them from TI.

10. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 12, 2012, 06:16:17 pm

Quote from: James C4S on October 12, 2012, 05:00:38 pm
Assuming a (low) 3V forward voltage for the LEDs, that leaves a 3V drop on the output of the 5940. Iref is setting the output to 20mA, so that means the chip is dissipating 3V * 20mA = 60mW per channel. Even with 10 channels you're only at 600mW worst case, which is well below the 3W the DIP package is rated to handle at 25C. Overall your use of the chip should not result in them failing. I have never had a TLC5940 fail when used properly. So let's look for some secondary things that might be causing unexpected problems:
1. What is supplying the 5V to the TLC5940, and are you using decoupling capacitors?
2. What is the supply used for the LEDs and what is its current rating? Is it a switching supply? Does it have a known minimum load requirement? Have you measured it while the LEDs are operating?
3. Have you checked with an ohm meter to ensure you picked your resistors for Iref correctly?

1. I'm using a 5V regulator and decoupling capacitors. (I think I remember the regulator getting warm, but for 12 > 5, that shouldn't be too unusual?)
2. It's a 12V switching mode PSU.
Rated for up to 600mA. Got it from
3. The two resistors total 2k, if that is what you mean. (If not, please explain.)

Thanks for your time in helping me!

11. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 12, 2012, 04:09:35 pm

Quote from: James C4S on October 12, 2012, 07:37:35 am

Quote from: Matchlighter on October 11, 2012, 10:34:19 am
I am getting confusing reports here. Some say that I am within tolerance, and some say I am far out of it.

That is the nature of a free support forum. You sometimes get people answering questions with only bits of understanding. Most people do not understand how the TLC5940 works and give (sometimes very) bad advice.

Quote from: Matchlighter on October 11, 2012, 07:16:24 pm
Should I just use a 100R resistor between the TLC and the LEDs?

Absolutely not. That's what the Iref pin does; it sets the current limit. The TLC5940 is a constant current source. It will act like a resistor to limit or allow current to flow, based on the Iref pin. There is a formula that is used to determine the current based on an internal voltage reference and the resistor being used. The 5940 does NOT provide a voltage on its output pins. So your voltage supply must be large enough to drop the forward voltage of all of the LEDs connected to a channel AND leave some small voltage drop for the output of the 5940. Otherwise, it will not be able to limit current correctly.

I am not sure I understand how you are wiring up your LEDs. Could you please provide a diagram of how your LEDs are connected to the TLC5940 and what resistor value you are using for Iref?

Here.
12. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 11, 2012, 07:16:24 pm

Quote from: funkyguy4000 on October 11, 2012, 03:18:22 pm
You do not need to get the transistors. If you are driving the light bars at 12V at 20mA, you should be fine as long as the light bars are made up of 4 or 5 LEDs; your voltage drop should be enough for the TLC5940 to handle.

So basically you are saying that 2V per channel is too much? That is not enough to drive a 4th LED. Should I just use a 100R resistor between the TLC and the LEDs?

13. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 11, 2012, 10:34:19 am

Quote from: Grumpy_Mike on October 11, 2012, 07:52:51 am
If the chip is not getting hot, then what about the power supply you are driving it from? It could be that this is shutting down. Make sure the ground of this supply is connected to the ground of the TLC chip, and that that is connected to the Arduino ground.

I think I can rule that out. I have taken the chips and put them back in a breadboard design (that worked before) and they exhibit the same behavior. I am getting confusing reports here. Some say that I am within tolerance, and some say I am far out of it. What is the case here? Should I just invest in some PNP transistors and resistors to drive the LEDs?

14. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: October 11, 2012, 03:02:41 am

Again, sorry for the belated post. *adds direct bookmark*

Quote from: JoeN on July 24, 2012, 03:50:39 pm

Quote from: James C4S on July 23, 2012, 05:45:19 pm

Quote from: Matchlighter on July 20, 2012, 11:02:40 pm
I am only running the LED clusters at 12V @ 20mA.

How are you setting the current, and what voltage are the LED clusters dropping off that 12V?

It's a TLC5940, you set the current via a resistor on the TLC5940 itself and hook the cathodes of the displays to the TLC5940.
If you are using a matrix, you have to control the top side of the matrix as well using PNP/P-channel transistors or a MIC5891 or whatever. If you are just controlling 16 individual LEDs, you just connect the anode to +V.

"I am only running the LED clusters at 12V @ 20mA"

The resistor sets channel maximum current, not "cluster" current. There are 16 channels. 16 channels @ 20mA @ 100% PWM = 320mA. Free air maximum output current for the package is 130mA, supposedly. I am not good at reading these parts of the spec sheets; can someone take a second look at this? But you say it is not getting hot. Mine were definitely getting hot while I abused them.

By 'cluster' I mean 3 LEDs in sequence. So +12V -> LED -> LED -> LED -> TLC5940. This is essentially the same as one LED, right?

Quote from: James C4S on August 15, 2012, 04:34:05 pm

Quote from: funkyguy4000 on August 15, 2012, 12:38:56 am
Yes, quite enough. The absolute maximum settings is 6V.

There is a table that says "Absolute Maximum Settings". That's for VCC and not for the chip's outputs. The outputs are rated to 18V.

Ok, that's what I thought. This is strange. Is it possible I just got a bad sample?

15. Using Arduino / LEDs and Multiplexing / Re: TLC5940 Dropped Output on: August 14, 2012, 11:39:27 pm

Sorry I haven't followed my own thread here. I expected to get email updates... Anyway, each LED is 3.2-3.6V, so together that is 9.6-10.8V. Is that enough for the DIP version of the chip to be overheating? Also, if I remember correctly, I don't remember the chip being hot. Nor do I remember running more than one cluster at a time when the chips stopped working. However, I am probably wrong. I guess it's time to break out the transistors and external resistors again...

Ethan

P.S. Thanks for the link. I will have to look at it more in depth when I get some time.
http://forum.arduino.cc/index.php?action=profile;u=16099;sa=showPosts
Process for contact form inquiry case management

scratchmybrain, Apr 21, 2012 7:44 AM

Gday BC users

I have a client who is keen to use BC email the way BC intends it to be used. However, I have not got a good explanation of how this is done. For example, when a customer submits a contact form enquiry, my client receives a workflow notification, then opens the secure link which takes him directly into the case. He then uses the BC reply from within that case area. Then the original customer receives the replied email with a chance to 'reply above this line' and start a thread of emails.

After the customer does reply, the subsequent emails only occur within the email clients, and not BC. Is this to be expected? I thought all emails would be kept within the database, and a thread would take place that is on record within BC.

Can anyone point me to a page that explains the process that the BC system intends the client to perform?

cheers
Darryl

1. Re: Process for contact form inquiry case management
Liam Dilley, Apr 21, 2012 5:19 PM (in response to scratchmybrain)
1 person found this helpful

Hi there Darryl,

The emails are now replies; the same thing happens in systems like Basecamp. If you go to the case you should see the messages there.

One thing to note: do not treat things as "the BC system intends the client to perform". No system should force a client into a way of running. Yes, a system works as it works, and you as a business help your client's business, not just their site, to mould and improve into their new setup, but the system is not like "DO IT THIS WAY". You've got a number of ways you can run it, and it is all dependent on the client's needs and how they prefer to do things. You can not even reply in Business Catalyst if you like, and your client can just run from the workflow email without logging into BC if they choose to do it that way. It is up to them and yourself how you form their workflow with the website in the way that suits them best.

2.
Re: Process for contact form inquiry case management
scratchmybrain, Apr 21, 2012 6:39 PM (in response to Liam Dilley)

Gday Liam, thanks for your help with this, and my prior post in the old forum system where you also helped me. Those forums of course became archived and I couldn't keep that one going.

I do love that BC allows many ways to achieve things, not one strictly enforced way. My problem is that I can't find documentation on how to do it any way. When I reply from my workflow messages, the emails stay within the realm of email client systems.

I have a client who would now prefer to use the method that is like Basecamp (I use Basecamp). However, I can't work it out. For example, when I reply to the customer's inquiry from within the Case area (via the 'New Message' button), and then the client replies in return (above the line), it doesn't show up in the Cases area.

If these steps are correct, and the message is actually meant to show up in the cases thread, I'm stumped, and perhaps this is rather a support issue?

cheers
Darryl

3. Re: Process for contact form inquiry case management
mario_gudelj, Apr 22, 2012 12:47 AM (in response to scratchmybrain)

We use our CST in support and this should certainly work. Please submit a detailed case with support and they will help you troubleshoot this.

Cheers,
Mario

4. Re: Process for contact form inquiry case management
scratchmybrain, Apr 24, 2012 1:21 AM (in response to mario_gudelj)

Gday Mario and Liam

I have read about the CST in the BC knowledgebase now, and I can see that it does need to be set up. I was mistakenly under the impression that the database-collected emails were set up by default, so I thought it mysterious that they weren't doing so. Thanks for that; I can see that it does need to be set up, so I'll jump into that. I'm keen to get it enabled.

cheers
Darryl

5. Re: Process for contact form inquiry case management
Bcvogue.
Jun 7, 2012 5:38 AM (in response to scratchmybrain)

Hi Darryl,

Can you share how you got that fixed? The customer is using Google Apps email. Thx.

Lalit

6. Re: Process for contact form inquiry case management
scratchmybrain, Jun 12, 2012 6:25 PM (in response to Bcvogue.)

Gday Lalit

I missed your post, not sure how, my apologies. I did not follow up with creating the CST; it seemed overkill for my client. So they have to 'live with' getting emails only within their email client, and not showing up in their cases.

If anyone can point out another way that is not wholly CST, I'd love to hear it. Liam's explanation mentioned it being like Basecamp (i.e. in Basecamp you CAN reply via just using email (rather than logging in to Basecamp), and the message is also added to the system, viewable in the system, archived... know what I mean), but I haven't been able to experience BC in that way. Would love to have that pointed out in more detail if anyone can.

cheers
Darryl

7. Re: Process for contact form inquiry case management
Liam Dilley, Jun 12, 2012 7:16 PM (in response to scratchmybrain)

What I meant was the system has a system URL email from Basecamp. The odd client we have replies to an email, writes things and expects us to receive it. We do not, because it goes to the system address used to send the email out and does not go to us at all or get stored in the system.
https://forums.adobe.com/thread/993234
CC-MAIN-2016-44
refinedweb
988
77.27
JAX-RS 2.0 - Client API

The Early Draft for JAX-RS 2.0 (JSR 339) has recently been submitted for publication. There are a number of interesting features in the new draft, and I'd like to start with the new Client API. The Client API is used to access Web resources. It provides a higher-level API than HttpURLConnection, and perhaps more importantly, integration with JAX-RS providers. Let us start by defining a simple ATM service using JAX-RS:

    @Path("/atm")
    public class AtmService {

        @GET
        @Path("/balance")
        @Produces("text/plain")
        public String balance(@QueryParam("card") String card,
                              @QueryParam("pin") String pin) { ... }

        @POST
        @Path("/withdrawal")
        @Consumes("text/plain")
        @Produces("application/json")
        public Money withdrawal(@QueryParam("card") String card,
                                @QueryParam("pin") String pin,
                                Double amount) { ... }
    }

This web resource class defines a simple ATM service with two subresources: one to retrieve an account's balance and one to withdraw money. Accessing these resources requires the client to provide a card number and a PIN. Naturally, the withdrawal subresource also requires the client to provide an amount. The actual implementation of the service isn't important for our purposes.

The entry point to the Client API is the Client class. Using this class we can create one or more Target instances. From each Target instance we can create one or more requests. The fluency of the API somewhat blurs all these steps, but it's important to understand them in order to write better code. The following code snippet can be used to check an account's balance:

    Client client = ClientFactory.newClient();
    String bal = client.target("")
                       .queryParam("card", "111122223333")
                       .queryParam("pin", "9876")
                       .request("text/plain")
                       .get(String.class);

In this example, the target is created from a URI and later extended by adding two query parameters. After that, a request is obtained and a resource's representation selected, namely "text/plain". Finally, the HTTP access method is specified (GET) as well as the Java type to be used for the response. Note that JAX-RS provides a built-in Message Body Reader (MBR) to convert a "text/plain" representation into a Java String, so no further configuration is required.

The following code shows how to interact with the withdrawal subresource:

    Money mon = client.target("")
                      .queryParam("card", "111122223333")
                      .queryParam("pin", "9876")
                      .request("application/json")
                      .post(text("50.0"), Money.class);

Note that the post() method has two parameters: the request entity and the Java type to be used for the response. The variant text() is a convenient way to specify an entity of type "text/plain" while at the same time providing a value for it, i.e. the string "50.0". The execution of this request will require the conversion of the Java string "50.0" into a "text/plain" representation, for which a built-in Message Body Writer (MBW) is already provided by JAX-RS, as well as an MBR to convert a representation of type "application/json" into an instance of the Money class. Given that the Money class is not known to the JAX-RS runtime, the application must provide an MBR for it. This can be done by accessing a configuration object. Most of the Client API classes are configurable; let us show how to register this provider using our Client instance.

    Client client = ClientFactory.newClient();
    client.configuration().register(MoneyMBR.class);

As we shall see in later blog entries, registration of providers is also possible on targets and requests, and perhaps even more interestingly, configurations can be inherited.
http://weblogs.java.net/blog/spericas/archive/2011/10/14/jax-rs-20-client-api
crawl-003
refinedweb
595
55.24
Americans at Risk of Identity Theft as They File their Tax Returns (betanews.com) 77 Ian Barker, writing for BetaNews:. Re: (Score:2) Re: (Score:3) Re: (Score:2) Re: (Score:2) Nah, I am an undocumented tax payer. Re:Thankful to the Donald we don't have to file (Score:4, Insightful) Not paying your taxes makes you smart!* * Only applies to billionaires. Attempts to apply this to someone in the middle class may result in jail time. Re: (Score:2) ask your employer to "hire" you as a "contractor," At least in Canada, CRA has some pretty strict definitions of what a contractor is. If you're basically going back to full time employment for one employer, you are not a contractor and CRA will catch that if you're ever audited./p Re: (Score:1) So what are we to do? (Score:3) "And another 38 percent either store tax documents on their computer's hard drive or in the cloud, approaches that are susceptible to a variety of hacks." - really? So, other than a local storage ("hard drive") or remote storage ("cloud"), what other approaches are there to storing documents that are not susceptible to any hacks? Paper printouts? :) This article is brought to you by association of paper manufacturers. Encrypt your hard drive, choose good passwords for your cloud storage and don't share them with others. Your data is at most risk at your doctors office, btw (where they have all of your personal information, along with SSN and family records). I had 2 notifications of personal information theft from doctor's billing processor's offices in the last 2 years. Re: (Score:2) Re: (Score:2) stored on my primary computer's hard drive. In order to steal them, someone has to come to my house and steal my computer. All someone has to do is gain remote access to your computer and they have access to your documents. Could happen through a Javascript exploit. Yikes! No kidding. 
You'd think someone posting on a site like slashdot would be more familiar with technology than to assume that the only way to steal documents on a hard drive is to physically steal the computer... scare mongering getting old (Score:4, Insightful) All these individual security tactics are NOT where the problem lies. You can encrypt your drives, use TFA, and shred all the paper. But thieves steal the enitire DB at Intuit or irs.gov. American attitudes are properly aligned. We don't control the databases where most theft occurs. Re:scare mongering getting old (Score:5, Insightful) All these individual security tactics are NOT where the problem lies. ... But thieves steal the enitire DB at Intuit or irs.gov. You are correct that "individual security" is not the problem, but DBs are not the problem either. The real problem is the idiotic notion that SSNs can be both widely known and secret. I am required to provide my SSN to my employer, my bank, my doctor, my state government, etc. Yet mere knowledge of that number is supposed to authenticate my identity? That makes no sense. Re: (Score:2) That's one of my peeves. The SSN is fine as an ID. It is NOT fine as a password, and those people who treat it as such are idiots. Re: (Score:2) The SSN is fine as an ID.. Re: (Score:2). Citation needed. The SSA does not re-issue numbers. So far, it has issued 450 million out of about 1 billion numbers, but it hasn't issued any duplicates (although some people have been issued more than one). There were some news reports a while ago about a company that did analysis on databases they had access to and found that some numbers were associated with more than one name, but those were just examples of identity theft or clerical errors. Of course, the media immediately trumpeted "ZOMG other p Re: (Score:2) Citation needed. Citation [wikipedia.org] Citation [computerworld.com] Citation [nbcnews.com] Re: (Score:2). 
Re: (Score:2) NONE of those show that the SSA reissues SSNs Nobody said they did. I said they were NOT UNIQUE. They aren't. Re: (Score:2) Not sure what you want me to look at on the Wikipedia article was it this? However, there have been instances where multiple individuals have been inadvertently assigned the same Social Security number The reference actually only mentions a single instance of this happening, not multiple as the Wikipedia article says. Yes, that was a case of two people being assigned the same number. However, they also had the same name and same birthday, so your assertion that the federal government uses a combination of S Re:scare mongering getting old (Score:5, Informative) Another problem are credit companies who treat identity theft with a shrug and a "that's your problem." Someone obtained my name, SSN, DOB, and address. How, I'll never know. They opened a Capital One credit card in my name. The fact that the mother's maiden name was wrong on the form wasn't a red flag. Neither was the immediate address change to another state. Nor was "my wife" calling to request a $5,000 cash advance before the card was activated. When the card arrived at my house (a lucky quirk of them paying for rush delivery and THEN changing the address), I called CapitalOne. First, they insisted that it couldn't have been fraud, asking if my wife opened it without my knowledge. (She was next to me, freaking out about the situation. That'd be a no.) Then, they admitted that it might be fraud, closed the card out, but refused to give me more information. They literally told me "If we give you the address on the account and you go there and shoot them, we'd be liable." Apparently, they didn't think anything about liability if they opened an account under my name, ignoring a lot of red flags. They even stone-walled the police - telling them to call one phone number that was "manned" by an answering machine whose messages were never returned. 
Eventually, I gave up on trying to push the investigation forward and just froze my credit. For all I know, the thieves who stole my identity are still out there racking up debt on other people's credit. Re: (Score:2) I froze my credit records at the three big credit agencies a few years back. Just for anyone's info, you go to their sites and route around until you find out which stupid pet tricks they make you perform to do it. If I recall, two were relatively easy, one was a royal pain in the tookus to find out how. Each charges between $10-$15...back then, dunno what it is now. If you need credit, you can get them unlocked for a period of time before the lock goes back on. I think it varies between 30-45 days. And of c Re: (Score:2) Re: (Score:2) We froze both of our credit files after the identity theft. It's useful when stores try to pressure you to "save 5% now if you just sign up for our card." Nope. No can do. My credit's frozen due to identity theft. That shuts them up real quick. On the down side, though, we gave up on refinancing our mortgage a couple of years ago even though we could have saved money. It was too much of a headache to thaw our credit, get the mortgage quotes, and try to get everything signed before the freeze took effect aga Re:scare mongering getting old (Score:5, Interesting) Reality is "Identity Theft" is a purposeful lie produced by public relations and marketing agencies to push the burden of the crime from the banks to individuals. It is a lie. The reality is the fraud is not against the individual the fraud is against those who accept that false identity. Why the shift, so you the ignorant mug punters get stuck with the loss and the banks wander off laughing. The truth is, when you get hit by a false claim, you are entitled to seek the prosecution of those who attempted to make that false claim. 
By any reasoned justice those who made the false claim against you must now prove they were defrauded by another party, else be charged with fraud themselves. It should never ever be up to you to prove anything, you should just be able to forward a complaint of false fiscal claims against you as fraud to the authorities and let them deal with it. Of course the banks would end up with the bill, hence the scam of identity theft, where you the innocent party and now liable for the corrupt stupidity of the banks until you can prove your innocence, can you not see the criminal corruption in that. Re: (Score:2) Put the liability for improper identity verification on the financial institutions and watch the problem get fixed real fast. I know it's not done because of cost, but identity verification really should be done in-person. You verify their government issued documents, maybe confirm some biometrics and if someone is trying to commit fraud you have them right there for the police to apprehend. Re: (Score:2) Re: (Score:2)... [irs.gov] by signing up at the above (well in advance of the return due date, it's likely too late to ask for one for your 2016 return), it essentially functions as a password for your return I suspect (Score:3) they can't hack the paper forms I mail in. Re:I suspect (Score:4, Insightful) they can't hack the paper forms I mail in. They can as soon as the forms are scanned, and your info is inserted into the same DB as everyone else. Re: (Score:2) they can't hack the paper forms I mail in. Right. Because no one with access to your mail would ever possibly read it. Mathtime (Score:2) Re: (Score:2) 787000/330000000 = less than a 1:500 chance. Most households only file one return. Last year there were about 140M returns filed. So the chance is actually about 1:150 ... and those are only the confirmed cases. Re: (Score:2) IRS PIN (Score:2) Paper tax return (Score:2) Not just because it's much less likely to be hacked. 
I just want the IRS to feel some pain trying to read my chicken-scratch handwriting to make up for what I feel when handing them my money. IRS motto: We've got what it takes to take what you've got. Re: (Score:2) So you make it harder for yourself... in order to put more burden on an agency that's funded by YOUR tax dollars. WTF kind of immature logic is that? Re: (Score:2) So you make it harder for yourself It's not really harder to fill out the forms by hand. And its an issue of the vulnerability of electronic filing that I am concerned with. Somebody has to key in the figures, so it might as well be done by the IRS rather than me. What makes life easier for them also makes it easier for the scammers. We have to stop thinking of ourselves as being subservient to our bureaucratic overlords. Re: (Score:2) Not an issue for Greeks and major companies. True, I think the ancient Greeks will not be affected by this. However, lot's of Seagate employees (and employees of other major companies) will disagree with you. shared knowledge (Score:2) Re: (Score:1) Re: (Score:2) Many people in the UK don't do a tax return and their tax is exactly correct at the end of the year. This is accomplished in several ways: 1. Just like the USA, employers and other entities send data to HMRC. 2. Many allowances are limited to basic rate tax, so the amount of the allowance doesn't change based on income. 3. Interest and dividends are taxed at source. and probably the most significant difference: 4. Employers calculate tax to be deducted on a rolling basis (taking account of prior income and t Re: (Score:2) Then congress should quit trying to do their social engineering through the tax code and remove all those deductions, the only thing I can see there that would require any input from the taxpayer is "received money from a friend/relative" and I'll guarantee 99% of such transactions go unreported anyway. 
If congress wants to encourage having children, or home ownership, or having solar panels, or being a blind railroad worker, let them make a direct appropriation and send checks to the people who they decid old guy here (Score:2) I still file mine hardcopy in the mail since the 20th century. So no worries of internet hacking. Actually one concern is throughout the years there have been staff cuts at IRS, and probably more soon. A friend who has a accounting/taxes business says Fresno office used to have a couple auditors that were good to work with (yes, not all tax audits are perilous, occasionally they want to review certain returns). So maybe filing hardcopy might soon be a thing of the past as less competent people to deal with Fraudulent 1040 filed for me last year (Score:3) Re: (Score:2) I already filed, (Score:2) but if I hadn't, they'd have been welcome to pay what I owed. Re: (Score:2) but if I hadn't, they'd have been welcome to pay what I owed. I'm pretty sure the criminals filing false returns are also using false data so they can get a false refund. W2 Spearphishing (Score:2) I'm seeing a lot of W2 spearphishing. Note the return domain CORNPANY.com not COMPANY.com. just do the prez... (Score:2) Seriously? (Score:2) Another reason to scrap the income tax (Score:3) Everyone understands that taxation creates a disincentive for particular behavior, which is precisely why tobacco is taxed at such ridiculous levels. Why the hell do we tolerate a tax system which creates a disincentive for working and producing things? Eliminating the ridiculously complex, multi-thousand page income tax code also gets rid of the government's favorite and most convenient mechanism for handing out favors to wealthy special interests. It creates an incentive for businesses to invest in the U.S. & makes U.S. goods more competitive vs. imports
https://news.slashdot.org/story/17/02/22/209252/americans-at-risk-of-identity-theft-as-they-file-their-tax-returns
CC-MAIN-2017-26
refinedweb
2,446
71.24
Agenda

See also: IRC log

<>

CI: too many different parameters (window size, screen resolution, font-size...) that are not controllable by the content creator (1.4.8)

saz: I read it as having to insert a <br> after 80 characters

<shadi> For the visual presentation of blocks of text, a mechanism is available to achieve the following:
<shadi>.

RESOLUTION: group agrees with CarlosI that 'width no more than 80 characters' is not sensible

saz: e.g. in german (with very long words) magnification up to 200% will not wrap

<shadi> saz: there are ways to use em
... note that the SC requires 'a mechanism', which can also be a style sheet switcher

<shadi> Specifying line spacing of 1.5 in CSS (HTML, CSS, future link)

jk: 1.4.8 4th bullet: line spacing vs. line-height (CSS)

saz: 5th bullet: what is the base for 200%?

<shadi> Providing options within the content to switch between layouts that use a variety of font sizes (future link)

1.4.9

saz: it's a business issue

ci: it's a testability problem because there are no criteria for 'essential'
... no WCAG 2 docs shed light on this 'grey' area

<shadi>
<shadi>
<shadi> pure decoration
<shadi> serving only an aesthetic purpose, providing no information, and having no functionality
<shadi> Note: Text is only purely decorative if the words can be rearranged or substituted without changing their purpose.
<shadi> Example: The cover page of a dictionary has random words in very light text in the background.
<shadi>

jk: the more difficult thing is visual presentation of text being 'essential'

ci: e.g. corporate design guidelines requiring a certain font for headlines

RESOLUTION: group agrees with CarlosI's comment on 1.4.9

saz: ci, anything really pending things?

ci: the most interesting things for our group may be the last two comments

<shadi>
<shadi>
<shadi> <h3><a id="example" name="example">
<shadi> <h3 id="example">

RESOLUTION: in ERT documents don't use div elements that don't have functions
... keep a elements with name and id attribute for link targets

<shadi> proposed namespace:
<shadi> proposed namespace:

proposed namespace: EARL namespace is

<shadi>

saz: benefit for using years: versioning

ACTION saz and ci to send comments about Content-in-RDF to mailing list

<scribe> ACTION: saz and ci to send comments about Content-in-RDF to mailing list [recorded in]
http://www.w3.org/2008/02/13-er-minutes
CC-MAIN-2014-49
refinedweb
395
50.67
> Hello, everybody! I'm quite new around this place, and kinda newbie, so I'd appreciate if you are patient with me :). I'm working on a project where the only goal is to tap an object with your finger. Yep, that's it. It's gonna be for Android and there's gonna be a counter whenever you tap on it. That's already been done; the problem comes when I think about how to make the game save those numbers automatically when the user quits the app, and how to make the game auto-load them whenever he enters again. I've been searching through the forum, but I couldn't get anything clear. I've seen something about "Serialization" and "OnApplicationQuit" but, honestly, I don't know how to make it work or where to attach them to. I'm gonna paste my code here:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class Click : MonoBehaviour {

        public UnityEngine.UI.Text goldPoints;
        public int gold = 0;
        public int goldClick = 1;
        public int counter = 0;

        void Update () {
            goldPoints.text = "gold: " + gold;
        }

        public void Clicked() {
            gold += goldClick; // gold = gold + goldClick;
            counter++;
            if (counter == 10) {
                gold = gold + 100;
            }
            if (counter == 11) {
                goldClick = goldClick + 1;
            }
        }
    }

I made a Canvas, so everything is inside of it. It's a 2D game, btw. Thank you!, N.

Answer by herDev · Mar 21, 2017 at 10:29 AM

Hi! You can use PlayerPrefs to save and load data. In your case, you can use SetInt to save the data at the appropriate time. For example:

    PlayerPrefs.SetInt("goldValue", gold);

GetInt can be called when the app is started, to retrieve the saved value:

    gold = PlayerPrefs.GetInt("goldValue");

Just one note: PlayerPrefs literally creates a text file on the device that users can navigate to through a file browser and modify. If you need the value to be secure, there are some posts in Unity Forums with some ideas.
https://answers.unity.com/questions/1329138/auto-save-and-auto-load-on-quit-app.html
CC-MAIN-2019-22
refinedweb
372
73.78
How To Use Memcached With Ruby on Rails on Ubuntu 12.04 LTS

Introduction

Memcached is a very fast in-memory object caching system that can make Rails run much faster with very few changes.

Prerequisites: This tutorial assumes you have already installed Ruby on Rails and Memcached. If not, the tutorials are linked below:

- How to Install Ruby on Rails on Ubuntu 12.04 LTS (Precise Pangolin) with RVM | DigitalOcean
- How to Install and Use Memcache on Ubuntu 12.04 | DigitalOcean

It also assumes that you have your Rails application up and running and ready to optimize using Memcached.

Install the Dalli Gem

The first thing we will have to do is install Mike Perham's Dalli gem:

    gem install dalli

If you use Bundler, then add gem 'dalli' to your Gemfile and run bundle install. This will be our super fast and feature-packed way of interacting with Memcached.

Configure Rails

The first step to configuring Rails to use memcached is to edit your config/environments/production.rb and add this line to tell Rails to use Dalli:

    config.cache_store = :dalli_store

Next, we will tell ActionController to perform caching. Add this line to the same file:

    config.action_controller.perform_caching = true

Now, restart your Rails application as you normally would.

Change Your Rails Application

To take advantage of the changes we've just made, the Rails application will need to be updated. There are two major ways to take advantage of the speedup memcached will give you.

Add Cache Control Headers

The easiest way to take advantage of memcached is to add a Cache-Control header to one of your actions. This will let Rack::Cache store the result of that action in memcached for you. If you had the following action in app/controllers/slow_controller.rb:

    def slow_action
      sleep 15
      # todo - print something here
    end

We can add the following line to tell Rack::Cache to store the result for five minutes:

    def slow_action
      expires_in 5.minutes
      sleep 15
      # todo - print something here
    end

Now, when you execute this action the second time, you'll see that it's significantly faster. Rails only has to execute it once every five minutes to update Rack::Cache. Please note that this will set the Cache-Control header to public. If you have certain actions that only one user should see, use expires_in 5.minutes, :public => false. You will also have to determine what the appropriate time is to cache your responses; this varies from application to application. If you would like to learn more about HTTP caching, check out Mark Nottingham's Caching Tutorial for Web Authors and Webmasters.

Store Objects in Memcached

If you have a very expensive operation or object that you must create each time, you can store and retrieve it in memcached. Let's say your action looks like this:

    def slow_action
      slow_object = create_slow_object
    end

We can store the result in memcached by changing the action like this:

    def slow_action
      slow_object = Rails.cache.fetch(:slow_object) do
        create_slow_object
      end
    end

Rails will ask memcached for the object with a key of 'slow_object'; if it doesn't find that object, it will execute the block given and write the object back into it.

Fragment Caching

Fragment caching is a Rails feature that lets you choose which parts of your application are the most dynamic and need to be optimized. You can easily cache any part of a view by surrounding it in a cache block:

    <% # app/views/managers/index.html.erb %>
    <% cache manager do %>
      Manager's Direct Reports:
      <%= render manager.employees %>
    <% end %>

    <% # app/views/employees/_employee.html.erb %>
    <% cache employee do %>
      Employee Name: <%= employee.name %>
      <%= render employee.incomplete_tasks %>
    <% end %>

    <% # app/views/tasks/_incomplete_tasks.html.erb %>
    <% cache task do %>
      Task: <%= task.title %>
      Due Date: <%= task.due_date %>
    <% end %>

The above technique is called Russian doll caching, alluding to the traditional Russian nesting dolls. Rails will then cache these fragments to memcached, and since we added the model into the cache statement, the cache object's key will change when the object changes. The problem this creates, though, is when a task gets updated:

    @todo.completed!
    @todo.save!

Since we are nesting cache objects inside of cache objects, Rails won't know to expire the cache fragments that rely on this model. This is where the ActiveRecord touch keyword comes in handy:

    class Employee < ActiveRecord::Base
      belongs_to :manager, touch: true
    end

    class Todo < ActiveRecord::Base
      belongs_to :employee, touch: true
    end

Now when a Todo model is updated, it will expire its cache fragments and notify the Employee model that it should update its fragments too. Then the Employee fragment will notify the Manager model, and after this, the cache expiration process is complete.

There is one additional problem that Russian doll caching creates for us. When deploying a new application, Rails doesn't know when to check that a view template has changed. If we update our task listing view partial:

    <% # app/views/tasks/_incomplete_tasks.html.erb %>
    <% cache task do %>
      Task: <%= task.title %>
      Due Date: <%= task.due_date %>
      <p><%= task.notes %></p>
    <% end %>

Rails won't expire the cache fragments that use the view partial. Before, you would have to add version numbers to your cache statements, but now there is a gem called cache_digests that automatically adds an MD5 hash of the template file to the cache key. If you update the partial and restart your application, the cache key will no longer match, since the MD5 of the view template file has changed, and Rails will render that template again. It also handles the dependencies between template files so, in the above example, it will expire all our cache objects up the dependency chain if _incomplete_tasks.html.erb is updated. This feature is automatically included in Rails version 4.0. To use this gem in your Rails 3 project, type the following command:

    gem install cache_digests

Or if you use Bundler, add this line to your Gemfile:

    gem 'cache_digests'

Advanced Rails and Memcached Setup

The Dalli Ruby gem is very powerful and takes care of spreading keys across a cluster of memcached servers, which distributes the load and increases your memcached capacity. If you have multiple web servers in your web tier, you can install memcached on each of those servers and add them all to your config/environments/production.rb:

    config.cache_store = :dalli_store, 'web1.example.com', 'web2.example.com', 'web3.example.com'

This will use consistent hashing to spread the keys across the available memcached servers.
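The "consistent hashing" that spreads keys across servers is worth seeing in miniature. The sketch below is not Dalli's actual algorithm (Dalli's ring has its own hashing and server weighting); it is a minimal, hypothetical Python illustration of the idea: each server is placed on a hash ring at several virtual points, each key goes to the first server clockwise from the key's hash, and so removing one server only remaps the keys that lived on it.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring (illustrative only, not Dalli's implementation)."""

    def __init__(self, servers, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (point, server) pairs
        for server in servers:
            self._add(server)

    def _hash(self, key):
        # Map an arbitrary string to a point on the ring.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def _add(self, server):
        # Place the server at several "virtual node" points for smoother balance.
        for i in range(self.vnodes):
            self.ring.append((self._hash("%s#%d" % (server, i)), server))
        self.ring.sort()

    def remove(self, server):
        # Dropping a server removes only its own points from the ring.
        self.ring = [(p, s) for (p, s) in self.ring if s != server]

    def server_for(self, key):
        # First server point clockwise from the key's hash, wrapping around.
        point = self._hash(key)
        points = [p for (p, _) in self.ring]
        i = bisect.bisect(points, point) % len(self.ring)
        return self.ring[i][1]
```

With three servers, dropping one leaves every key that hashed to the other two untouched, which is the property that keeps most of the cache warm when a node dies.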
https://www.digitalocean.com/community/tutorials/how-to-use-memcached-with-ruby-on-rails-on-ubuntu-12-04-lts
CC-MAIN-2017-22
refinedweb
1,080
62.98
pull a querystring value into xsl?

I am trying to do some xml searches and results paging purely using client-side code. To accomplish this, I would like to capture url querystring values to use them in the xsl logic. Searching around on the web, I uncovered a few examples of this being done. But they all used some sort of namespace that I don't understand. For example,

Code: <xsl:param

Code: <xsl:variable

Can anyone recommend an effective way to capture a querystring value in my xsl logic or to pass values into xsl from javascript (where I know I can grab querystring values). And any insight into these namespace functions? Thanks!
http://www.sitepoint.com/forums/showthread.php?619460-pull-a-querystring-value-into-xsl&p=4274070&viewfull=1
CC-MAIN-2016-26
refinedweb
137
69.41
Rename S3 assets after paperclip hashing

I just found out about paperclip hashing to keep your styles at a 'not easy to guess' URL, so something like this:

    products/1/original/image.jpg

becomes something like this:

    products/1/sadGFASgoaw4eaw40q2asfq23/image.jpg

This way it is harder for people to guess your styles, which might somehow be private. This is really simple to set up: all you need to do is set a hash_secret default option for paperclip and add :hash to the asset path in an initializer, like this:

    Paperclip::Attachment.default_options.update({
      :path => ":class/:attachment/:hash/:style.:extension",
      :hash_secret => ENV["RANDOM_SECRET"]
    })

And voila, now you have hard-to-guess URLs for your assets. But when you go and check the site, you'll notice that all the images point to non-existing files, so we need to copy/move/rename all the old files to the new S3 URL.

I spent some time finding out how to generate the :hash param in the URL so I could copy files from the old URL into the new one, and I found a method in paperclip's attachment model called hash_key. This method receives the style name and computes a digest interpolating your unique hash_secret and the style name. So now that we know how to generate the proper URL to copy to, we need to actually copy the files, so I came up with this small rake task that uses the Amazon AWS SDK to copy all files to the new location.

    namespace :aws do
      namespace :rename do
        task media: :environment do
          Model.all.each do |m|
            [:original, :medium, :thumb].each do |style|
              s3 = AWS::S3.new
              key = "model/#{m.id}/#{style}/#{m.attachment_file_name}"
              object = s3.buckets[:bucket_name].objects[key]
              next unless object.exists?

              hash = m.attachment.hash_key style
              copy_key = "m/#{m.id}/#{hash}/#{m.attachment_file_name}"
              puts "Copying to #{copy_key}"
              object.copy_to(copy_key, acl: :public_read)
            end
          end
        end
      end
    end

Please be careful with the key and copy_key variables I'm using in this snippet, since they're different in each project depending on your path configuration. Also, notice how I'm using acl: :public_read in the copy_to method, since those styles being copied are meant for public access; paperclip makes them public by default, but copying them with the AWS SDK doesn't, so make sure you want those assets either public or private and set the necessary permissions on them.

Note: You can also use the move_to method instead of copy_to, but I wouldn't recommend it, because something could go wrong and this way you'll have some sort of backup.
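The :hash interpolation behind hash_key is, in essence, a keyed digest: it digests a string built from the attachment's data together with your hash_secret, so anyone without the secret can't predict the path segment, while the same inputs always yield the same segment (which is why the rake task above can reproduce it). The Python sketch below mimics that idea; the exact string paperclip digests differs, and the function name and inputs here are hypothetical, chosen only to show why the URL is stable yet hard to guess.

```python
import hashlib
import hmac

def style_hash(secret, class_name, attachment_id, style):
    """Keyed digest of interpolated attachment data (illustrative only;
    paperclip's actual interpolation string differs).

    Stable for the same inputs, unguessable without the secret."""
    data = "%s/%s/%s" % (class_name, attachment_id, style)
    return hmac.new(secret.encode(), data.encode(), hashlib.sha1).hexdigest()
```

Because the digest is deterministic, regenerating the path for an existing attachment always lands on the same key, and changing either the style or the secret produces an unrelated value.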
http://blog.magmalabs.io/2015/11/25/rename-s3-assets-after-paperclip-hashing.html
This was really quite nice. It didn't have a Python version, but the clarity of the exposition makes the Python easy to write. A few months back, I had this mystery conversation. While this is not going to produce identical results to the code shown in the blog post, it seems to fit the requirements.

```python
from hashlib import sha256
import os

class Authentication:
    iterations = 1000

    def __init__(self, username, password):
        """Works with bytes. Not Unicode strings."""
        self.username = username
        self.salt = os.urandom(24)
        self.hash = self._iter_hash(self.iterations, self.salt, username, password)

    @staticmethod
    def _iter_hash(iterations, salt, username, password):
        seed = salt + b":" + username + b":" + password
        for i in range(iterations):
            seed = sha256(seed).digest()
        return seed

    def __eq__(self, other):
        return self.username == other.username and self.hash == other.hash

    def __hash__(self):
        return hash(self.hash)

    def __repr__(self):
        salt_x = "".join("{0:x}".format(b) for b in self.salt)
        hash_x = "".join("{0:x}".format(b) for b in self.hash)
        return "{username} {iterations:d}:{salt}:{hash}".format(
            username=self.username, iterations=self.iterations,
            salt=salt_x, hash=hash_x)

    def match(self, password):
        test = self._iter_hash(self.iterations, self.salt, self.username, password)
        return self.hash == test  # Constant Time is Best
```

It may be helpful to use __slots__ with this to reduce the storage and make the object less mutable. Perhaps I didn't google well enough to find a clear explanation that also included Python code samples.
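For comparison, the standard library now ships a purpose-built primitive for exactly this iterated-hash scheme, which also makes the "constant time" point explicit. A minimal sketch using hashlib.pbkdf2_hmac plus hmac.compare_digest (the class name and parameter choices here are illustrative, not from the post above):

```python
import os
import hmac
import hashlib

class SaltedPassword:
    """Stores a salted, iterated password hash; verifies in constant time."""
    iterations = 100_000  # illustrative work factor

    def __init__(self, password: bytes):
        self.salt = os.urandom(24)
        self.hash = hashlib.pbkdf2_hmac("sha256", password, self.salt, self.iterations)

    def match(self, password: bytes) -> bool:
        test = hashlib.pbkdf2_hmac("sha256", password, self.salt, self.iterations)
        # compare_digest avoids leaking where the first mismatching byte is
        return hmac.compare_digest(self.hash, test)

p = SaltedPassword(b"hunter2")
print(p.match(b"hunter2"))  # True
print(p.match(b"wrong"))    # False
```

The ad hoc sha256 loop above works, but pbkdf2_hmac delegates the iteration to a vetted C implementation and takes the iteration count as an explicit parameter.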
http://slott-softwarearchitect.blogspot.com/2013/12/secure-salted-password-hashing.html
really like these:

* Qt split in GUI and non-GUI parts
* Performance improvements
* Better integration with KDevelop
* Arthur (paint engine): perhaps a Cairo (FDO stuff, together with Keith's XServer) backend
* Scribe (text render engine) and ATK accessibility bridge: now we're on par with GTK on this stuff :)

> Scribe (text render engine) and ATK accessibility bridge: now we're on par with GTK on this stuff :)

Erm, Qt is already on par with Pango. The only thing missing for a while was proper Indic support, which came out in Qt 3.2. Qt 3.0 had bidirectional-text support months before Gtk 2.0 came out (with Pango), and Qt 2.0 came out with full-fledged Unicode support, many years before Gtk 2.0 and Pango had that.

> Erm, Qt is already on par with Pango..

Actually not quite. Pango deals with fonts better on Linux, especially a lot of those in the CJK range. It also has a text layouting API that Qt desperately needs, to handle complex line breaking and paragraphing situations. Also, support for input methods in Qt is lacking, as Qt 3 only really supports XIM. So on the i18n front, GTK is still ahead overall, I would have to say.

"Pango deals with fonts better on linux, especially a lot of those in the cjk range."

Perhaps; I don't have much experience with CJK.

"It also has a text layouting api that qt desparately needs, to handle complex line breaking and paragraphing situations."

It already has a complex text layouting API since Qt 2.0. It was mostly rewritten in 3.0 to support bidirectional text.

"Also, support for input methods in qt is lacking, as qt3 only really supports XIM"

Not true; depending on the language, there are other input methods available as Qt plugins. One thing that is lacking is IIIM support, however.

> It already has a complex text layouting API since Qt 2.0. It was mostly rewritten in 3.0 to support bidirectional text.

Is it a public API?
Last time I checked, things like QTextEdit could do it right, but when it came to writing your own widgets that needed layout, there wasn't much of a way to do it. Also, which input method plugins were you referring to? The only thing I know about is the 'immodule for qt' effort, but it hasn't been incorporated into Qt, and as far as I know there hasn't been much mention of it outside of the pages and mailing lists that are directly associated with it.

> Erm, Qt is already on par with Pango.

I can't say, since I haven't used Pango lately. However, concern about these advanced features is misplaced because the current release of Qt does not handle fonts well -- basic font handling. It fails to find a significant number of my installed fonts which even WordPerfect-8.1 finds correctly. Test case: I have several weights and widths of Helvetica (Type1) which are simply not listed. What is worse is that this (to some extent) is a regression. I am certain that I used to use Helvetica Narrow with no problem. Yes, I did send them a bug report, and they advised me that my analysis of the problem was correct (TrueType and Type1 handle this differently) and that they would work on it. With other fonts, Qt does find the font, but it then uses the wrong font. Considering that Qt is now in version 3.x.y, this causes great concern -- in general, problems like this are usually due to a naive design. I hope that at least some of this gets fixed with 3.3.2 -- if I had paid full price for these fonts, I would not be very pleased that I can't use them in KDE.

-- JRT

> Better integration with KDevelop

If this is well done we can have some kind of Delphi/Kylix, but open source and more native!

WOW! I love the new foreach. How is this done? moc?

foreach is part of STL. I think the new QT constructs inherit from the STL containers (or at least implement the container interfaces) so we can use STL algorithms for the QT containers.

> foreach is part of STL.
> I think the new QT constructs inherit from the STL containers (or at least implement the container interfaces) so we can use STL algorithms for the QT containers.

No, you can already use STL algorithms on Qt containers, since Qt 3.0 made them STL compatible:

```cpp
QStringList list;
std::for_each( list.begin(), list.end(), SomeFunctor() );
```

The new foreach is most probably a macro (and it will probably expand to suboptimal code, say "const vs. non-const iterators"). If it was more than that, you wouldn't need the redundant "QString s" declaration in it ("list" already has that information), so you could write:

```cpp
foreach( s, list )
    len += s.length();
```

The problem I see is that "foreach" - as an identifier - will surely clash with a possible future C++ reserved keyword of the same name. But then, the C++ people will more likely standardize the Boost Lambda library than introduce syntactic sugar into the language...

Actually, with our version of foreach you can write both. foreach(s, list) works just as well as foreach(QString s, list). It is a combination of template magic and a macro. The macro is of course called Q_FOREACH, but you can set a flag that "pollutes" the namespace with the foreach keyword if you want to. The code it expands to isn't exactly trivial, especially not on non-gcc compilers like MSVC6. The generated assembly, however, is pretty optimal. While it is slightly bigger than a handwritten loop, there is basically no speed difference.

Would I recommend the construct to someone who knows and loves the STL? No. Would I recommend it to somebody who wants to write readable, fast-enough, and good-enough code and possibly has a Java background? Yes. Do I use it myself? In application code, yes.
"Would I recommend the construct to someone who knows and loves the STL? No. Would I recommend it to somebody who wants to write readable, fast-enough, and good-enough code and possibly has a Java background? Yes. Do I use it myself? In application code, yes."

That would be me then. Sounds great!

I am quite a bit curious: Did Trolltech implement this via the moc preprocessor? If not, I can hardly imagine a way how this can be done with pure C++ and general enough so that user-defined container classes will work with this. I came up so far with the snippet below, but this has its limitations, as can be seen.

```cpp
#define foreach(decl, container) \
    for(std::pair<int, deduce_from_tag<deduce_param1<__typeof__(container)>::type, \
            sizeof(deduce_helper(container))>::type::iterator> \
            iter = std::make_pair(0, container.begin()); \
        iter.second != container.end(); (iter.first = 0, ++iter.second)) \
        for(decl = *iter.second; iter.first != 1; ++iter.first)

struct std_vector_tag { char d; };
struct std_list_tag { char d[2]; };
struct std_string_tag { char d[4]; };
struct std_map_tag { char d[8]; };

template<typename T> void deduce_helper(T) { }
template<typename T> std_vector_tag deduce_helper(std::vector<T>);
template<typename T> std_list_tag deduce_helper(std::list<T>);
std_string_tag deduce_helper(std::string);
template<typename K, typename V> std_map_tag deduce_helper(std::map<K, V>);

template<typename T, int N> struct deduce_from_tag;
template<typename T> struct deduce_from_tag<T, sizeof(std_vector_tag)> { typedef std::vector<T> type; };
template<typename T> struct deduce_from_tag<T, sizeof(std_list_tag)> { typedef std::list<T> type; };
template<typename T> struct deduce_from_tag<T, sizeof(std_string_tag)> { typedef std::string type; };
template<typename T> struct deduce_from_tag<T, sizeof(std_map_tag)> { typedef std::map<typename T::first_type, typename T::second_type> type; };

template<typename C> struct deduce_param1;
template<template<typename> class C, typename T> struct deduce_param1<C<T> > { typedef T type; };
```

Eric Niebler submitted an implementation of a FOREACH macro to boost that was really clever. It has since been yanked. Here is the readme:

BOOST_FOREACH by Eric Niebler

The BOOST_FOREACH macro is a simple, intuitive and typesafe way to iterate over a collection. It is inspired by similar looping constructs in other languages, particularly C#. It is intended for inexperienced programmers who do not know much about iterators, predicates and half-open sequences.
It is also useful for anybody who wants to type fewer characters when writing loops over sequences. Its salient features are:

- It works over STL containers, arrays and null-terminated strings.
- Your loop variable can be a value or a reference. If it is a reference, then any changes you make to it get written through to the underlying sequence.
- It generates near-optimal code. That is, it is almost equivalent to the hand-coded equivalent.
- It behaves just like a for loop. That is, you can break, continue, goto or return out of the loop body.

The syntax is as follows:

```cpp
std::list<int> int_list;
...
BOOST_FOREACH( int &i, int_list )
{
    i += 10;
}
```

Note that the loop variable, i, is a reference to the ints within int_list. You can also declare your loop variable before the BOOST_FOREACH loop:

```cpp
int i;
BOOST_FOREACH( i, int_list )
{
    std::cout << i << std::endl;
    if ( -1 == i )
        break; // this breaks out of the BOOST_FOREACH loop
}
```

Acknowledgements

Many of the ideas for BOOST_FOREACH are described in the Nov 2003 C/C++ Users Journal article by myself and Anson Tsao. That article describes how to implement a FOR_EACH macro that works with .NET collections. BOOST_FOREACH is a reimplementation of the original FOR_EACH macro that works with native types. It also corrects a number of shortcomings of that implementation. You can find the code to BOOST_FOREACH along with a PowerPoint presentation about it at.

Enjoy!
Eric Niebler
Boost Consulting

God, I love the Internet! You mention somebody's name in a thread, and he comes and replies to the thread with complete details. Thanks Eric.

"Driven by consumer electronic devices such as mobile phones and PDAs, desktop applications are moving away from standard widgets and styles towards more customized user interfaces"

While I'm glad Qt 4 will have a better style engine, and that it will better support miniscule low-bit displays on phones and devices, isn't this another word for "skins"?
And aren't the current Qt styles preferable to bitmapped skins? I don't want my KDE desktop looking like an LCD display. Nor do I want it looking like something Nullsoft drew.

> And aren't the current Qt style preferable to bitmapped skins?

I'm not sure if the article actually meant that. Anyway, even right now you can use bitmaps to render styles, and that's what some styles actually do right now. Discover which of those ;)

> I don't want my KDE desktop looking like an LCD display

heh, discover which of the current styles use images ;)) Does any look like an LCD? :)) I know what you mean, but I don't think you should worry; the new Qt's drawing methods and styles will just improve. I wouldn't worry that much :))

this looks very cool! Couldn't agree more. That shiznit is off the hook!

OK. I love the look of the Qt iterators. One big thing I dislike about C++ is the STL. It is soooo ugly. C++ is an inherently ugly language, but at least Trolltech are doing their best to make it usable. Just compare:

```cpp
foreach (QString s, list)
    sum += s.length();
```

to

```cpp
QList<QString>::const_iterator i;
for (i = list.begin(); i != list.end(); ++i)
    sum += (*i).size();
```

!!! Neither is as nice, IMO, as:

```ruby
list.each {|s| sum += s.length }
```

of Ruby, but the Qt way makes things much more bearable in C++.

Once you remove the template (and casting) syntax, C++ isn't ugly. Quite the opposite. It makes a readable and easy-to-write language. But only if you stay away from that awful template (and casting) syntax.

I must disagree. I find C++ to be ugly. It's a bolt-on to C, with a bunch of other bolt-on features. It's too hacky for my tastes. For example, why not make a sane syntax for abstract classes, such as abstract class Foo; Instead, you can choose to set a method equal to zero (hello, what? does this make any sense at all?) or make the constructor private (makes more sense, but still not as clear, to me, as an "abstract" keyword).
I'm not saying that C++ is the world's crappiest language, but it's far, far, far from elegant. ESPECIALLY, as you point out, when you add templates. Languages, such as Ruby, that were designed from day one with OO in mind are much cleaner to work with. Now all we need is a Ruby compiler.. :(

But there are more patterns than OO. I like python more, but I believe it's a question of taste. IMHO a language should be designed to be easily readable. Reading is IMHO much more important than writing. The template mechanism is one of THE reasons why C++ is THE language especially for high-complex numeric problems, since it provides the capability to write complex but very performant programs. There is no substitution for this in any other language. Beside this, I won't miss any of the other features C++ provides. See the Java guys: they try to integrate all these C++ features they'd called "deprecated", like enums and others. IMHO Java has more structural disadvantages than C++. One of the biggest disadvantages is that you have to call new every time you want to instantiate an object.

Name one "high-complex numeric" program in KDE? I agree with the other poster. Templates in C++ are an ugly preprocessor hack. It is a bit like operator overloading; the arguments for why operator overloading in C++ is so good always involve either complex numbers or matrix math...

-- Simon

and yet for more language examples:

Ah, perl...

map { $sum += $#$_ } @list;

Who couldn't understand that ;-)

Perl - $Just @when->$you ${thought} s/yn/tax/ &couldn\'t %get $worse;

Just to avoid language bigotry: this is a real Python line of code I wrote today (and no, I can't find a simpler way to do it, and yes, it does something useful):

return apply(ObjectInstances[method].__class__.__dict__[params[0]], (ObjectInstances[method],) + params[1:])

Notice how it's ugly in a completely different way than perl? ;-) Of course when I saw what I had to write, I was shocked.
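For what it's worth, the same kind of dispatch can be written far more readably in Python with getattr and a direct call. This is a sketch with invented stand-in objects, not the poster's actual classes:

```python
class Greeter:
    def hello(self, name):
        return f"hello, {name}"

# Stand-in for the poster's ObjectInstances registry (hypothetical names)
instances = {"greeter": Greeter()}

def dispatch(instance_key, method_name, *args):
    """Look the method up on the instance and call it directly,
    instead of digging through __class__.__dict__ and apply()."""
    return getattr(instances[instance_key], method_name)(*args)

print(dispatch("greeter", "hello", "world"))  # hello, world
```

Bound-method lookup via getattr makes apply() and the explicit self argument unnecessary, which is most of what made the original one-liner ugly.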
While a perl programmer seeing this level of ugliness probably doesn't even notice (just to bring back language bigotry ;-)

It's a shame that everyone pontificating about perl seems to know so little about it:

map { $sum += $#$_ } @list;

Try this instead, and show me how this is more obfuscated than any other language:

$sum += length for @list;

Oh of course - the $ to indicate a scalar, and @ to indicate an array - that's not how C/C++/python does it, so it must be rubbish. Hey, is so much cooler than perl. There are plenty of things wrong with perl, as there are with all other languages - use each for its strengths (and learn how to code idiomatically). Rant over!

...and even Ruby isn't as elegant as the equivalent Perl:

$sum += length($_) foreach @list

but that's just me :)

Nothing beats python :-)

```python
sum = 0
for string in li:
    sum = sum + len(string)
```

I'm 15 years old and one of these Delphi/Kylix/FreePascal guys :-)

```pascal
sum := 0;
for I := 1 to MyStringList.Count do
  Sum := Sum + Length(MyStringList[I]);
```

But now I'm also learning C#.

Andy

But this is technically not the same as the other examples. It would be trivial to implement this in C++ via a macro. The reason why your example is different from the others is that it will only work with containers which have a subscript operator. It will not work with containers which can only be forward traversed.

With Python, you can also do it in a more functional programming way:

reduce(lambda x, y: x + y, [len(s) for s in li])

The list comprehension '[len(s) for s in li]' returns the list of the lengths of the strings in li, and the 'reduce' operator computes the sum: [1,2,3,4] -> (((1+2)+3)+4)

ah, but the most Pythonic way would be:

sum([len(l) for l in list])

which is a) the nicest notation of everything I've seen here, and b) is the thing that pops right into your head when thinking about the problem.. right?
Yeah, but only if you know sum() accepts lists :-) Is: sum([len(l) for l in list]) Really easier to read than the *idiomatic* perl equivalent: There are a lot of muppets posting crap perl code on this site ATM - perhaps they should stick with a language they actually know. BTW I've got no problem with Python, Ruby, Tcl, Perl, ... - folks should use whatever they're happy with! >Is: >sum([len(l) for l in list]) >Really easier to read than the *idiomatic* perl equivalent: >$sum += length for @list; not if you are already a perl programmer. yeah, I do not really like the way it reads. what are these extra characters, '$', '@' and ';'? I think I would prefer 'sum += length for list', that really is easier to read. you know that working memory is limited, and having all sorts of redundant characters on a line (especially one longer than this) makes it harder to read. then, the '+=' turns me off a bit. it just seems awkward, but also for beginners and for e.g. presenting code to a math professor it's not very readable. in Python you tend to 'transform' datastructures, so you don't need awkward constructions like this. I admit the Python expression isn't insanely obvious at first, but at least once you get it it reads easily, and it's not awkward at all. it just says what you want to do, almost literally: 'I want the sum of the lengths of the things in this list.' what does the perl version say? 'I want to += the length for this list, and it's not clear what $sum is, initially.' and how flexible is this perl construct? can I add for instance the numbers of two lists? in Python this would be [a+b for (a,b) in zip(l1,l2)].. >what are these extra characters, '$', '@', ';' But you have to know a little of any language before using it properly don't you: $= scalar, @ = array, % = hash etc. Most of the other punctuation marks rarely have to be used or have English equivalents. 
>but at least once you get it it reads easily, and it's not awkward at all That's the whole point isn't it - once you get most (though not all :) languages they read easily! It's sad that much of the code posted by people to show how bad perl is often isn't idiomatic perl - it wouldn't take me long to write some hideous python, ruby, tcl, Java, C# etc code because I'd only have a relatively superficial knowledge of them. > 'I want the sum of the lengths of the things in this list.' what does the perl version say? You're being silly now - it says "sum the length of each item in this list". There's really not much difference here - they're both clearer than many other languages! > it's not clear what $sum is, initially Eh? >in Python this would be [a+b for (a,b) in zip(l1,l2)].. Yes, there's no doubt that's a nicer way than a perl equivalent - that's why zip is being introduced in p6. There's some great stuff in python, and p6 will certainly show some influence from python, but the fact remains I find it amazing that people are quick to judge other languages when they clearly have little knowledge of them. I'm not implying you, rather the original posters of the perl examples. Double posting - ah well never mind! As I didnt provide code: [a+b for (a,b) in zip(l1,l2)]. *Could* go to: $sum = $a[$_] + $b[$_] for 0 .. $#a; Not as nice - roll on p6! I respectfully disagree with your statement, that most languages have the same readability, once you are used to using them. by readability I mean a lot of things, like: is a language orthogonal, so that different programmers can quickly understand each others code? is it compact yet still clear, so that you can see what is happening from a high level? does it do away with low-level details irrelevant to the essence of what you are doing? can you re-read your code again after having worked on something else for six months? 
is the code accessible to beginner (or new at the project) programmers, so they can easily add or change things? I think these, among other things of course, all contribute to the inherent 'readability' of a language. not to make this into a python-perl war, looking at it this way, I think it is safe to say that Java is more readable than C. I also think Python is more readable than Java, because you can do a lot more in much less lines, while still being perfectly clear as to what is happening. I think the addition of all sorts of more or less redundant characters in Perl (I forgot ';', '{' and '}' :-)) makes it a little less inherently readable. but I admit I was probably overreacting a little.. Perl and Python are both great languages, and they are a lot more alike than that they differ.. Code like that makes me glad I chose to migrate from Perl to Ruby, rather than to Python. I don't know Ruby, but here is the corresponding code fragment someone gave earlier: can you tell me why you find this more readable than, say sum([len(s) for s in list]) also, what is the value of sum initially? and, can you use this list.each.. construction as an expression itself? "also, what is the value of sum initially? and, can you use this list.each.. construction as an expression itself?" You need to assign a 0 to sum. Array.each returns the array so you can nest calls - all ruby expressions return a value even 'if' statements irb(main):008:0> list = ["a", "bb", "ccc"] => ["a", "bb", "ccc"] irb(main):009:0> sum = 0 => 0 irb(main):010:0> list.each {|s| sum += s.length } => ["a", "bb", "ccc"] irb(main):011:0> puts "sum: #{sum}" sum: 6 => nil Although doesn't look any more readable than the python to me - they both look just fine! But ruby blocks and iterators are certainly very nice - I don't know if python works the same. Does it allow you to yield multiple values from a block? 
In ruby the little expression above is actually a sort of co-routine where the block and the Array yield control to each other for each iteration. so printing out the sum takes three lines..? yes, Python also has multiple return values (I do not understand how a ruby block works?), a general iteration protocol, and yield statements that allow functions to temporarily return and later resume what they were doing.. I don't thnk there are big semantic differences between perl, ruby and python. it's mainly the syntax.. I think yield is a bit easier conceptually to understand if it's viewed as an invocation of a closure and not a co-routine transfer, since IIRC, the block can just return, and not transfer back, no? (Although the co-routine definition certainly matches the history of the yield keyword), particularly considering the roughly interchangeability with Proc/Lambda A block can either return or yield. So here is the same thing in ruby 'functional style'. You use the collect method to map a function onto each element of the Array: irb(main):018:0> list = ["a", "bb", "ccc"] => ["a", "bb", "ccc"] irb(main):019:0> sum = 0 => 0 irb(main):020:0> list.collect! {|s| sum += s.length } => [1, 3, 6] irb(main):021:0> puts sum 6 All the other examples have been in this functional style. The iterator/yield approach is more powerful especially once you start to nest them. It's compositional and perfectly orthogonal; it involves two concepts (blocks and methods) which are found in every single program all over the place. It's just passing a code fragment to the .each method of the list, to call on every element. It does not need a special syntax to construct a new list of values by iterating over an another list, which I am guessing the Python example is doing. 
Also, as a matter of my own personal preference, the Ruby example follows a hierarchical structure of control flow, where outer control statements are on the outside in the code (yes, I never use the postfix forms of if or unless in Ruby code).
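For readers comparing the two models in the thread above: Python's generators cover much of the same ground as Ruby's blocks and yield, though inverted, since the iterator suspends and the caller drives the loop. A small illustrative sketch:

```python
def each_length(strings):
    """Generator take on Ruby's list.each {|s| ... }:
    control passes back to the caller once per element."""
    for s in strings:
        yield len(s)

total = 0
for n in each_length(["a", "bb", "ccc"]):
    total += n
print(total)  # 6
```

Each next() on the generator resumes the function body where it left off, which is the co-routine-like behavior the thread attributes to Ruby's block/yield pairing.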
http://dot.kde.org/comment/13829
header for serial over USB modules

#include "mcu_periph/usb_serial.h"

Definition in file usb_serial_stm32.h.

Call VCOM_poll() from the module event function. Definition at line 126 of file usb_serial_stm32_example1.c. References cmd_avail, cmd_execute(), ReadUsbBuffer(), usb_serial_parse_packet(), VCOM_check_available(), VCOM_event(), and VCOM_getchar().

Init module; call VCOM_init() from here. Definition at line 44 of file usb_serial_stm32_example1.c. References cmd_avail, cmd_idx, run, and VCOM_init().

Periodic function in case you needed to send data periodically, like telemetry. Note that the data are sent once the buffer is full, not immediately. Definition at line 55 of file usb_serial_stm32_example2.c. References big_buffer, run, and VCOM_putchar().

Parse data from buffer. Note that the function receives int, not char, because we want to be able to catch -1 in case no more data were available. Definition at line 58 of file usb_serial_stm32_example1.c. References cmd_avail, cmd_buf, cmd_idx, prompt, run, VCOM_putchar(), and VCOM_send_message(). Referenced by event_usb_serial(), and ReadUsbBuffer().
http://docs.paparazziuav.org/latest/usb__serial__stm32_8h.html
Finally

Posted Thursday, 29 September, 2011 - 18:56 by ChaozCoder

I just found out about this great library (OpenTK of course). I wanted to create a little 3D framework as a basis for some small game ideas I have. I started out wanting to use Managed DirectX, then found out that it is no longer supported for C#. I tried the CSGL wrapper, coded a little bit with it, then found out that CSGL was way outdated and has, afaik, no support for VertexBuffers. Somewhere in the middle of all this back and forth I thought about using XNA, then realized that it is very restricted; I don't like that Microsoft does not support a C# wrapper for DirectX. Finally I recoded my CSGL functions to OpenTK (very easy, btw.). Now I have a little engine with a mesh class, vertex-normal calculation, a little float-height-map mesher, and that's it. Next will be working on things like classes for camera and materials, including textures (a TextureManager class).

Some things I wanna do using OpenTK:

- Mesh-generation algorithms
- Procedural mesh generation
- Procedural texture generation
- CSG operations with meshes
- and some more...

For now I'm gonna browse on opentk.com a little and maybe find some more interesting things. Finally, I wanna thank everyone who made this for free!

Re: Finally

Welcome to OpenTK, it's good to know that there's another OpenTK 3D framework in the works. I am watching OpenTK projects out there (as far as I can), so I will be very interested to see yours when it is released. Cheers! :D

Re: Finally

If you are interested in mesh stuff, you could have a look at RenderStack and see if there is something you like. Peek inside the Geometry namespace. I've got Catmull-Clark and Sqrt(3) subdivision algorithms implemented there, for example. I have not updated the latest version since June; I've added a few things since then, fixed a few bugs in some of the shapes, and added noise, as you can see from the preview screenshot:

Re: Finally

Thank you, I'm gonna take a look at it.
http://www.opentk.com/node/2717
Optionally calibrate emojis

You can't pass an image of the emoji to the Face API to get its emotion because it's not a human face. So for each emoji, you need an emotive coordinate. We have used a very simple method of generating emotive coordinates in the module. This method:

- Takes one real face with an expression approximating that of the emoji. We call this the proxy image.
- Runs it through the Azure Face API
- Uses the result of that operation as the emotive coordinates for the emoji

In this unit, we demonstrate how you can substitute your own proxy images and re-calibrate the emoji emotive coordinates. As an extension, you could also devise a more rigorous method of performing this mapping, by training with many sets of images, for example.

Proxy Images

You can see the list of proxy images for each emoji in the shared/proxy-images folder in the sample code. These proxy images were passed to the Face API to generate emotive points, which were then associated with an emoji image. You can see the results of the supplied set of proxy images in the MOJIS array in the shared/models/mojis.ts file in the source code.

Create your own proxy images for emojis

Note: You can use your own images if you prefer. It can be fun to mojify yourself or your teammates at work in Slack!

If you plan to use your own images, then take a picture of yourself mimicking each emoji in the shared/proxy-images folder and replace the original set of images.

Create an Azure function to calibrate your proxy images

Just as you did for the MojifyImage and the RespondToSlackCommand functions, create another function called Calibrate.
Replace the index.js file with a file called index.ts, and copy the following code into this file:

```typescript
import { EmotivePoint } from "../shared/models/emotivePoint";
import { Face } from "../shared/models/faces";
import * as FaceApi from "../shared/faceapi";
import * as emojiLookup from "emoji-dictionary";

const EMOJIS_TO_TRAIN = [
  "☺️", "🤓", "😃", "😆", "😉", "😍", "😎", "😐", "😕", "😖",
  "😘", "😜", "😝", "😠", "😧", "😩", "😬", "😭", "😱", "😳", "😴"
];

async function getCalibrationArrayString(context) {
  let str = "[";
  for (let emoji of EMOJIS_TO_TRAIN) {
    context.log(`Processing ${emoji}`);
    // Given an emoji like 😴 returns the word version, like 'sleepy'
    let emojiName = emojiLookup.getName(emoji);
    let emotion = await FaceApi.getEmotionFromLocalProxyImage(context, emojiName);
    let point = new EmotivePoint(emotion);
    let face = new Face(point, null);
    str += `{
      emotiveValues: new EmotivePoint({
        anger: ${emotion.anger},
        contempt: ${emotion.contempt},
        disgust: ${emotion.disgust},
        fear: ${emotion.fear},
        happiness: ${emotion.happiness},
        neutral: ${emotion.neutral},
        sadness: ${emotion.sadness},
        surprise: ${emotion.surprise}
      }),
      emojiIcon: "${emoji}"
    },`;
  }
  str += "]";
  return str;
}

export async function index(context, req) {
  context.log(`Calibrate HTTP trigger`);
  const array = await getCalibrationArrayString(context);
  const body = { MOJIS: array };
  context.res = {
    status: 200,
    headers: { "Content-Type": "application/json" },
    body: body
  };
}
```

Try it out

Now comes the fun part! We are going to run each of the images in the shared/proxy-images folder through the Face API to calculate an emotional point for that emoji in emotional space.

Make sure the function app is running by starting it from the debug menu or running it from the terminal.

```shell
func host start
```

Run the new Calibrate function by connecting to:.

Note: The Face API restricts the rate at which it can be called. If the rate limit is exceeded, the code will wait 30 seconds and try again.
Since the Calibrate function makes calls in quick succession, you may hit this rate limit and see that the Calibrate function takes a minute or two to execute.

The output of the command is displayed in your browser window as a JSON array of emotive points. You can try out different proxy images and see the change in the emotive points. You can also copy this array back into shared/models/mojis.ts and redeploy your function app, so that you are using your own proxy images in your Slack /mojify command.
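The retry behaviour described in the note can be sketched client-side as a small wrapper. This is an illustrative pattern rather than code from the module; `RateLimitError` and the 30-second wait are stand-ins for however your HTTP client surfaces a 429 response:

```python
import time

class RateLimitError(Exception):
    """Hypothetical error raised when the Face API returns a 429 response."""

def call_with_retry(fn, retries=3, wait_seconds=30, sleep=time.sleep):
    """Call fn(); on a rate-limit error, wait and try again, up to `retries` times."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise          # give up after the last allowed attempt
            sleep(wait_seconds)

# Example with a fake API call that is rate limited twice, then succeeds.
calls = {"n": 0}
def fake_face_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return {"happiness": 0.9}

# sleep is injected so the example runs instantly instead of waiting 30 s.
result = call_with_retry(fake_face_api, sleep=lambda s: None)
print(result)  # → {'happiness': 0.9}
```

Injecting the sleep function keeps the wrapper testable; in real use you would leave the default `time.sleep`.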
https://docs.microsoft.com/en-us/learn/modules/replace-faces-with-emojis-matching-emotion/9-optionally-calibrate-emojis
QtonPi - Screen Dimensions / Reconnect HDMI

Hello, I downloaded the latest QtonPi image and installed it. Everything works fine and the animations run great with only 30% CPU usage. There are only two problems I need assistance with:

1. Is it possible to detect the screen height and width (the resolution of the screen)? I have to set the width/height of my root element to fit the screen, otherwise it will be positioned bottom left and the rest of the screen is black. In main.cpp I use "view.showFullScreen();", but I still have to set the dimensions.

2. When I unplug the HDMI cable or turn off the screen and turn it on again, there is no signal. Is it possible to (re-)detect the HDMI screen when plugging in / turning on the screen? (Probably this is more a Linux-based problem?)

Thanks!

- sidewinder

[quote author="neuni" date="1347451178"]... In main.cpp I use "view.showFullScreen();", but I still have to set the dimensions. [/quote]

Have you tried setting "resizeMode" to "SizeRootObjectToView"? My main.cpp:

@#include <QtGui>
#include <QtQuick>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQuickView view;
    view.setSource(QUrl(QCoreApplication::applicationDirPath() + "/qml/main.qml"));
    view.setResizeMode(QQuickView::SizeRootObjectToView);
    view.showFullScreen();
    return app.exec();
}
@

Puristic... it's more JS and QML.

Did you ever get this to work? I'm having the same issue.

No, I never got this to work. I used the bakepi-script to build my own raspbian/qt5 image. Now everything works fine. The latest script is very easy to use:

There are also many messages regarding the script at this mailing list:
https://forum.qt.io/topic/19782/qtonpi-screen-dimensions-reconnect-hdmi
Feature #4946

Building libruby-static.a with option nodynamic and --with-static-link-ext doesn't disable dlopen

Description

I uncomment option nodynamic in ext/Setup and run ./configure --with-static-link-ext, then make libruby-static.a. I then compile this:

/* trial.c */
#include "ruby.h"
int main(){
    ruby_init();
    return 0;
}

like so:

gcc -c trial.c -o trial.o -Iruby-1.9.2-p180/include -Iruby-1.9.2-p180/.ext/include/i686-linux
gcc -static trial.o -Lruby-1.9.2-p180 -lruby-static -lpthread -lrt -ldl -lcrypt -lm -o trial

And I see this warning:

ruby-1.9.2-p180/libruby-static.a(dln.o): In function `dln_load':
/home/alex/projects/scratch/ruby/ruby-1.9.2-p180/dln.c:1276: warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

I specifically do not want any run-time code loading capability, but I can see no documented way to disable dlopen; is there any way to silence this warning?

History

#1 Updated by Nobuyoshi Nakada almost 4 years ago
- Tracker changed from Bug to Feature
There is no way to disable dynamic loading now. --with-static-linked-ext just lets all bundled extension libraries be linked statically.

#2 Updated by Yui NARUSE almost 4 years ago
- Target version changed from 1.9.2 to 2.0.0

#3 Updated by Yusuke Endoh about 3 years ago
- Status changed from Open to Assigned
- Assignee set to Nobuyoshi Nakada
- Priority changed from Normal to Low

#4 Updated by Nobuyoshi Nakada over 2 years ago
- Status changed from Assigned to Feedback
Can't you disable it with the following?
ac_cv_func_dlopen=no configure

#5 Updated by Nobuyoshi Nakada over 2 years ago
- Status changed from Feedback to Closed
- % Done changed from 0 to 100
https://bugs.ruby-lang.org/issues/4946
This tutorial is meant to be a startup tool for students unfamiliar with Microsoft Visual C++. Its purpose is to be a basic introduction to creating files and using the debugger; additional information on the more in-depth features of the Visual C++ program can be found in the help files.

The first thing to do when beginning to use Visual C++ is to create a workspace. Go to the File menu and click on New. This will bring up the following window. If it is not already there, click on the Workspace tab. Type in a name for your workspace and click OK. I usually make one workspace for each class and then keep each assignment as a project in that workspace, but you can use just one workspace if you want to.

Next, you need to create a new project. For this, you also want to go to the New menu, but this time click on the Projects tab. Unless you are creating a program which will use graphics, you will probably want to create a Win32 Console Application. Give the project a name and click "Add to current workspace" if it is not already selected. Then click OK. After that a window will come up asking what kind of Console Application you want. Make sure "empty workspace" is selected and then click Finish. Click OK in the next window and you have created a new project. The project and its files will appear in the workspace window (the one in the upper left corner). For example, I created a project called demo in my workspace and now the screen looks like this:

Next, we will write some code for the new project. First, we will create a header file for a new class called Complex. Bring up the File menu and click on New. Then click on the tab that says Files if it is not already selected. Choose C/C++ Header File and then type the name Complex into the name field. Also, make sure that "add to project" is checked. Now, copy the following lines into the text field of the screen.
class Complex
{
private:
    double real;
    double imaginary;
public:
    Complex();
    Complex(double,double);
    double getReal();
    double getImaginary();
    void setReal(double);
    void setImaginary(double);
};

The screen should look like this. Save the file either by bringing up the File menu and clicking on Save, or by clicking the icon of a disk right below the Edit menu.

Now create a new C++ source file and copy the following into the text box. When you cut and paste from this page the alignment of the lines will not be correct, but this will not affect the program's functionality.

#include "Complex.h"

Complex::Complex()
{
    real=0;
    imaginary=0;
}

Complex::Complex(double r, double i)
{
    real=r;
    imaginary=i;
}

double Complex::getImaginary()
{
    return imaginary;
}

double Complex::getReal()
{
    return real;
}

void Complex::setReal(double r)
{
    real=r;
}

void Complex::setImaginary(int i)
{
    imaginary=i;
}

Save this new file. Bring up the build menu and click on compile. As you can see there is an error. Errors that occur during the compiling of code are the first type of errors that you can fix with this program. This one is relatively simple. Double click on the error description and you will be taken to where the error is. Replace the highlighted "int" with "double" and try to compile again. This time it should work.

Now, create a new C++ source file and call it main. Then type, rather than just copying, the following lines:

#include <iostream.h>
#include "Complex.h"

int main()
{
    Complex a,b;
    a.setReal(25);

Notice that when you type "a." a list of the functions and variables of the Complex class will drop down. This is a very handy feature. (If this list doesn't appear, go to the Tools menu and click on Options. Under the Editor tab there are boxes which will turn the auto-complete features on and off.) Now write the rest of these lines.
    a.setImaginary(2.5);
    cout << "a = " << a.getReal() << " + " << a.getImaginary() << "i" << endl;
    return 0;
}

Save this file and then go to the build menu and select build. This will compile the necessary files and link them together into an executable file. (Note: if you ever want to exclude one of the files in the project from the build, right click on the file in the workspace window and select settings. This will bring up the project settings window; click on the general tab and then select "Exclude file from build".)

Now that we have an executable file, we can explore one of the most useful features, the debugger. First, run the program normally by selecting execute from the build menu. A window should pop up that looks like this. Press any key to make that window disappear.

Now, make sure main.cpp is in the text window and right click next to a.setReal(25). From this menu select Insert/Remove Breakpoint. There should now be a red dot next to this line. This sets a spot where the program execution should pause while debugging. Go to the build menu, go under start debug and click on go. This should bring up the console window and also change the main window into debugging mode. The screen should now look something like this.

There are two new windows for the debugger. The window on the bottom left will show variables that are associated with the current statement. The window on the bottom right contains variables that you want to keep an eye on. Go up to the text window and right click on b. Click on QuickWatch and select AddWatch. This variable will now always be accessible in the bottom right window.

Now, we will begin stepping through the program. First, select Step Over (either from the Debug menu, from the icon below the menu bar, or F10). This will move to the next line regardless of whether there was a function in the line it was on. If you click on the + next to a in the bottom left window, you will see that the value of real has changed to 25.
Now, rather than using Step Over, select Step Into for the next line. This will take you to the function body of the setImaginary function. You will see the value of i in the lower left window, and if you step to the next line imaginary will also show up. Leave this function by selecting Step Out. Finish the program execution either by selecting Go or Stop Debugging from the Debug menu. This concludes the Visual C++ tutorial.
http://www-ee.eng.buffalo.edu/faculty/paololiu/edtech/roaldi/tutorials/c++.htm
Pippy

From OLPC

Description & Goals

Summary

Teaches Python programming by providing access to Python code samples and a fully interactive Python interpreter. The user can type and execute simple Python expressions. For example, it would be possible for a user to write Python statements to calculate expressions, play sounds, or make simple text animation. The initial build ships with about twenty short Python examples covering various aspects of the language.

Goals
- To introduce children to computer programming
- To give children the possibility to collaborate and share while doing computer programming

Collaboration

Wiki to contribute

Collaboration on this Wiki can help the keeper of Pippy by letting the community offer code snips and code lessons to draw from. In a normal Python program one would not have comments about the language. That rule would be relaxed here to help the student or make a point.

Pippy

Pippy adventures can be shared with others in the neighborhood via the Sugar user interface. One way to share is to use the "Journal" after quitting a Pippy activity. Resume that activity with "Write" instead of Pippy. Then use the share Activity of Write to share the plain text with others. There may be other ways too.

Examples

Please add examples here, or modify the existing ones!

Math

Apples
- Author: Madeleine Ball
- About: Adding and dividing
- Shows: Print statements and basic math
- XOversion: Bundled in 656

print "Let's do math!"
print "On Monday I picked 22 apples. On Tuesday I picked 12."
print "Now I have: ", 22 + 12
print "My brother says he picked twice as many apples last week."
print "This means he picked: ", (22 + 12) * 2
print "I have 3 friends to whom I would like to give apples."
print "One third of my apples is about: ", (22 + 12) // 3
print "Or, more exactly: ", (22.0 + 12.0) / 3.0

Pascal's triangle
- Author: Madeleine Ball
- About: Character graphic of Pascal's triangle
- Shows: loops, vectors
- XOversion: Bundled in 656

# Pascal's triangle
lines = 8
vector = [1]
for i in range(0,lines):
    vector.insert(0,0)
    vector.append(0)
for i in range(0,lines):
    newvector = vector[:]
    for j in range(0,len(vector)-1):
        if newvector[j] == 0:
            print "  ",
        else:
            print "%2d" % newvector[j],
        newvector[j] = vector[j-1] + vector[j+1]
    print
    vector = newvector[:]

Sierpinski triangle
- Author: Madeleine Ball
- About: Character graphics of a Sierpinski triangle
- Shows: Modifying Pascal's triangle program, loops, vectors
- XOversion: Bundled in 656

size = 5
modulus = 2
lines = modulus**size
vector = [1]
for i in range(0,lines):
    vector.insert(0,0)
    vector.append(0)
for i in range(0,lines):
    newvector = vector[:]
    for j in range(0,len(vector)-1):
        if newvector[j] == 0:
            print " ",
        else:
            remainder = newvector[j] % modulus
            if remainder == 0:
                print "O",
            else:
                print ".",
        newvector[j] = vector[j-1] + vector[j+1]
    print
    vector = newvector[:]

Times1
- Author: Chris Ball
- About: The 4 times table
- Shows: Loops, the range statement
- XOversion: Bundled in 656

for i in range(1,13):
    print i, "x 4 =", (i*4)

Times2
- Author: Chris Ball
- About: Print any times table
- Shows: Loops, range, and input
- XOversion: Bundled in 656

number = input("Which times table? ")
for i in range(1,13):
    print i, "x", number, "=", i*number

Fibonacci Series
- Author: Rafael Ortiz
- About: The Fibonacci Number Series
- Shows: Using tuple assignments. While loop.
- XOversion: Bundled in 656

a, b = 0, 1
while b < 1001:
    print b,
    a, b = b, a+b

Stern-Brocot Tree
- Author: User:Walter modifying code from Pythonic Mathematics
- About:
- Shows: Using class definitions, functions, yield
- XOversion:

# The Stern-Brocot tree contains every non-negative rational number
# expressed in its lowest terms
# Reference: Brian Hayes, Computing Science On the Teeth of Wheels,
# American Scientist, July-August, Volume 88, No. 4,

# define a class for rational numbers:
# numerator / denominator
class Rational:
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __repr__(self):
        return str(self.num) + '/' + str(self.den)

# define the mediant function
# mediant (a/b, c/d) = (a+c)/(b+d)
def mediant(a, b):
    return Rational(a.num + b.num, a.den + b.den)

# expand a row by calculating mediants between each pair of elements
# e.g., [0/1, 1/0] -> [0/1, 1/1, 1/0]
def expand(row):
    x = [row[0]]
    for i in range(1, len(row)):
        x += [mediant(row[i-1], row[i])] + [row[i]]
    return x

# initialize the first row with 0/1, 1/0
# each pass will yield an expanded row
def rationals():
    row = [Rational(0, 1), Rational(1, 0)]
    while True:
        yield row
        row = expand(row)

# iterate through successive rows
r = rationals()
i = 1
while i<5:
    print r.next()
    i+=1

Pythagoras
- Author: Rafael Ortiz
- About: Uses the Pythagorean Theorem to compute the longest edge of a triangle
- Shows: Import statement, sqrt (square root) function.
import math
from math import sqrt

print "This is the Pythagoras Theorem"
a=float(raw_input("Type a ="))
b=float(raw_input("Type b ="))
c=sqrt((a*a)+(b*b))
print "c =",c

Factorize
- Author: Reinier Heeres
- About: Factoring numbers with trial divisions
- Shows: Appending to arrays, import, sys.stdout

import math
import sys

orignum = input("Enter a number to factorize ")
factors = []
num = orignum
i = 2
while i <= math.sqrt(num):
    if num % i == 0:
        factors.append(i)
        num /= i
        i = 2
    elif i == 2:
        i += 1
    else:
        i += 2
factors.append(num)

if len(factors) == 1:
    print "%d is prime" % orignum
else:
    sys.stdout.write("%d is %d" % (orignum, factors[0]))
    for fac in factors[1:]:
        sys.stdout.write(" * %d" % fac)
    print

Zeros of a second degree polynomial
- Author: Pilar Saenz
- About: Zeros of a second grade polynomial, e.g., 3x^2+6x+3.
- Shows: Converting strings to float, import, sqrt (square root)

import math
from math import sqrt

print "These are the zeros of a second grade polynomial"
a=float(raw_input("Type a ="))
b=float(raw_input("Type b ="))
c=float(raw_input("Type c ="))
aux=b*b-4*a*c
if aux>0:
    x1=(-b+sqrt(aux))/(2*a)
    x2=(-b-sqrt(aux))/(2*a)
    print "x1= " , x1 ,", x2=" ,x2
elif aux==0:
    print "x= " , -b/(2*a)
else:
    # note: the second root must use -sqrt; the original page had a copy-paste
    # error here that made x1 and x2 identical
    x1=(-b+sqrt(-aux)*1j)/(2*a)
    x2=(-b-sqrt(-aux)*1j)/(2*a)
    print "x1= " , x1 , ", x2" , x2

Factorial of a number
- Author: Pilar Saenz
- About: Prints a factorial
- Shows: Defining and calling a function. Casting to int.

def factorial(a):
    fac=a
    for i in range(1,a):
        fac=fac*i
    print a,"!=",fac

a=int(raw_input("Type a="))
factorial(a)

Greatest common divisor
- Author: Pilar Saenz
- About: The greatest common divisor of two numbers
- Shows: The % (modulo) operator

n= input("Enter a number ")
m= input("Enter another number ")
r=n%m
if r!=0:
    while (r!=0):
        n=m
        m=r
        r=n%m
print "The greatest common divisor is ", m

Windchill Calculator
- Author: Tyler Conlee
- About: Calculates the windchill given a temperature and a wind speed.
For my high school senior project I learned about Linux and the OLPC Foundation. With the help of my mentor, Bill C. Smith, I learned simple Python programming. This program is one of the items I turned in for the project. By placing it in the /usr/share/activities/Pippy.activity/data/math directory it can appear under math examples in Pippy.

- Shows: The sys.exit function to stop a program, the round function, and Celsius to Fahrenheit conversion in a practical math example.

### Windchill calculator for XO
import sys

Tscale = raw_input ("Enter F for Fahrenheit, or C for Celsius: ")
if Tscale == 'F' or Tscale == 'f':
    print "Temperature is in Fahrenheit"
elif Tscale == 'C' or Tscale == 'c':
    print "Temperature is in Celsius"
else:
    print "Invalid temperature scale"
    ## sys.exit is used to stop the program.
    sys.exit(1)

T = input("Enter a temperature: ")
if Tscale == 'C' or Tscale == 'c':
    T = (9.0/5.0) * T + 32.0
if T > 50.0:
    print "Temperature must be <= 50F/10C"
    sys.exit(1)

WSPD = input ("Enter the wind speed (mph):")
if WSPD < 3.0:
    print "Wind speed must be >= 3 mph"
    sys.exit(1)

WCL = 35.74 + 0.6215 * T - 35.75 * (WSPD**0.16) + 0.4275 * T * (WSPD**0.16)
WCLC = (WCL - 32.0) * (5.0/9.0)
# Round is used to round the answers. In this case to two decimal places.
print "Windchill =", round (WCL, 2), "Fahrenheit"
print "Windchill =", round (WCLC, 2), "Celsius"

Compute pi!
- Author: Travis Hall
- Extended By: Tom Mitchell

Simple pi computation demo; it does 1000 loops. To do more loops, change the breakpoint variable to a higher number. (Travis Hall)
I have added a couple of lines to explore various ways that floating point numbers might be printed. (Tom Mitchell)

a,b = 1.0,3.0
breakpoint = 1000
pi = 0.0
for loop in range(1, breakpoint):
    pi += (4.0/a) - (4.0/b)
    a += 4
    b += 4
print pi

# Now that the result has been computed we can explore printing the result.
print "There are multiple ways to print numbers here is a quick sample."
print "Just print it :", pi
print "Using repr() :", repr(pi)
print "Our approximation: %3.20f" % pi
print "\nPi is a very famous number...."

# Use python's math module it is faster and close enough for most computations.
import math
print "Python's Math library computes"
print "a better value pi: %3.39f" % math.pi;  # it uses...(math.atan(1.0) * 4.0)

# when running computations based on "pi" it is good to begin with the best value you can get.
# from the gnu 'C' compiler /usr/include/math.h
print "For reference a more exact 32 bit floating point value for pi is."
print "Known value of pi: 3.14159265358979323846"

Python Function
- Author: Chris Ball
- XOversion: Bundled in 656

def square(x):
    print x * x

square(3)
square(4)

If
- Author: Chris Ball
- XOversion: Bundled in 656

number = input("Enter a number: ")
if number > 5:
    print "Greater than 5"
elif number < 5:
    print "Less than 5"
else:
    print "Number is 5!"

Count backwards with Recursion
- Author: Mel Chua with comments by Tom Mitchell
- XOversion: Bundled in 656

In a UCSD Pascal manual the classic glossary description for "recursion" was "see recursion". This sort of inside joke is an example of recursion, not an informative description. An external link to Wikipedia is more informative. Note how "countbackwards" is defined to use itself. Recursion can be a very clear and precise way to express some things in a program. It does have some disadvantages. Each time a recursive function calls itself the computer must push a return call onto the stack. If the recursion is very deep, stack exhaustion is possible.

# Note this assumes you understand functions and if-else.
def countbackwards(number):
    print "I have the number", number
    if number > 0:
        print "Calling countbackwards again!"
        countbackwards(number-1)
    else:
        print "I am done counting"

number = input("Enter a number: ")
countbackwards(number)

While
- Author: Pilar Saenz

n=input("enter a number")
while n>0:
    print n, " ",
    n-=1
print "Surprise!\n"

Title Case Capitalisation
- Author: Alan Davies

# This is an example of a list comprehension
oldtitle = "this TITLE iS NOW coRRecTly CAPItalised"
oldwords = oldtitle.split()
newwords = [word.capitalize() for word in oldwords]
newtitle = " ".join(newwords)
print "Before:", oldtitle
print "After:", newtitle

Names Drawn From a Hat
- Author: Alan Davies

# Simple and possibly useful program for
# drawing names in a random order from a hat
import random

names = []
name = raw_input("Enter the first name to go in the hat:")
while name != "":
    names.append(name)
    name = raw_input("Enter another name, leave blank if you have finished:")
random.shuffle(names)
print "The random order from the hat is:"
for x,name in enumerate(names):
    print x+1, name

String

Hello1
- Author: Chris Ball

print "Hello everyone!"

Hello2
- Author: Chris Ball
- XOversion: Bundled in 656

name = raw_input("Type your name here: ")
print "Hello " + name + "!"
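Hello2 builds its greeting by gluing strings together with +. String formatting with the % operator, used heavily in the later printing examples, is an alternative way to get the same result. The following sketch is not one of the bundled Pippy examples; it uses a fixed name instead of raw_input so it runs non-interactively, and the print() form so it also runs on modern Python:

```python
name = "Pippy"

# + concatenation, as in Hello2
print("Hello " + name + "!")

# % formatting builds the same string from a template
greeting = "Hello %s!" % name
print(greeting)  # → Hello Pippy!

# the template can mix several values and types
print("%s is %d years old" % (name, 2))
```

Templates are often easier to read than long chains of +, especially once numbers are involved, since % converts them to strings for you.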
Thanks
- Author: Walter Bender
- Comment: Please add names as appropriate

import random
from random import choice

# NOTE: the key of the table's first entry (whose value ends "...the ChiMei
# team..") was lost when this page was extracted; it is omitted here.
table = {
    'Firmware': 'Ron Minnich, Richard Smith, Mitch Bradley, Tom Sylla, Lilian Walter, Bruce Wang..',
    'Kernel & Drivers': 'Jaya Kumar, Jon Corbet, Reynaldo Verdejo, Pierre Ossman, Dave Woodhouse, Matthew Garrett, Chris Ball, Andy Tanenbaum, Linus Torvalds, Dave Jones, Andres Salomon, Marcelo Tosatti..',
    'Graphics systems': 'Jordan Crouse, Daniel Stone, Zephaniah Hull, Bernardo Innocenti, Behdad Esfahbod, Jim Gettys, Adam Jackson, Behdad Esfahbod..',
    'Programming': 'Guido Van Rossum, Johan Dahlin, Brian Silverman, Alan Kay, Kim Rose, Bert Freudenberg, Yoshiki Ohshima, Takashi Yamamiya, Scott Wallace, Ted Kaehler, Stephane Ducasse, Hilaire Fernandes..',
    'Sugar': 'Marco Pesenti Gritti, Dan Williams, Chris Blizzard, John Palmieri, Lisa Strausfeld, Christian Marc Schmidt, Takaaki Okada, Eben Eliason, Walter Bender, Tomeu Vizoso, Simon Schampijer, Reinier Heeres, Ben Saller, Miguel Alvarez..',
    'Activities': 'Erik Blankinship, Bakhtiar Mikhak, Manusheel Gupta, J.M. Maurer (uwog) and the Abiword team, the Mozilla team, Jean Piche, Barry Vercoe, Richard Boulanger, Greg Thompson, Arjun Sarwal, Cody Lodrige, Shannon Sullivan, Idit Harel, and the MaMaMedia team, John Huang, Bruno Coudoin, Eduardo Silva, Håkon Wium Lie, Don Hopkins, Muriel de Souza Godoi, Benjamin M. Schwartz..',
    'Network': 'Michael Bletsas, James Cameron, Javier Cardona, Ronak Chokshi, Polychronis Ypodimatopoulos, Simon McVittie, Dafydd Harries, Sjoerd Simons, Morgan Collett, Guillaume Desmottes, Robert McQueen..',
    'Security': 'Ivan Krstic, Michael Stone, C. Scott Ananian, Noah Kantrowitz, Herbert Poetzl, Marcus Leech..',
    'Content': 'SJ Klein, Mako Hill, Xavier Alvarez, Alfonso de la Guarda, Sayamindu Dasgupta, Mallory Chua, Lauren Klein, Zdenek Broz, Felicity Tepper, Andy Sisson, Christine Madsen, Matthew Steven Carlos, Justin Thorp, Ian Bicking, Christopher Fabian, Wayne Mackintosh, the OurStories team, Will Wright, Chuck Normann..',
    'Testing': 'Kim Quirk, Alex Latham, Giannis Galanis, Ricardo Carrano, Zach Cerza, John Fuhrer..',
    'Country Support': 'Carla Gomez Monroy, David Cavallo, Matt Keller, Khaled Hassounah, Antonio Battro, Audrey Choi, Habib Kahn, Arnan (Roger) Sipitakiat',
    'Administrative Support': 'Nia Lewis, Felice Gardner, Lindsay Petrillose, Jill Clarke, Julia Reynolds, Tracy Price, David Robertson, Danny Clark',
    'Finance & Legal': 'Eben Moglen, Bruce Parker, William Kolb, John Sare, Sandra Lee, Richard Bernstein, Jaclyn Tsai, Jaime Cheng, Robert Fadel, Charles Kane (Grasshopper), Kathy Paur, Andriani Ferti',
    'PR and Media': 'Larry Weber, Jackie Lustig, Jodi Petrie, George Snell, Kyle Austin, Hilary Meserole, Erick A. Betancourt, Michael Borosky, Sylvain Lefebvre, Martin Le Sauteur',
    'Directors & Advisors': 'Howard Anderson, Rebecca Allen, Ayo Kusamotu, Jose Maria Aznar, V. Michael Bove, Jr., Rodrigo Mesquita, Seymour Papert, Ted Selker, Ethan Beard (Google); John Roese (Nortel); Dandy Hsu ); Joe Jacobson (MIT Media Lab); Steve Kaufman (Riverside); and Tom Meredith (MFI)',
    'Pippy': 'Chris Ball'
}

print "OLPC would like to take this opportunity to acknowledge the community of people and projects that have made the XO laptop possible."
subsystem = random.choice(table.keys())
print subsystem, '\t', table[subsystem]

Graphics

Jump
- Author: C. Scott Ananian
- XOversion: Bundled in 656

# both of these functions should be in the 'basic' package or some such
def clear_scr():
    print '\x1B[H\x1B[J' # clear screen, the hard way.

def wait():
    import time
    time.sleep(0.1)

# jumping man!
# was having to escape the backslash which was rather unfortunate,
# now using python's r" strings which were meant for regex's
# i didn't have to do that in C64 BASIC
for i in xrange(50):
    clear_scr()
    print r"\o/"
    print r"_|_"
    print r" "
    wait()
    clear_scr()
    print r"_o_"
    print r" | "
    print r"/ \ "
    wait()
    clear_scr()
    print r" o "
    print r"/|\ "
    print r"| |"
    wait()
    clear_scr()
    print r"_o_"
    print r" | "
    print r"/ \ "
    wait()

Mandelbrot Set
- Author: Alan Davies

# Text-based Mandelbrot set generator
# Play with the values of 'centre' and 'realsize'
# to explore the set.
centre, realsize, maxiter = -.7+0j, 2.8, 50
width, height, aspect = 60, 30, 1.9
charmap = "abcdefghijklmnopqrstuvwxyz"
for y in range(height):
    output = ""
    for x in range(width):
        real = (float(x)/width-.5)*realsize
        imag = (float(y)/height-.5)*aspect*realsize*height/width
        z = c = complex(real, imag) + centre
        iterations = 0
        while abs(z) < 2 and iterations < maxiter:
            z = z**2 + c
            iterations += 1
        if iterations == maxiter:
            output += " "
        else:
            output += charmap[iterations%len(charmap)]
    print output

Games

Guess a number
- Author: Pilar Saenz
- XOversion: Bundled in 656

import random
from random import randrange

R = randrange(1,100)
print "Guess a number between 1 and 100!!!"
N = int(raw_input("Enter a number: "))
i=1
while N!=R:
    if N>R :
        print "Too big... try again"
    else :
        print "Too small.. try again"
    N = input("Enter a number: ")
    i+=1
print "You got it in ", i, "tries"

Robots
- Author: Alan Davies

This is a playable implementation of Robots (also known as Daleks). I tried to keep the code clear and well commented, even at the expense of space. I also made sure that the lines don't wrap in Pippy, as that looks quite ugly. I'm not sure if this is considered too large for the samples to be included with Pippy - a simpler implementation could be trimmed down considerably. I figured it might be nice to have at least one complete implementation for kids and adults to play with.
from random import randint
import curses

stdscr = curses.initscr()
curses.noecho()
xmax, ymax, alive = 60, 10, True
commands = {"q":(-1,-1), "w":(0,-1), "e":(1,-1),
            "a":(-1,0),  "s":(0,0),  "d":(1,0),
            "z":(-1,1),  "x":(0,1),  "c":(1,1),
            " ":(0,0),   "t":(0,0)}

def message(text, yoffset=0, wait=True):
    stdscr.addstr(ymax/2+yoffset, xmax/2-len(text)/2, text)
    stdscr.refresh()
    if wait:
        stdscr.getch()

message("Welcome to Robots!", -3, False)
message(" QWE Screwdriver: S ", -1, False)
message("Movement: A D Teleport: T ", 0, False)
message(" ZXC Do nothing: Spacebar", 1, False)
message("Press any key...", 3)

while True:
    level = 1
    alive = True
    while alive:
        # Initialise powers, hero position, and enemy lists
        teleport = screwdriver = True
        hero = (xmax/2, ymax/2)
        scrap = [(randint(0, xmax), randint(0, ymax)) for dummy in range(level/3+3)]
        robots = [(randint(0, xmax), randint(0, ymax)) for dummy in range(level*4-3)]
        scrap = [s for s in scrap if s != hero]
        robots = [r for r in robots if r != hero]
        while True:
            # move crashed robots to scrap list
            scrap += [r for r in robots if robots.count(r) >= 2]
            robots = [r for r in robots if r not in scrap]
            # draw the screen
            stdscr.clear()
            stdscr.addstr(hero[1], hero[0], "@")
            for robot in robots:
                stdscr.addstr(robot[1], robot[0], "$")
            for scr in scrap:
                stdscr.addstr(scr[1], scr[0], "#")
            if screwdriver:
                stdscr.addstr(ymax-1, xmax+1, "S")
            stdscr.addstr(ymax, xmax+1, "T" if teleport else "")
            stdscr.refresh()
            # test for win or loss
            if len(robots) == 0:
                break
            elif hero in robots or hero in scrap:
                stdscr.addstr(hero[1], hero[0], "X")
                message("You lost! Press any key...")
                alive = False
                break
            # get a valid keypress
            key = ""
            while key not in commands:
                key = chr(stdscr.getch()).lower()
            # teleport - move to a random location
            if teleport and key == "t":
                teleport = False
                hero = (randint(0, xmax), randint(0, ymax))
            # sonic screwdriver - scraps nearby robots
            if screwdriver and key == "s":
                screwdriver = False
                scrap += [robot for robot in robots
                          if abs(robot[0] - hero[0]) <= 1 and abs(robot[1] - hero[1]) <= 1]
                robots = [r for r in robots if r not in scrap]
            # update hero and robot positions
            hero = (hero[0] + commands[key][0], hero[1] + commands[key][1])
            def sign(x): return cmp(x, 0)
            def follow(fr, to):
                return (fr[0] + sign(to[0]-fr[0]), fr[1] + sign(to[1]-fr[1]))
            robots = [follow(robot, hero) for robot in robots]
        # move to next level
        if alive:
            level += 1
            message(" Level %d! Press any key... " % level)

Beginning Programming

Operations on Numbers
- Author: Tom Mitchell
- About: addition, subtraction, multiply, divide, power (exponents) and modulo arithmetic
- Shows: basic arithmetic with integer and floating point numbers.

# Understanding how a computer works with numbers is interesting.
# Python (Pippy) supports the basic set of operators on integers, floating point, decimal numbers and more.
# This exercise will look at integers and floating point numbers.
# Look at these samples and when you think you have the answer click run.
# Change the numbers and experiment...

# Add
print 6 + 1

# Subtract
print 6 - 1

# Multiply
print 6 * 2

# Division - hint, dividing integers by integers results in integers.
print 3/6
print 3.0/6.0
print 1/3       # integer
print 1.0/3.0   # floating point

# Modulo (remainder)..
print 1%3
print 3%6
print 1.1234%3.0
print 3%2
print 3.0%2

# Power or exponents
print 2**2
print 3**6
print 3.3**22

Parentheses and Operators
- Author: Tom Mitchell
- About: Parentheses with addition, subtraction, multiply, divide, power (exponents) and modulo arithmetic
- Shows: basic arithmetic precedence rules and the use of parentheses.

# Understanding the order in which a computer works with numbers is interesting.
# Python (Pippy) supports the basic set of operators on integers, floating point, decimal numbers and more.
# This exercise will look at Python's rules for basic integer and floating point operations and the use of parentheses.
# Note the assignment to a variable and then the use of print to display a result.
# Look at these samples and when you think you have the answer click run.
# Change the numbers and experiment.
# Regroup numbers and operators with parentheses to clarify what you want Python to do.

# Add and Multiply some numbers but what is done first:
# is it multiply or add;
# is the order left to right or is it right to left.
answer = 6 + 2 * 3
print answer
answer = (6 + 2) * 3
print answer
answer = 6 + (2 * 3)
print answer

# now exponents..
answer = 3.3**(20+2)+10
print answer
answer = 3.3**(20)+ -(2+10)
print answer

# To convert from degrees Fahrenheit to Centigrade we can use an equation.
F = 212
# but which equation is correct: C1, C2, C3 or C4?
C1 = (F - 32.0)/ 9.0 * 5
C2 = F - 32.0/ 9.0 *5
C3 = F - (32.0/ 9.0) *5
C4 = F - 32.0/ (9.0 *5)
print C1
print C2
print C3   # Why are C2 and C3 the same.
print C4
# hint Water boils at 212 F and 100 C; Water freezes at 0 C and 32 F.
F = 32.0
C = (F - 32.0)/ 9.0 * 5
print C
print "%f F degrees converts to %3.10f C" %( F ,C)
# Question what is interesting about -40 degrees?
""" In Algebra one might write: 3 + 3 + 5 + 55 x = ---------------------- 22 """ # The next line is incorrect because the division only involves 55 x = 3.0 + 3 + 5 + 55 / 22 # This line is correct xx = (3.0 + 3 + 5 + 55) / 22 print "This is incorrect %i" % x print "This is correct %i" % xx # Question what is interesting about -40 degrees? # Hint -- use parentheses to clarify what you want Python to do. # Doing so you can make it clear to others what you intend. Take Pippy for a Loop - Author: Tom Mitchell - About: Looping - Shows: While and for loops introduces a generator. #! /usr/bin/python """ The Gauss Schoolroom Anecdote: A teacher once was inclined to assign long tedious problems to students. One problem was to add all the numbers from 1 to 100 or more. A student "Gauss" turned over his slate moments after starting the problem to signal that he was finished. His slate had a single number on the back ... 5050. How did he solve this so quickly? Who is "Gauss"? """ # lets explore loops to check the answer. # first a "while loop". i = 1 a = 0 while i <= 100: a += i i += 1 print a # now a "for loop" using the generator range() a = 0 for i in range(100+1): # the +1 is because range() returns a list N long a += i # that begins with 0. More on generators like range(N) later. print a # Will this get the correct answer? If so why? last = 100.0 print last * (1 + last ) / 2 """ Additional reading """ Loops do not need to be restricted to numbers. Try this little loop. for tree in "oak", "maple", "plum": print tree Printing various Number Types in Pippy - Author: Tom Mitchell - About: Formatting when printing numbers - Shows: type(), if, elif, else and the use of % in formatting of numbers. #! 
/usr/bin/python import decimal a = 1.5 b = 55 c = -55.12345 d = True e = 5555555555555555555555555555L f = 4444444444444444444444444444.12345678 g = 55.77e22 h = 'string' i = decimal.Decimal("3333333333333333333333333333.12345678") j = decimal.Decimal("2222222222222222222222222222.12345678") k = i * j k = k * k * k # should now be very big l = 0567 # hint this is octal i.e. base 8 m = 0x456ABCD # base 16 i.e hex for thing in [ a, b, c, d, e, f, g, h, i, j, k, l, m ]: print "========================" print "printing it ",thing # print type(thing) if type(thing) is float: print "found a float", thing print "float can be formatted: %f " % thing print "float can be formatted: %4.9f " % thing print "float can be formatted: %1.5f " % thing print "float can be formatted: %1.3e " % thing print "float can be formatted: %1.1E " % thing elif type(thing) is int: print "found a int ", thing print "int can be formatted: %i # right justified" % thing print "int can be formatted: %+25i # right justified with sign" % thing print "int can be formatted: %-25i # left justified" % thing print "int can be formatted: %+25i # right justified with sign" % thing print "int can be formatted as hex: %-+20x # Hex left justified with sign" % thing print "int can be formatted as hex: 0x%-20x # Hex left justified make it look like Hex " % thing print "int can be formatted as hex: 0x%+20X # Hex signed right justified bad try at looking like Hex" % thing print "int can be formatted as hex: %#-20X # Hex left justified make it look like Hex best way" % thing print "int can be formatted as hex: %#20X # Hex right justified make it look like Hex best way" % thing print "int can be formatted as hex: %#+20X # Hex signed right justified make it look like Hex best way" % thing print "int can be formatted as octal: %o # Octal" % thing print "int can be formatted as octal: 0%o # Octal make it look octal " % thing print "int can be formatted as octal: 0%-15o # Octal left justified, make it look octal 
incorrectly " % thing print "int can be formatted as octal: 0%15o # Octal make it look octal incorrectly and why" % thing print "int can be formatted as octal: %#o # Octal make it look octal correctly" % thing print "int can be formatted as octal: %#15o # Octal make it look octal correctly" % thing print "int can be formatted as octal: %#-15o # Octal left justified correctly" % thing elif type(thing) is long: print "found a long ", thing print "long can be formatted: %i " % thing # right justified is the default print "long can be formatted: %+i signed " % thing # right justified with sign print "long can be formatted as hex: %x " % thing # Hex print "long can be formatted as hex: 0x%x " % thing # Hex lower case print "long can be formatted as octal: %o " % thing # Octal print "long can be formatted as octal: %#o " % thing # Octal make look like octal elif type(thing) is str: print "found a str ", thing elif type(thing) is decimal.Decimal: print "found a decimal.Decimal ", thing print "decimal.Decimal can be formatted: %1.5f " % thing print "decimal.Decimal can be formatted: %1.3e " % thing print "decimal.Decimal can be formatted: %1.3E " % thing print "decimal.Decimal can be formatted: %55.55e " % thing else: print "this type is not in my list" # this will trip on the boolian "True" for d. """ Make changes to the above and see what changes. Shorten the list to focus on one type or another. Formating of numbers builds on the C library so the % formats are best described by looking at the printf man page. Use the web and search for: PRINTF. .... and lots more ... """ Comparing objects in Pippy - Author: Tom Mitchell - About: Comparisons and tests >, <, <=, >=, ==, not - Shows: Shows some very introductory ways to build truth tables. This is a bit long but the goal here is to show how to build some skills with tests and truth tables. #! /usr/bin/python # Comparison tests are the key to program flow and decisions. # All objects can be compared.... 
""" from the Python Library Reference. < strictly less than <= less than or equal > strictly greater than >= greater than or equal == equal != not equal (prefered over <> ) <> not equal (not prefered) is object identity is not negated object identity Boolean Operations x or y if x is false, then y, else x (1) x and y if x is false, then x, else y (1) not x if x is false, then True, else False (2) 1) These only evaluate their second argument if needed for their outcome. (2) "not" has a lower priority than non-Boolean operators, so not a == b is interpreted as not (a == b), and a == not b is a syntax error. """ a = 1 b = 2 # could also do "a, b = 1, 2" # - - - - - - - - - - - - - - - - - - - - print "\n===============================" print "Control program flow with an if" if a < b : print "Less Than" # - - - - - - - - - - - - - - - - - - - - print "\n===============================" print "Looping while a test stays true" a, b = 1, 4 while a < b: print "a is still less than b" b -= 1 # same as "b = b -1" # - - - - - - - - - - - - - - - - - - - - print "\nBreak out of a loop when a test becomes true" a, b = 1, 4 while a < b: print "a is still less than b" b += 1 c = b # One common error in JavaScript and C is confusing # assignment '=' and a test for equality '=='. # Uncoment this block and see what happens. # if c = 10: # print "Exit on assignment not equality" # break if b == 10: print "Exit the loop \'b\' is now ten." break # - - - - - - - - - - - - - - - - - - - - print "\nBreak out of a loop when a test becomes true" a, b = 1, 4.0000000001 while a < b: print "a is still less than b" b += 1 c = b # testing floating point numbers for exact equality is problematic. if b == 10: print "Exit the loop \'b\' is now ten." break if b >= 10: print "Exit the loop \'b\' is now ten or more." break # - - - - - - - - - - - - - - - - - - - - T, F = True, False # what is the truth table for 'or', 'and', 'not' ? 
print "\n==============================="
print "Exploring Boolean's 'or'."
if F or F: print "F or F tests true"
if T or F: print "T or F tests true"
if F or T: print "F or T tests true"
if T or T: print "T or T tests true"
# - - - - - - - - - - - - - - - - - - - -
print "\n==============================="
print "Exploring Boolean's 'and'."
if F and F: print "F and F tests true"
if T and T: print "T and T tests true"
if T and F: print "T and F tests true"
if F and T: print "F and T tests true"
# - - - - - - - - - - - - - - - - - - - -
print "\n==============================="
print "Exploring Boolean's 'not'."
if not(F or F): print "not(F or F) tests true"
if not(T or T): print "not(T or T) tests true"
if not(T or F): print "not(T or F) tests true"
if not F or F: print "not F or F tests true"
if not T or T: print "not T or T tests true"
if not T or F: print "not T or F tests true"
# - - - - - - - - - - - - - - - - - - - -
print "\n==============================="
print "Exploring comparisons of strings."
lc="lower case string"
UC="Upper Case string"
if lc == UC:
    print "String test:\n \"%s\" is the same as \"%s\" ." % (lc, UC)
else:
    print "String test:\n \"%s\" is NOT the same as \"%s\" " % (lc, UC)

Lists, Tuples, Sets and more
- Author: Tom Mitchell
- About: Lists, tuples, sets and some string tinkering
- Shows: Introduction to ways to define and convert from one to the other.

#!/usr/bin/python
"""
Lists, Tuples, Sets and Dictionaries:
Data structures are fundamental to programming. In Python most data
structures are built with tuples, sets, lists and dictionaries.
Data structures are how data is organized and passed to and from functions.
"""
# lists are bounded by "[]", a sequence of objects that can be modified.
print "\nExploring Lists:" Bob, Carol, Ted = "Bob Johnson", "Carol Johnson", "Ted Turner" somenames = [ Bob, Carol, Ted ] sometrees = ["teak", "oak", "pine"] sometrees = sometrees + ["maple", "apple"] print sometrees sometrees[2] = "willow" # legal for lists print sometrees # tuples are bounded by "()" , cannot modify a tuple's contents. print "\nExploring tuples:" nums = (1, 2, 3, 5.01) trees = ("teak", "oak", "pine") bushes = "blueberry", "Sage" # when expected a tuple is assumed. tuptup = (nums, trees, bushes) # This is illegal for a tuple. Uncomment it and try. #nums[2] = 55 print nums, trees, bushes # print EXPECTS a tuple print (nums, trees, bushes, "Tuple") # print EXPECTS a tuple print tuptup # sets are introduced by the key word "set", and no duplicates no special order print "\nExploring sets:" aa = set("Happy is Pippy") print aa # how many p's and y's will be printed? # dictionaries are bounded with {} and use : with , to associate pairs. print "\nExploring dictionaries:" telescope = { 'mirror' : 'reflection', # three ways to indicate strings ' "lens" : "refraction", # " """Hubble""" : """reflector""", # and the triple """ "Cassegrain" : "reflector", "Great Lick" : "refractor" } print telescope["Hubble"] # strings are are bounded with ' or " or """. It makes some sense # to include them here. print "\nExploring strings:" # The following string is interesting in English, but why? s1 = "The quick brown FOX jumped over the lazy dogs" print s1 # some things are easy to do with strings. print s1[4:9] print s1.upper() print s1.swapcase() print s1.title() # sorting and moving the characters about is not one of them. # sorting is easy to do in a list. sL = s1.lower() l1= list(sL) l1.sort() print l1 print "" s2= "".join(l1) # "" is the (empty) string seperating the objects providing the method sX= "X".join(l1) # "X" is the string seperating the objects providing the method print s2 print sX # sets can only have one of each. 
t1=set(l1) print t1 L2= list(t1) L2.sort() S1="".join(L2) print "It might be clear what is fun about the phrase now.\n" + S1 Functions lots of functions - Author: Tom Mitchell - About: functions - Shows: Shows lots function permutations #! /usr/bin/python """ # Functions are a critical part of programming. # This File explores some of the ways to pass # information into a function and back out. # # The design of a program and function often centers on how data # is passed into and out of the function. This is commonly refered to # as the "interface". Clearly documenting the interface is important # and can be as simple as well chosen function and variable names, or coments. # An interface has three key parts: input, output and error. # # A function is introduced by the opertor "def" # # def FunctionName(tupple of inpupt ): # indented code # more indented code # Watch out for tabs, # a tab is one character but may look like up to 8 spaces. # more indented code.... Smart editors may confuse the handling of tabs. # Python does expand tabs and assumes up to 8 char. Good style is to # use one or the other. i.e. Try to not mix spaces and tabs in python source. # See: # # A function can take advantage of other functions. # An ideal function returns a result and does not modify data # outside of it's own scope. """ # - - - - - - - - - - - - - - - - - - - - - - - - - - - - # The cannocal Hello World can be writen in Python thusly print "\n================\nExample function: \"Hello\"" def Hello(): print "Hello World\n" # now call it Hello() # - - - - - - - - - - - - - - - - - - - - - - - - - - - - # Simple data can be passed to a function. print "\n================\nExample function: \"Twice\"" def Twice(x): """Twice uses + to operate on a single input object. This is a good place to introduce documentation... The first line should be short and precise. The second doc line should be blank. 
In this style of documentation the information stays with the object even if it is complied to byte code. """ print x + x Twice(5) Twice("Cat") print (Twice.__doc__) # # - - - - - - - - - - - - - - - - - - - - - - - - - - - - # Multiple data objects can be passed to a function by position. print "\n================\nExample function: \"Volume\"" def Volume(L, W, H): """Length times Width times Height is Volume of a rectangular box""" V = L * W * H return(V) L1, W1, H1 = 5, 6, 2 V1= Volume(L1, W1, H1) print("The volume of a box %i x %i x %i is %i" % (L1, W1, H1, V1)) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - # Multiple data objects can be passed to a function by name. print "\n================\nExample function: \"PetFood\"" def PetFood(cat=0, dog=0, bird=0): # a house cat eats 25 gr of cat food per day. catfood = 25 * cat # a dog eats 100 gr of dog food per day. dogfood = 100 * dog # a bird eats 15 gr of bird food per day. birdfood = 15 * bird print "\n\tHow much Cat, Dog and Bird food?" print "\tWith %i cats we need %i gr of catfood." % (cat, catfood) print "\tWith %i dogs we need %i gr of dogfood." % (dog, dogfood) print "\tWith %i birds we need %i gr of birdfood." % (bird, birdfood) print "\tAll together the full bag should weigh about %i gr." % (catfood+dogfood+birdfood) return (cat, catfood, dog, dogfood, bird, birdfood) PetFood(cat=5, bird=10) # note that 'dog' has a default value of 0. print PetFood(dog=12, cat=5, bird=10) execv() of another python file in Pippy - Author: Tom Mitchell - About: execv() - Shows: Shows how to run a file that might be edited in a terminal Activity" This is a bit advanced but someone asked in the discussions. One value is that is much the way Python programmers work. 
First in a terminal Activity use vim to generate a short python program like this: #!/usr/bin/python print "this file is /tmp/t1.py" print "Fun with Python and Pippy" Now make it executable: chmod +x /tmp/t1.py Next start Pippy and enter this code: import os f2="/tmp/t1.py" print "================\nGetting ready to leave " os.execv(f2, ("/tmp/t1.py", "Additional args here")) print "N.B. This line is never reached" Slice notation - Author: Stuart Morrow - Shows: The using the slice notation on subscriptable data in Python Some of the more advanced examples on this page just dive in to list/string slicing without explaining. print "A list is an order collection of things" things = ['bike', 'tree', 'the Sun'] print things print print """You can access individual thing using the slice and a number The first thing is numbered 0, not 1""" thing = things[0] print "The third thing is numbered 2" thirdThing = things[2] print thing; print thirdThing print print """You can access more than one thing at a time if you use two numbers and a colon (:) things[start : howMany] finds things[start], then shows howMany things""" twoThings = things[0:2] threeThings = things[0:3] print "twothings =", twoThings, "and threeThings =", threeThings print print """If you specify a third number called the step or increment, you can take howMany things at regular intervals, instead of strictly one next to the other""" things = things * 3 print "things[0:7:1] is:" print things[0:7:1] print "things[0:7:2] is:" print things[0:7:2] print print """A list of numbers will be clearer now that we are learning increments""" numbers = range(5) # This is the same as numbers=[0,1,2,3,4] print numbers[1] # 'range' is English for ... 
print numbers [1:5:1] print numbers [1:5:2] print numbers [5:1:-1] print print """Words and sentences are subscriptable according to the same rules as lists and tuples""" print "'I am a sentence'[0] is", "I am a sentence"[0] print "'I am a sentence'[7:15] is", 'I am a sentence'[7:15] print """Note that the spaces between words are significant. "We are some words" is the same as ['W','e',' ','a','r','e',' ',...] ^ ^""" print list('We are some words')[0:10] print print """Conveniently, -1 is the last element of a subscriptable and -2 is the second-last, and so on""" print 'I am a sentence'[-1] print 'I am a sentence'[-8:-1] print print """The final section of this list-slicing is about omitting the numbers If you use [:x] it is the same as [0:x]""" print things[:3]; print things[0:3] print """If you do not specify the second index it is the same as saying "until the end of the list" """ print numbers[3:]; print numbers[3: len(numbers) ] print """Therefore, numbers[::] == numbers[:] == numbers""" print numbers; print numbers[:] print "And," string = things[0] print string, string[::-1]; print print "Finally," print "See that string[:n] + string[n:] == string" print string; print string[:1] + string[1:] print """Further Note that things[::-1] is not the same as things.reverse()""" print things.reverse() # Nothing! print things # ah, There it is print """You might liken this to the difference between list.sort() and sorted(list) Slices can be used anywhere a list can, so you can use them in tuple assignment for example""" (a, b) = "abcdefghijklmnop"[:2] print (a, b) More Examples C. Scott Ananian created a small library of programming examples, based on the BASIC examples in the Commodore 64 manual. It can be found at;a=tree See also - Canvas tutorial is a similar environment to learn JavaScript and <canvas> programming, it runs in Browse. 
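As a recap of the slicing lesson above (a minimal sketch added here, not part of the original page), the rules boil down to a few identities that can be checked directly:

```python
# A compact recap of the slice rules demonstrated above:
# seq[start:stop:step], negative indices, omitted bounds, and the
# reversal idiom.

numbers = list(range(5))            # [0, 1, 2, 3, 4]
string = "bike"

# Omitting a bound defaults to the start or end of the sequence.
assert numbers[:3] == numbers[0:3]
assert numbers[3:] == numbers[3:len(numbers)]
assert numbers[:] == numbers

# Negative indices count from the end.
assert 'I am a sentence'[-1] == 'e'

# A step of -1 walks backwards; slicing returns a new object,
# unlike list.reverse(), which reverses in place and returns None.
assert string[::-1] == 'ekib'
assert string[:2] + string[2:] == string

copy = list(numbers)
assert copy.reverse() is None       # in-place, returns None
assert copy == numbers[::-1]

print("all slice identities hold")
```

This mirrors the distinction the lesson draws between `things[::-1]` (a new reversed copy) and `things.reverse()` (in-place mutation).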
http://wiki.laptop.org/index.php?title=Pippy&oldid=272103
On Thursday, January 1, 2015 12:44:13 PM UTC-6, lucas mvm wrote: > Hey Guys! I need your help! So i want to make a program > like the app called "Feed Me" There are 2 Pictures (I got > that) than you can click the food picture and drag it on > the boys mouth and than the sad boys face disappears and a > happy face shows up. This interactivity will require mastering a few basic events. The Tkinter canvas offers a few extra methods of binding events besides the normal "widget level" event bindings. Since you are beginning you should focus on the ubiquitous bindings at the "widget level" -- master those before moving on to "tag level bindings". First step: FORGET ABOUT PICTURES, because they will only get in your way at this time. First step is to create three events that map mouse clicks, mouse drags, and mouse releases to print messages to stdout. THEN, and only then you can move to the next step! DIVIDE AND CONQUER... in teeny tiny baby steps! But first allow me to critique your code. > Heres the code ive wrtiten > from tkinter import * Oh no... I would not suggest importing everything (even though the library can be) since doing so pollutes your namespace with many names you'll never use. Instead do: "import tkinter as tk" and prefix every class with "tk.". For the constants you do either "from tkinter.constants import *" or individually import only the names you will use (the second option is obviously better) But for now, let's just get this code working, shall we? > f = Tk() Why would you use "f" as a variable for a tkinter window? A better symbol would be "root" or "topwin". Make sure your symbols are describing succinctly what they reference (THIS IS VERY IMPORTANT!) Even though Tkinter allows you to use a "frame" as a "window" (frame.mainloop()), it only works because there is some "magic" going on behind the scenes. You must understand that "frames != windows". 
> f.title('Give the boy the apple.')
> f.geometry('500x500')
> c = Canvas(master=f,width=500,height=500,bg='white')
> c.place(x=0,y=0)

The "place" geometry manager should only be used in *extremely*
specific circumstances, and since your code does not have such needs
I suggest you replace "c.place" with "c.pack(fill=BOTH, expand=YES)".

There are three geometry managers in Tkinter:

 1. Pack
 2. Grid
 3. Place

When you're trying to decide which manager to use, start with the
"pack manager" first; if it cannot solve your needs then contemplate
the "grid manager", and only after deciding that neither the "pack
manager" nor the "grid manager" will work can you use the "place
manager".

> p = PhotoImage(file='traurigsmiley.png') #sad smiley
> i = c.create_image(250,320,image=p) #position
> p2 = PhotoImage(file='essen.png') #food
> i2 = c.create_image(70,100,image=p2)
> f.geometry('500x500')
> f.mainloop()
>
> Please Help!

First, rewrite your code in the manner I suggest, then expand it with
the following.

1. Bind three events to your canvas object:

    c.bind("<Button>", evtButtonDown)
    c.bind(...) # < -- homework!
    c.bind(...) # < -- homework!

2. Create three functions that act as callbacks for the events:

    def evtButtonDown(evt):
        # print the number of the button pressed here!

    def evtButtonUp(evt):
        # print the number of the button pressed here!

    def evtMouseMove(evt):
        # print the x,y location of the cursor here!

============================================================
 RESOURCES:
============================================================
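To make the homework concrete, here is a minimal, hypothetical sketch of the three callbacks (the handler names and message strings are mine, not from the thread). It relies only on tkinter's standard `Event` attributes (`num`, `x`, `y`), and exercises the handlers with stand-in event objects so it can run without a display:

```python
from types import SimpleNamespace

# Sketch of the three event handlers suggested in the post. In a real
# program they would be wired up on the canvas with:
#   c.bind("<Button>", evt_button_down)
#   c.bind("<ButtonRelease>", evt_button_up)
#   c.bind("<Motion>", evt_mouse_move)

def evt_button_down(evt):
    # evt.num is the mouse button number in tkinter's Event
    return "button %d down" % evt.num

def evt_button_up(evt):
    return "button %d up" % evt.num

def evt_mouse_move(evt):
    # evt.x / evt.y are the cursor coordinates within the widget
    return "cursor at (%d, %d)" % (evt.x, evt.y)

if __name__ == "__main__":
    # A mock event object standing in for a real tkinter Event.
    click = SimpleNamespace(num=1, x=70, y=100)
    print(evt_button_down(click))
    print(evt_mouse_move(click))
```

Once these print as expected, the same handlers can update the canvas items (move the food image, swap the sad face for the happy one) instead of just returning messages.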
https://mail.python.org/pipermail/python-list/2015-January/683391.html
Access assets packed via Aspen. See the Aspen documentation for more details.

Initial release.

Add this to your package's pubspec.yaml file:

dependencies:
  aspen_assets: "^0.2.0"

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:aspen_assets/aspen_assets.dart';

We analyzed this package on Jun 12, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using:

- Detected platforms: web, other
- Primary library: package:aspen_assets/aspen_assets.dart with components. 9 hints.

Suggestions:

- Run dartfmt to format lib/aspen_assets.dart.
- Maintain an example. Create a short demo in the example/ directory to show how to use this package. Common file name patterns include: main.dart, example.dart, or you could also use aspen_assets.dart.
https://pub.dartlang.org/packages/aspen_assets
Let's assume we have an application and we would like to print logs from its execution (like arguments and return codes of functions). One way to do it is to hard-code prints into the source and recompile it (in Polish we call this "dupa-debugging", which evaluates this approach pretty nicely). In Java there is an elegant way to do it – use Aspect Oriented Programming to inject tracing without messing with the existing code (or JVMTI if you would like to do it at runtime). But what if our application is written in a native language like C that does not support the AOP paradigm, or maybe we have no source code, just a plain binary?

There are several possible solutions, like using LD_PRELOAD on Linux or DLL injection on Windows to dynamically replace existing functions. Another solution is to use a debugger that will automatically print useful information for us. For example, let's trace all executions of malloc and free (this can be useful e.g. for memory leak debugging). I will use Windbg as the example, but it can be done in gdb as well.

To trace the execution of a function, we will set a breakpoint with an action that will be executed on each breakpoint hit. The simplest version of the command is:

bp MSVCR80D!free ".printf \"free\\n\";g"

The command tells the debugger to print "free" at the breakpoint and let the code execute further (the g command). It is not really useful because it does not print the address of the memory that will be freed. So let's fix it:

bp MSVCR80D!free ".printf \"%08x free\\n\", dwo(esp+4);g"

The expression is a little more complicated, so let's decompose it:

.printf \"%08x free\\n\" – prints "X free" where X will be the first argument passed to the function.
dwo(esp+4) – computes the first argument passed to the function. dwo is an expression which returns a double word from the specified address. At the breakpoint we are at the point after the return address was pushed on the stack but before the full stack frame was created.
It means that the first argument lies at memory address esp+4 (in the case of the x86 cdecl calling convention – which is our case).

Now we will print the addresses of allocated memory returned by malloc. We will need to set a breakpoint at the end of the function, when the code jumps to the return address; there we will print the value of eax, which is used to pass the return value in the x86 cdecl convention. The return address is on the stack and can be retrieved in Windbg by reading the pseudo-register $ra:

bp MSVCR80D!malloc "bp /1 @$ra \".printf \\\"%08x malloc\\\\n\\\",eax;g\";g"

In this command we are setting two breakpoints. The first will be executed every time malloc is called; its action is to set another, one-time breakpoint (the /1 switch) at the return address on the stack. At the second breakpoint we print the returned value – which is the address of the allocated memory. Why a second breakpoint? Because at the first breakpoint the value of the pseudo-register $ra is updated by Windbg to the correct value, which points at the return address of the current function. Before malloc executes we don't know what the return address will be.

0:000:x86> bp MSVCR80D!free ".printf \"%08x free\\n\", dwo(esp+4);g"
0:000:x86> bp MSVCR80D!malloc "bp /1 @$ra \".printf \\\"%08x malloc\\\\n\\\",eax;g\";g"
0:000:x86> g
...
008d3f80 malloc
008d1190 malloc
008d11d8 malloc
008d1220 malloc
008d1268 malloc
008d3f80 free
008d1190 free
008d11d8 free
008d1220 free
ntdll!NtTerminateProcess+0xa:
00000000`777ef97a c3 ret

As you can see there were 5 calls to malloc and only 4 calls to free, so the memory at address 008d1268 leaked.
To trace the source of the leak you can modify the Windbg commands to additionally print a stack trace during malloc:

bp MSVCR80D!malloc "bp /1 @$ra \".printf \\\"%08x malloc\\\\n\\\",eax;k;g\";g"

After analyzing the Windbg log I can see that the leak occurred in function leaky!main+0x40:

0018ff30 00411a16 leaky!main+0x40 [c:\users\lastsector\documents\visual studio 2005\projects\leaky\leaky\leaky.c @ 13]
0018ff80 0041185d leaky!__tmainCRTStartup+0x1a6 [f:\sp\vctools\crt_bld\self_x86\crt\src\crtexe.c @ 597]
0018ff88 755d3677 leaky!mainCRTStartup+0xd [f:\sp\vctools\crt_bld\self_x86\crt\src\crtexe.c @ 414]
0018ff94 779b9f42 kernel32!BaseThreadInitThunk+0xe
0018ffd4 779b9f15 ntdll32!__RtlUserThreadStart+0x70
0018ffec 00000000 ntdll32!_RtlUserThreadStart+0x1b

The code for the debugged application is presented in the listing:

#include <stdlib.h>

int main(int argc, char* argv[])
{
    void* arr[5];
    int i;
    for(i = 0; i < _countof(arr); ++i)
        arr[i] = malloc(10);
    for(i = 0; i < _countof(arr)-1; ++i)
        free(arr[i]);
    return 0;
}

If you would like to do the same in x64 you just need to follow the x64 calling convention (e.g. the address returned by malloc will be in the rax register and the address passed to free will be in the rcx register).
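The malloc/free log produced by these breakpoints can also be checked mechanically instead of by eye. Below is a small Python sketch (my own addition, not part of the original post) that pairs allocations with frees from such a log and reports any address that was never freed:

```python
# Pair malloc/free addresses from a Windbg-style trace log and report
# leaks. The log format ("<addr> malloc" / "<addr> free") matches the
# breakpoint output shown above; any other line is ignored.

def find_leaks(log_text):
    live = set()
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) != 2:
            continue
        addr, op = parts
        if op == "malloc":
            live.add(addr)
        elif op == "free":
            live.discard(addr)
    return sorted(live)

if __name__ == "__main__":
    log = """008d3f80 malloc
008d1190 malloc
008d11d8 malloc
008d1220 malloc
008d1268 malloc
008d3f80 free
008d1190 free
008d11d8 free
008d1220 free"""
    print(find_leaks(log))  # -> ['008d1268']
```

Note this simple pairing assumes each address is allocated once before being freed; for a long-running program the same address can be reused after a free, which this set-based approach handles correctly as long as the log is processed in order.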
https://lastsector.wordpress.com/tag/debugging/
Today we will look into JDBC Statement vs PreparedStatement and some SQL injection examples. While working with JDBC for database connectivity, we can use Statement or PreparedStatement to execute queries. These queries can be CRUD operation queries or even DDL queries to create or drop tables.

Statement vs PreparedStatement

Before comparing Statement vs PreparedStatement, let's see why we should avoid JDBC Statement. JDBC Statement has some major issues and should be avoided in all cases; let's see this with a simple example. I have a Users table in my local MySQL database with the following data. The script below will create the table and insert the data for test use.

CREATE TABLE `Users` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL DEFAULT '',
  `email` varchar(20) NOT NULL DEFAULT '',
  `country` varchar(20) DEFAULT 'USA',
  `password` varchar(20) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8;

INSERT INTO `Users` (`id`, `name`, `email`, `country`, `password`)
VALUES
  (1, 'Pankaj', 'pankaj@apple.com', 'India', 'pankaj123'),
  (4, 'David', 'david@gmail.com', 'USA', 'david123'),
  (5, 'Raman', 'raman@google.com', 'UK', 'raman123');

A utility class for creating a JDBC Connection to our MySQL database:
DBConnection.java

package com.journaldev.jdbc.statements;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DBConnection {

    public final static String DB_DRIVER_CLASS = "com.mysql.jdbc.Driver";
    public final static String DB_URL = "jdbc:mysql://localhost:3306/UserDB";
    public final static String DB_USERNAME = "pankaj";
    public final static String DB_PASSWORD = "pankaj123";

    public static Connection getConnection() throws ClassNotFoundException, SQLException {
        Connection con = null;
        // load the Driver Class
        Class.forName(DB_DRIVER_CLASS);
        // create the connection now
        con = DriverManager.getConnection(DB_URL, DB_USERNAME, DB_PASSWORD);
        System.out.println("DB Connection created successfully");
        return con;
    }
}

Now let's say we have the following class that asks the user to enter an email id and password and, if they match, prints the user details. I am using a JDBC Statement for executing the query.

GetUserDetails.java

package com.journaldev.jdbc.statements;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Scanner;

public class GetUserDetails {

    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // read the user credentials from standard input
        Scanner input = new Scanner(System.in);
        System.out.println("Please enter email id:");
        String id = input.nextLine();
        System.out.println("User id=" + id);
        System.out.println("Please enter password to get details:");
        String pwd = input.nextLine();
        System.out.println("User password=" + pwd);

        Connection con = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            con = DBConnection.getConnection();
            stmt = con.createStatement();
            String query = "select name, country, password from Users where email = '" + id + "' and password='" + pwd + "'";
            System.out.println(query);
            rs = stmt.executeQuery(query);
            while (rs.next()) {
                System.out.println("Name=" + rs.getString("name") + ",country=" + rs.getString("country") + ",password=" + rs.getString("password"));
            }
        } finally {
            if (rs != null)
                rs.close();
            stmt.close();
            con.close();
        }
    }
}

Let's see what happens when we pass different kinds of input to the above program.
Valid User:

Please enter email id:
david@gmail.com
User id=david@gmail.com
Please enter password to get details:
david123
User password=david123
DB Connection created successfully
select name, country, password from Users where email = 'david@gmail.com' and password='david123'
Name=David,country=USA,password=david123

So our program works fine and a valid user can enter their credentials and get their details. Now let's see how a hacker can get unauthorized access to a user's data because we are using Statement for executing queries.

SQL Injection:

Please enter email id:
david@gmail.com' or '1'='1
User id=david@gmail.com' or '1'='1
Please enter password to get details:

User password=
DB Connection created successfully
select name, country, password from Users where email = 'david@gmail.com' or '1'='1' and password=''
Name=David,country=USA,password=david123

As you can see, we are able to get the user details even without having the password. The key point to note here is that the query is created through String concatenation, and if we provide crafted input we can hack the system, as we did here by passing the user id as david@gmail.com' or '1'='1. This is an example of SQL injection, where poor programming makes our application vulnerable to unauthorized database access.

One solution is to read the user input and then escape all the special characters that are used by MySQL, but that would be clumsy and error prone. That's why the JDBC API came up with the PreparedStatement interface, which extends Statement and automatically escapes the special characters before executing the query. Let's rewrite the above class using PreparedStatement and try to hack the system.
GetUserDetailsUsingPS.java

package com.journaldev.jdbc.statements;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Scanner;

public class GetUserDetailsUsingPS {

    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // read user-entered credentials
        Scanner scanner = new Scanner(System.in);
        System.out.println("Please enter email id:");
        String id = scanner.nextLine();
        System.out.println("User id=" + id);
        System.out.println("Please enter password to get details:");
        String pwd = scanner.nextLine();
        System.out.println("User password=" + pwd);
        scanner.close();

        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        String query = "select name, country, password from Users where email = ? and password = ?";
        try {
            con = DBConnection.getConnection();
            ps = con.prepareStatement(query);
            // set the parameters
            ps.setString(1, id);
            ps.setString(2, pwd);
            rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println("Name=" + rs.getString("name") + ",country=" + rs.getString("country") + ",password=" + rs.getString("password"));
            }
        } finally {
            if (rs != null) rs.close();
            ps.close();
            con.close();
        }
    }
}

Now if we try to hack the system the same way, let's see what happens.

SQL Injection:

Please enter email id:
david@gmail.com' or '1'='1
User id=david@gmail.com' or '1'='1
Please enter password to get details:

User password=
DB Connection created successfully

So we are not able to hack the database. It works because the actual query that gets executed is:

select name, country, password from Users where email = 'david@gmail.com\' or \'1\'=\'1\' and password=''

When we fire a query at a relational database, it goes through the following steps:

- Parsing of the SQL query
- Compilation of the SQL query
- Planning and optimization of the data acquisition path
- Execution of the optimized query, returning the resulting data

With Statement, all four steps run on every execution, but with PreparedStatement the first three steps are performed once, when the prepared statement is created. So execution takes less time and is quicker than with Statement. Another benefit of using PreparedStatement is batch processing through the addBatch() and executeBatch() methods: we can create a single prepared statement and use it to execute multiple queries.
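The same contrast between concatenated SQL and bound parameters can be sketched outside Java. Here is a minimal, self-contained Python/sqlite3 illustration (the table, names, and credentials are made up for the demo; sqlite3 stands in for MySQL):

```python
import sqlite3

def find_user_concat(cur, email, pwd):
    # vulnerable: user input is pasted straight into the SQL text
    query = ("select name from Users where email = '" + email +
             "' and password='" + pwd + "'")
    return cur.execute(query).fetchall()

def find_user_bound(cur, email, pwd):
    # safe: the driver binds values, so quotes in the input stay literal
    query = "select name from Users where email = ? and password = ?"
    return cur.execute(query, (email, pwd)).fetchall()

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table Users (name text, email text, password text)")
cur.execute("insert into Users values ('David', 'david@gmail.com', 'david123')")

injection = "david@gmail.com' or '1'='1"
leaked = find_user_concat(cur, injection, "")  # the row leaks
safe = find_user_bound(cur, injection, "")     # nothing matches
```

Running it shows the concatenated version returning David's row while the bound version returns an empty list, which is exactly the Statement vs PreparedStatement behavior above.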
Some points to remember about JDBC PreparedStatement:

- PreparedStatement returns a FORWARD_ONLY ResultSet, so we can only move in the forward direction.
- Unlike Java arrays or List, the indexing of PreparedStatement parameters starts at 1.
- One limitation of PreparedStatement is that we can't use it directly for SQL queries with an IN clause, because PreparedStatement doesn't allow binding multiple values to a single placeholder (?). However, there are a few alternative approaches; read more at JDBC PreparedStatement IN clause.

That's all for the comparison of JDBC Statement vs PreparedStatement. You should always use PreparedStatement because it's fast, object oriented, dynamic and more reliable.

Naveen says
Actually I also used prepared statements to avoid SQL injections…but why I am using I don’t know. After I read your explanation I understood…. thanks for your explanation…

Payment Gateway Integration says
It totally great for many people who interested in java coding. For it will great. And I also used it and I have so many tricks to apply it in a different way.

saima khan says
Thank You So much. It was helpful. but could you please explain how a hacker can hack the sql code (sql injection).

Amit says
great explanation !

JHON says
IT GOOD FOR POOR PEOPLES LIKE ME

fleetwu says
It is really helpful. Thank you.

Sandeep Kumar says
Well done….

n s says
Thank you for this write-up

sheikh says
Plz stop to make people fool u don’t have idea what r u taking about The query will only fired succesfully if it have regular syntax and without matching of parameters it is not posible to get success for hackers also DBMS’s are highly secured databases not like spreadsheeets

Danish says
Even in the case of statement, password is not showing. He made us fool.

Pankaj says
Hmm, I am sure you guys have not heard of brute force attacks, DDoS etc. Sometimes “Ignorance is Bliss”.

Yadav says
Thanks!
Very helpful and well described tutorial

ammu says
Hi Pankaj, You didn’t explain that how hacker can hack the code. Whatever you wrote that is not well explained.

Dave says
This was helpful, thanks!

Shashank says
In 2nd query select name, country, password from Users where email = ‘david@gmail.com’ or ‘1’=’1′ and password=”” , password is empty ans we are doing and operation , how it will return row as out put ?

David says
This does look wrong. I think password should also have some sql injection e.g. pwd’ or ‘1’=’1 because as you point out, the example in the blog post will only print rows where password=””
https://www.journaldev.com/2489/jdbc-statement-vs-preparedstatement-sql-injection-example
Talk:LXC

Unprivileged containers section confusing

the section about unprivileged containers is confusing, the author creates an "lxc" user and adds subuids/subgids for that user but in fact it seems he's creating/starting the container from a root prompt... if there's no need to give a user permissions to create/start containers, you don't need to create any lxc user in order to create/start an unprivileged container. all you need to do is to create subuids/subgids for the root user, add lxc.id_map parameters to the container's config and create/start the container as root. moreover, using subuids/subgids 100000-165536 didn't work on my hardened box, but 10000-65536 did.
— The preceding unsigned comment was added by Skunk (talk • contribs) 22 February 2016

- Answer - right. With the latest edit this issue is fixed — The preceding unsigned comment was added by Feniksa (talk • contribs) September 12, 2016

Is the "MAJOR temporary problems with LXC" section still needed?

From what I understand from the linked page, user namespaces are now fully implemented and unprivileged containers are now safe. Couldn't we replace this section with a short description of privileged and unprivileged containers? Vdupras (talk) 15:27, 8 December 2017 (UTC)

cgmanager deprecated

The cgmanager has become deprecated (see). It is also not working anymore with current systemd builds. As a workaround the use of the pam module which ships with LXCFS is suggested, but it looks like this does not work with the current builds of gentoo.

configuration files outdated

The configuration options on this page are outdated as of lxc 2.1.1
https://wiki.gentoo.org/wiki/Talk:LXC
How do I get a list of records (like from a database query) in EJB?
Created May 4, 2012
sachin mahishi

[Basically, you have to cheat. Make a custom data class that is not an EJB. Create a Vector or List containing objects of this type, one per EJB; fill the data objects with the same values as are in the EJBs. Then you can return this List, and it will be serialized through RMI, and magically reconstituted on the client side. -Alex]

In your bean:

public Vector getCustomerData() {
    // Connect to DB
    // get result set
    Vector vector = new Vector();
    // iterate, build a CustomerData object per row and add it to the vector
    return vector;
}

In your bean's remote interface:

public Vector getCustomerData() throws RemoteException;

In both client and server:

public class CustomerData implements java.io.Serializable {
    private long id;
    private String name;

    public CustomerData(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() { return id; }
    public String getName() { return name; }
}

In the client (for EJB 1.0 lookups):

Context ctx = getInitialContext(); // get initial context for lookup
CustomerHome custHome = (CustomerHome) ctx.lookup("Customer");
Customer customer = custHome.create();
Vector v = customer.getCustomerData();
// iterate through v
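The underlying pattern — copy each result-set row into a plain, serializable value object and return the collection — is language-neutral. A rough Python sketch of the same idea (all names invented for illustration; a list of tuples stands in for the database result set):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerData:
    # plain value object: no container-managed state, safe to serialize
    id: int
    name: str

def get_customer_data(rows):
    # 'rows' stands in for a result set of (id, name) tuples
    return [CustomerData(id=r[0], name=r[1]) for r in rows]

customers = get_customer_data([(1, "Alice"), (2, "Bob")])
```

The point of the detour through value objects is the same as in the EJB answer: the caller receives dumb data that can cross a serialization boundary, not live handles into the server.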
https://www.jguru.com/faq/view.jsp?EID=74357
', data);

When you need to reply to a message, you can use the 'reply' method of sender.

forceClient.on("text", (e, sender) {
    sender.reply("received", ok_data);
});

Server Side

Import the server-side code for dart force.

import 'package:force/force_serverside.dart';

Instantiate a ForceServer.

ForceServer fs = new ForceServer( port: 9223, startPage: 'start.html' );

Other optional properties that are possible on ForceServer:

- wsPath: the websocket path of the server
- host: the domain name of your application, localhost by default
- port: the address port of the application
- buildPath: the build path of the application; by default this is ../build/web/
- startPage: the start page of the application, the html file name that the app uses as the default root page
- staticDir: the public directory where you can put your stylesheets and images

You can listen for when a new socket connection has been created.

fs.onSocket.listen((SocketEvent se) {
    // socket event
});

You can also annotate a class with the @Receivable annotation, so the system can pick up such classes and automatically register them.

Dart Force mvc access

You have access to the force mvc webserver if you do the following:

forceServer.server.on(url, controllerHandler, method: 'GET');
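The on/reply style above is ordinary callback registration under the hood. As a rough, framework-free illustration (a Python sketch with all names invented — not Dart Force's actual internals), the dispatch could look like:

```python
class MessageBus:
    """Toy event bus: register handlers with on(), fire them with emit()."""

    def __init__(self):
        self.handlers = {}
        self.sent = []  # replies recorded here, for illustration

    def on(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event, data):
        # a real framework would pass a per-connection sender object;
        # here the bus itself plays that role
        for handler in self.handlers.get(event, []):
            handler(data, self)

    def reply(self, event, data):
        self.sent.append((event, data))

bus = MessageBus()
bus.on("text", lambda data, sender: sender.reply("received", data.upper()))
bus.emit("text", "hello")
```

After the emit, the bus has recorded one ("received", "HELLO") reply, mirroring the sender.reply("received", ok_data) call in the Dart snippet.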
https://www.dartdocs.org/documentation/force/0.5.7/index.html
How to: change currency in datafeed

I just got started with Backtrader and I want to start by experimenting a bit with arbitrage concepts in the crypto world. what I'm trying to achieve is to use backtrader to convert data (OHLC) from currency 'X' to 'Y' using historical price data.

using the following data (.csv):
- daily OHLC in 'EUR'
- daily OHLC in 'USD'
- weekly 'EUR' to 'USD' (close only)

I can easily convert all the data files outside of backtrader but I want to try and do it from within using 'simple' backtrader arithmetic. any ideas on how to make it happen?

@Jacob add another set of data that is daily forex data for the conversion needed. then do arithmetic with the two datasets self.datas[0].close vs self.datas[1].close to give you the needed outputs.

@blonc thanks for your answer. I played around with your suggested solution and it works well from inside a strategy class. From my understanding this way the conversion takes place on the fly on every bar. but it seems a bit limited, as I cannot override the original data (I can only make the calculation). what I wanted was to change the original data to the new currency beforehand so I can pass the 'new data' to the cerebro instance. this way I can easily apply indicators, plot...

@Jacob To better understand how a data feed works maybe read CSV Data Feed Development first, then move on to Extending a Datafeed, and everything you want to do should be possible.

maybe using 'filters' can be a solution. as with resampling, I can change the data before it enters cerebro. this would need to work with 2 datafeeds, to create the data with the new currency. is this a good approach or am I just bending it completely? (passing the second datafeed in *args or **kwargs). or maybe loading both datafeeds (.csv) first with Pandas, playing around with the data and adding it to cerebro directly?

@jacob gonna be a lot easier to just write a simple csv data feed handler and a lot cleaner.
most of the code is there to do it, just change as needed.. import the two files, do the conversion and pass it along. this will be totally reusable too for any two files you need.

if you are looking for something quick and dirty this bit of code can do literally everything you want. just import the CSV files, do the math in a np.array and then pass it to _load as an iter. you can add params at the top to pass from the strategy side and there you go.

class MyCryptoCSVData(bt.CSVDataBase):
    params = (
        ('filepath1', None),
        ('filepath2', None),
        ('anything you want', None),
    )

    def start(self):
        # read in the CSV values, do your math and put the list into a iter()
        pass

    def stop(self):
        # close your read files
        pass

- backtrader administrators last edited by

@jacob said in How to: change currency in datafeed:

as I cannot override the original data (I can only make the calculation)

You can add a filter to the target data feed, and the filter may have a reference to the data which holds the conversion rate. The values in the target data feed for the filter will be overridden with anything you decide to output. See: Docs - Filters

thanks @blonc and @backtrader for your responses. I'm exploring both solutions to learn what works best for me. I really like the idea of using a filter to manipulate the data. but I'm not sure the solution I have in mind is doable (I want to use as much of backtrader's flexibility as possible)

so I have 2 data feeds:
- OHLC of some crypto in X currency
- OHLC of X to Y currency

I import the 2 csv files and apply the conversion filter to the first one (also adding resampling to both just to make it interesting :))

now if I understand correctly, after cerebro.run() the system will go bar by bar and apply all the filters one bar at a time. so I will have the corresponding bar for the crypto data but not for the conversion rate data. meaning if I pass the conversion data as a param for the main data filter I would have to do some date matching on each bar to get the right rate?
or is there a clever way to keep the 2 data feeds' bars in sync?

Wowwww my mind is completely blown away. I did a little bit of debugging just to find out that the 2 data feeds are actually in sync!!! and on top of it the last value is accessible via [0]. meaning it automatically points to the last conversion rate, just out of the box

I'm really loving this library :)

I did find in the process something that seems a bit buggy with the dates (working with timestamps). but I need to double check and if so I will open a new post for it.

will also post a short code snippet once I'm done testing it. maybe someone can find it handy

again thanks @backtrader and @blonc for the help :)
after testing and debugging there are some limitations that should be noted:

- for both data feeds to be properly synced, both need to invoke the filter mechanism, otherwise the conversion filter doesn't get any data from the currency rate data. to solve this I resampled the currency rate data feed (which is a filter).
- the first row of the main data gets called prior to the currency data, so there is no rate reference, making the data inconsistent with the conversion
- trying to delete the row to solve problem (2) just makes it propagate to the next row, causing problem (2) recursively.

so a workaround I used is to change all values to 0

in the main code:

data.addfilter(CurrencyConverterFilter, rate = data_rate)

filter:

class CurrencyConverterFilter():
    def __init__(self, data, *args, **kwargs):
        self.currencyRate = None
        if 'rate' in kwargs:
            self.currencyRate = kwargs['rate']

    def __call__(self, data, *args, **kwargs):
        bar = [data.lines[i][0] for i in range(data.size())]

        if len(self.currencyRate):
            rate = self.currencyRate[0]

            # adjust data with new rate
            bar[data.Open] = bar[data.Open] / rate
            bar[data.Close] = bar[data.Close] / rate
            bar[data.High] = bar[data.High] / rate
            bar[data.Low] = bar[data.Low] / rate

            # update stream
            data.backwards(force=True)  # remove the copied bar from stream
            data._add2stack(bar)
            return False  # bar is ready to be processed

        # bar couldn't be converted because the rate data hasn't been initialized
        # can happen if the data feed for the currency:
        # 1. was not passed at all
        # 2. was passed for a future date (no rate reference was given yet)
        # 3. wasn't initialized by the system (can happen in the first iteration)
        #data.backwards()  # remove bar from data stack
        #return True  # tell outer data loop to fetch a new bar
        # unfortunately removing bars causes the sync between data feeds to fail,
        # so data needs to be zeroed manually
        bar[data.Open] = 0
        bar[data.Close] = 0
        bar[data.High] = 0
        bar[data.Low] = 0
        bar[data.Volume] = 0

        # update stream
        data.backwards(force=True)
        data._add2stack(bar)
        return False
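Outside backtrader, the conversion itself is simple arithmetic: scale each daily bar by the most recent weekly rate. A plain-Python sketch (sample dates, prices, and rates invented; the forward-fill mirrors what the filter in this thread does bar by bar, including the "no rate yet" case):

```python
from datetime import date

# daily OHLC bars: (day, open, high, low, close)
daily = [
    (date(2020, 1, 6), 10.0, 11.0, 9.0, 10.5),
    (date(2020, 1, 7), 10.5, 12.0, 10.0, 11.0),
]
# weekly close-only conversion rates, keyed by week start
rates = [(date(2020, 1, 6), 1.25)]

def latest_rate(day):
    # forward-fill: use the last weekly rate at or before 'day'
    usable = [r for d, r in rates if d <= day]
    return usable[-1] if usable else None

def convert(bar):
    d, o, h, l, c = bar
    rate = latest_rate(d)
    if rate is None:
        return None  # analogous to the filter's "no rate reference yet" branch
    return (d, o * rate, h * rate, l * rate, c * rate)

converted = [convert(b) for b in daily]
```

Whether you multiply or divide by the rate depends on which direction the quote is defined in; the structure is the same either way.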
https://community.backtrader.com/topic/1121/how-to-change-currency-in-datafeed
Using the rJava package on Win7 64 bit with R

I'm trying to install rJava on a computer with Win 7 64 bit. When I run install.packages("rJava") everything seems to be fine, but when I try to load the library with library(rJava) I get the following error:

I have installed the Java JRE (trying both 32 and 64 bit versions) and the JDK (using both 32 and 64 bit). My R version is R version 2.13.1 (2011-07-08) and I'm using RStudio. If I try using 32 bit R, I can load rJava fine. Is the problem just that rJava does not run in 64 bit R? How can I tell if a package is just for 32 bit instead of 64?

edit: just posted
I faced the same issue with "Using the rJava package on Win7 64 bit with R":

> Sys.getenv("JAVA_HOME")
[1] "C:\\Program

To use the rJava package with the R (64-bit) version, download and install the Java SE Development Kit 8 for Windows x64 from here. Similarly, for the R (32-bit) version, download and install the Java SE Development Kit 8 for Windows x86 from here. To build or check R packages with multi-arch you need to install both Java for Windows x64 as well as Java for Windows x86.

Getting rJava to work depends heavily on your computer's configuration:

- You have to use the same 32bit or 64bit version for both R and the JDK/JRE. A mixture of the two will never work (at least for me).
- If you use the 64bit version, make sure that you do not set JAVA_HOME as an environment variable. If this variable is set, rJava will not work for whatever reason (at least for me). You can easily check within R whether JAVA_HOME is set.
Download Java JDK for 32/64 bit. For 64-bit I Install rJava with install.packages("rJava") . Error: package/namespace load failed for 'rJava' j'ai installé le Java JRE (en essayant les versions 32 et 64 bits) et le JDK (en utilisant les versions 32 et 64 bits). Ma version R est R version 2.13.1 (2011-07-08) et j'utilise RStudio. si j'essaie d'utiliser 32 bits R, je peux charger rJava très bien. Sorry for necro. I have too run into the same issue and found out that rJava expects JAVA_HOME to point to JRE. If you have JDK installed, most probably your JAVA_HOME points to JDK. My quick solution: Sys.setenv(JAVA_HOME=paste(Sys.getenv("JAVA_HOME"), "jre", sep="\\")) Installing rJava on Windows 10, Using this package, data of population viability analysis (PVA) generated with the correctly with R. Make sure to install a 64-bit JDK if you are using a 64-bit version of R. A typical installation path for Java-based packages like glmulti in a Windows 7 tutorial), verify on Command Prompt (Win + r, “cmd”, Enter): echo Runtime). For 64-bit Revolution R you need to make sure that you have a 64-bit version of Java installed. Make sure that the Java 'CLASSPATH' variable is set properly. For the 64-bit version of the JDK, this is typically set to something like the following(on Windows): 'C:\Program Files\Java\jdk1.7.0_02\jre\lib\ext' Check your system PATH settings. vortexR, In case you tried loading a package that depends on the {rJava} Install rJava with install.packages("rJava"). Set your JAVA_HOME environment with Sys.setenv(JAVA_HOME="C:/Program Files/Java/jdk-10.0.1/") so that it points to your specific (64-bit in my case) folder that contains the jdk . How to load the {rJava} package after the error , I'm having a problem with the rJava package. 
I can download it to my machine (Win 7 64-bit) but when I try to load the package into R (2.15.1, This problem can easily effect Windows 7 users, since they might have installed a version of Java that is different than the version of R they are using. Note that it is necessary to ‘manually download and install’ the 64 bit version of JAVA. R help - rJava install, String)" # Use method "concat" to connect character string. this section: - Windows 7: x86_64-w64-mingw32/x64 (64-bit) - R: version 3.0.1 In Java: Oracle SUN Using the rJava package on Win7 64 bit with R. 929. Failed to load the JNI shared Library (JDK) 966. Can't start Eclipse - Java was started but returned exit code=13 - Can you please paste the results of R.Version()to your question? I have just installed the 64bit version of rJavausing the exact method you describe in your question. My guess is that your version of R is 32 bit. - pls share your ans how you solved it. - Adding a comment here as it may help someone the task of going through the whole solution list. If you get the above error, check if your Java is in Program Filesor Program Files (x86). Add path to windows as the accepted answer says. If Program Files (x86), it means you have 32-bit version, follow the link sites.google.com/site/rforfishandwildlifegrads/home/week_2/…. - Rather than leaving the second argument blank, you can just do install.packages('rJava', repos=''). - As pointed out in the answer by @user2161065, loading rJava requires removing the JAVA_HOME variable - Although all this might be needed, even after doing this I had to unset JAVA_HOME as suggested by @user2161065 - +1 Adding ...\jre\bin\serverto path fixed it for me - Where can we find this "Windows PATH" variable? - The previous solutions did not work until I got rid of JAVA_HOME. This is key. - This JAVA_HOME issue should be reported to rJava as a bug. 
- This really did the trick for me : if (Sys.getenv("JAVA_HOME")!="") Sys.setenv(JAVA_HOME="") library(rJava) Thanks @user2161065 - adding that script to remove the JAVA_HOME var worked for me!
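The "append jre" trick in the answer above is just path manipulation on an environment variable. A quick language-neutral illustration in Python (the JDK path is invented; this only shows the string handling, not an R fix):

```python
import os

def point_at_jre(java_home):
    # rJava reportedly wants JAVA_HOME to point at the JRE inside a JDK,
    # i.e. <JAVA_HOME>\jre rather than the JDK root itself
    return os.path.join(java_home, "jre")

jdk = r"C:\Program Files\Java\jdk1.7.0_02"  # hypothetical install path
jre = point_at_jre(jdk)
```

This mirrors Sys.setenv(JAVA_HOME=paste(Sys.getenv("JAVA_HOME"), "jre", sep="\\")) from the thread: same value, one extra path component.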
https://thetopsites.net/article/54214140.shtml
Serverless In Just a Few Lines of Code

Pulumi makes serverless applications easy by letting you focus on business logic, managing infrastructure in the same familiar language you're already writing your code in. Any code, any cloud, any language.

Creating a Serverless REST API

This example shows how to create a simple REST API that counts the number of times a route has been hit. To implement this API, we need a DynamoDB table, an API endpoint, and a Lambda function.

import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

// Create a mapping from 'route' to a count.
let counterTable = new aws.dynamodb.Table("counterTable", {
    attributes: [{ name: "id", type: "S" }],
    hashKey: "id",
    readCapacity: 5,
    writeCapacity: 5,
});

// Create an API endpoint.
let endpoint = new awsx.apigateway.API("hello-world", {
    routes: [{
        path: "/{route+}",
        method: "GET",
        eventHandler: async (event) => {
            let route = event.pathParameters!["route"];
            let client = new aws.sdk.DynamoDB.DocumentClient();

            // Get previous value and increment our table entry.
            let tableData = await client.get({
                TableName: counterTable.name.get(),
                Key: { id: route },
                ConsistentRead: true,
            }).promise();

            let value = tableData.Item;
            let count = (value && value.count) || 0;
            await client.put({
                TableName: counterTable.name.get(),
                Item: { id: route, count: ++count },
            }).promise();

            return {
                statusCode: 200,
                body: JSON.stringify({ route, count }),
            };
        },
    }],
});

exports.endpoint = endpoint;

const aws = require("@pulumi/aws");

// A storage bucket.
const videos = new aws.s3.Bucket("bucket");

// Trigger a Lambda function when something is added.
videos.onPut("onNewVideo", bucketArgs => {
    console.log(`*** New Item in Bucket`);
});

// Export the bucket name.
exports.bucketName = videos.bucket;

Stash info into a document database

This example uses a serverless timer that fetches the Hacker News homepage every day at 8:30AM and stashes it into a document database, making use of Pulumi's ability to reference resources by capturing them inside of serverless lambdas.

const aws = require("@pulumi/aws");

const snapshots = new aws.dynamodb.Table("snapshots", {
    attributes: [{
        name: "id",
        type: "S",
    }],
    hashKey: "id",
    billingMode: "PAY_PER_REQUEST",
});

aws.cloudwatch.onSchedule("daily-yc-snapshot", "cron(30 8 * * ? *)", () => {
    require("https").get("https://news.ycombinator.com", res => {
        let content = "";
        res.setEncoding("utf8");
        res.on("data", chunk => content += chunk);
        res.on("end", () => new aws.sdk.DynamoDB.DocumentClient().put({
            TableName: snapshots.name.get(),
            Item: { date: Date.now(), content },
        }).promise());
    });
});

import * as aws from "@pulumi/aws";

// Run a timer every minute:
aws.cloudwatch.onSchedule("everyMinute", "rate(1 minute)", async (event) => {
    console.log(`everyMinute: ${Date.now()}`);
});

// Run a timer every minute (cron-style expression):
aws.cloudwatch.onSchedule("everyMinuteCron", "cron(0 * * * * *)", async (event) => {
    console.log(`everyMinuteCron: ${Date.now()}`);
});

// Run a timer every day at 7:30 UTC:
aws.cloudwatch.onSchedule("everyDay730", "cron(30 7 * * ? *)", async (event) => {
    console.log(`everyDay730: ${Date.now()}`);
});

The next example uses an SNS topic to hold a list of website URLs to crawl, and does so every time a new message arrives.
import * as aws from "@pulumi/aws";
import * as fetch from "node-fetch";

const topic = new aws.sns.Topic("sites-to-process-topic");

topic.onEvent("for-each-url", async (event) => {
    const records = event.Records || [];
    for (const record of records) {
        // Fetch the contents at the URL
        const url = record.Sns.Message;
        console.log(`${url}: Getting`);
        try {
            const res = await fetch.default(url);
        } catch (err) {
            console.log(`${url}: Failed to GET`);
            return;
        }
    }
});

How Pulumi Works

Build
- Code in modern languages
- Share and reuse patterns
- Use your favorite IDE and tools

Deploy
- Preview changes
- Run pulumi up to deploy
- Integrate with CI/CD

Manage
- Audit all changes
- Manage complex environments
- Implement policies and controls

Use Pulumi's open source SDK to create, deploy, and manage infrastructure on any cloud.

Need Help?

Need technical help? Use our Support Portal to get in touch. Want to speak with sales? Something else on your mind? Send us a note.
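The get-increment-put flow of the REST counter example is easier to see with DynamoDB swapped for an in-memory dict. A toy Python sketch (no AWS involved; purely illustrative of the handler's logic):

```python
counter_table = {}  # stands in for the DynamoDB table, keyed by route

def handle_get(route):
    # read the previous value, increment, write back — same shape as the
    # Lambda eventHandler in the TypeScript example above
    count = counter_table.get(route, 0) + 1
    counter_table[route] = count
    return {"statusCode": 200, "body": {"route": route, "count": count}}

first = handle_get("hello")
second = handle_get("hello")
```

Note that a real deployment has a race between the get and the put under concurrent requests; DynamoDB offers atomic counters (update expressions) for that, which the sample trades away for readability.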
https://www.pulumi.com/serverless/
Constants are fixed-value variables whose value cannot be altered throughout the execution of a program. The fixed value is known as a literal. You can define a constant of any type. Constants behave like normal variables except that they are read-only (once assigned, they cannot be modified). C supports two styles of constant definition. But before we learn to define constants, let us identify the need for them.

Why use a constant variable?

To understand the need for a constant variable, consider the below program to calculate the area of a circle.

#include <stdio.h>

int main()
{
    float radius, pi, area;

    pi = 3.14159;
    radius = 12;
    area = pi * radius * radius;

    printf("Area = %f", area);

    return 0;
}

In the above program we declared and used pi as a normal variable, so we can modify its value anywhere in the program. In bigger programs you may accidentally alter the value of pi, which would be a nightmare for every expression that depends on it. Since pi is a mathematical constant whose value is fixed, declaring it as a constant restricts programmers from altering its value. The compiler looks for changes to the constant variable and reports an error if it finds any. Let us improve our program using a constant variable.

#include <stdio.h>

int main()
{
    const float PI = 3.14159;
    float radius, area;

    PI = 3.14; // <-- Will generate error, it must not be modified

    radius = 12;
    area = PI * radius * radius;

    printf("Area = %f", area);

    return 0;
}

This time the compiler detects the modification of a constant variable and generates the error message: assignment of read-only variable 'PI'.

Defining a constant using the const keyword

A variable declared with the const keyword is marked as read-only. The compiler looks for modification of read-only variables and reports an error if found.
Syntax to define a constant using const

const <data-type> <constant-name> = <constant-value>;

Or

<data-type> const <constant-name> = <constant-value>;

Example of defining a constant using const

const float PI = 3.14159f;
float const e = 2.71828f;

Defining a constant using #define

#define is a pre-processor directive used to define constants and macros. It defines a compile-time constant and guarantees 100% constantness. Unlike the const keyword, it does not define any variable and doesn't consume memory. Rather, during the compilation process the pre-processor replaces every occurrence of the defined constant with its literal value. Hence, it is also called a compile-time constant.

Syntax to define a constant using #define

#define <constant-name> <constant-value>

Example of defining a constant using #define

#include <stdio.h>

#define PI 3.14159 // <-- Define constant PI

int main()
{
    float radius, area;

    printf("Enter radius: ");
    scanf("%f", &radius);

    area = PI * radius * radius;

    printf("Area = %f", area);

    return 0;
}

#define PI 3.14159 defines a constant PI with value 3.14159. The pre-processor replaces all occurrences of PI with 3.14159 before compilation.

Important note: it is a long-standing convention, and good programming practice, to write constant names in ALL CAPS.
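The same magic-number argument applies in languages that have no const at all. In Python, for instance, the ALL-CAPS convention is the only guard — nothing enforces immutability — but the readability and single-point-of-change benefits are identical (a small illustrative sketch, not from the article):

```python
# Python has no 'const'; an ALL-CAPS name signals "treat as read-only"
PI = 3.14159

def circle_area(radius):
    # every use of the constant refers to one definition, so changing
    # the precision later means editing a single line
    return PI * radius * radius

area = circle_area(12)
```

This mirrors the C version above: the computation for radius 12 yields the same 452.38896 area, but only the C const/#define forms get compiler enforcement.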
http://codeforwin.org/2017/08/constants-c-programming.html
Awesome Array Driver - arenaudineau/AwesomeArray-PythonDriver Wiki

Welcome to the AwesomeArray-PythonDriver wiki!

Installation verification

By following the quick installation guide from the README, you should be able to import the library wherever you want:

import aad

Several errors can show up:

Missing aad: ModuleNotFoundError: No module named 'aad'. The library has not been correctly installed. [...]

Missing B1530driver: ModuleNotFoundError: Failed to import B1530driver. Please make sure that the B1530driver files are in 'extlibs/B1530Driver'. The two licensed files B1530driver.py and B1530ErrorModule.py are missing; you must provide them in aad/extlibs/B1530Driver, alongside the DRIVER_SCRIPTS_HERE file.

Missing wgfmu: a DLL error occurs with wgfmu.dll. wgfmu.dll failed to load; it must be in C:\B1530A_InstLib\. Alternatively, modify its path in aad/extlibs/B1530Driver/B1530driver.py with the variable wgfmu_link. If it still fails to load, then ¯\_(ツ)_/¯

Initialization

driver = aad.AwesomeArrayDriver()

Again, several errors may occur:

Microcontroller not found: Exception: µc not found, please verify its connection or specify its PID. Make sure the microcontroller is connected. There are two USB ports on the board: PWR is used to power and program it and is NOT used by the driver; it may be connected to the PC the driver is run from, or to another one. USER is used for serial communication and IS used by the driver; it must be connected to the PC the driver is run from. By default, the driver looks for a USB device with a Product IDentifier (PID) of 22336. For some reason the board may not have this one. You can either change it by reprogramming the microcontroller (in STM32Cube, in the project's .ioc file, search for USB_DEVICE in the left panel, scroll down to the Device Descriptor panel, PID (Product IDentifier)) or provide it to the driver: aad.print_ports() will show every connected device and its associated PID (⚠️ this is NOT "STMicroelectronics STLink Virtual COM Port"). You can then use:

driver = aad.AwesomeArrayDriver(uc_pid={correct_pid})

Microcontroller port already open: serial.serialutil.SerialException: could not open port '{PORT}': PermissionError(13, 'Access is denied.', None, 5). The serial connection is already open somewhere else; check that there are no other instances of AwesomeArrayDriver running (for example in another Jupyter Notebook; you may want to interrupt/restart the kernel).

Lost microcontroller connection: serial.serialutil.SerialException: WriteFile failed (PermissionError(13, 'The device does not recognize the command.', None, 22)). This error will not happen right after initialization but can appear during use. It means the USB cable has been unplugged or that the connection has timed out.

B1530 not found: Exception: -5: Error in WGFMU_openSession("GPIB0::18::INSTR"); viReadSTB returned -1073807304. Check the connection, or that EasyEXPERT on the B1500A has been closed.

B1530 already open: Exception: -3: Error in WGFMU_openSession("GPIB0::18::INSTR"); A session has already been opened by WGFMU_openSession("GPIB0::18::INSTR"). The connection is already open somewhere else; check that there are no other instances of AwesomeArrayDriver running (for example in another Jupyter Notebook; you may want to interrupt/restart the kernel).

Usage

For now, the driver configures the Awesome Array in CARAC mode by default, and can only be used in that mode. A memristor is addressed by three parameters: col, its column index in the Test Array; row, its row index in the Test Array; and bar, a boolean to address the complementary memristor if True (False by default). On each of them you can perform the operations form, set, reset and read. The first three do not return anything; the last one returns the memristor value in ohms (Ω). Words in the Awesome Array are 64 bits long (64 × 64 × 2 = 8,192 memristors). The constant aad.SR_WORD_SIZE is set to this value for more explicit code.

Examples

"""
Forms each memristor.
⚠️ It takes several **minutes** to run, the bottleneck being the calls to the B1530 underlying driver.
"""
for col in range(aad.SR_WORD_SIZE):
    for row in range(aad.SR_WORD_SIZE):
        driver.form(col=col, row=row, bar=False)
        driver.form(col=col, row=row, bar=True)

"""Toggles the complementary memristor at column 6, row 42"""
RES_LIMIT = 5e3

res = driver.read(col=6, row=42, bar=True)
print("Before toggle:", res)

if res > RES_LIMIT:
    driver.set(col=6, row=42, bar=True)
else:
    driver.reset(col=6, row=42, bar=True)

res = driver.read(col=6, row=42, bar=True)
print("After toggle:", res)
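The toggle example above can be wrapped into a small reusable helper. This is only a sketch assuming the read/set/reset API described above; the toggle helper and the FakeDriver stand-in are hypothetical names, not part of the library (FakeDriver just lets the logic run without hardware):

```python
RES_LIMIT = 5e3  # resistance threshold in ohms, as in the wiki example

def toggle(driver, col, row, bar=False, limit=RES_LIMIT):
    """Flip the memristor at (col, row): set it if it reads above `limit`
    (high-resistance state), reset it otherwise; return the new reading."""
    if driver.read(col=col, row=row, bar=bar) > limit:
        driver.set(col=col, row=row, bar=bar)
    else:
        driver.reset(col=col, row=row, bar=bar)
    return driver.read(col=col, row=row, bar=bar)

class FakeDriver:
    """Hardware-free stand-in mimicking the driver API, for illustration only."""
    def __init__(self, res):
        self.res = res
    def read(self, col, row, bar=False):
        return self.res
    def set(self, col, row, bar=False):
        self.res = 1e3   # pretend low-resistance state
    def reset(self, col, row, bar=False):
        self.res = 1e5   # pretend high-resistance state

print(toggle(FakeDriver(1e6), col=6, row=42, bar=True))
```

With the real hardware you would pass an aad.AwesomeArrayDriver() instance instead of FakeDriver.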
https://github-wiki-see.page/m/arenaudineau/AwesomeArray-PythonDriver/wiki/Awesome-Array-Driver
Represents a directory stream.

#include <DirHandle.h>

An opendir function returns objects of this type. The core functions are read and seek, but only a subset needs to be provided. If a FileSystemLike class defines the opendir method, then you can access the directories of an object of that type by either: or to open the root of the file system. The root directory is considered to contain all FileHandle and FileSystem objects, so the DIR pointer returned by opendir("/") reflects this.

Definition at line 55 of file DirHandle.h.

Get the size of the directory. Definition at line 93 of file DirHandle.h.
https://os.mbed.com/docs/mbed-os/v6.15/mbed-os-api-doxy/classmbed_1_1_dir_handle.html
Samuel Tissot-Jobin

Why not use PHP's namespace capabilities instead of long, weird function names? For example:

<?php
namespace treehouse\wordpress;

function theme_styles() {}
?>

1 Answer

Minh Duc Nguyen

I think it's very useful to do that, although it would require students to understand namespacing and the syntax for loading a namespaced function, much like this:

add_action('wp_enqueue_style', 'treehouse\\wordpress\\theme_styles');

Although I would prefer to use the PHP magic __NAMESPACE__ constant in place of that:

add_action('wp_enqueue_style', __NAMESPACE__ . '\\theme_styles');

This is explained much better by this StackOverflow post:

And I'm intrigued by the idea of using namespaced classes with hooks, all grouped inside the class constructor.
https://teamtreehouse.com/community/why-not-use-php-namespace-capabilities-instead-of-long-weird-function-names
On Windows, assuming imports are free:

import winsound
winsound.Beep(220*((2**(1/12.0))**7), 2000)

But that's just because I like to tune to E. If you prefer a more "natural looking" note, you can use A:

winsound.Beep(110, 1000)

But why choose at all when you can go for all of them?

[winsound.Beep(220*((2**(1/12.0))**i), 500) for i in [7, 2, -2, -7, -12, -17]]

Nice! But it's not a deaf-friendly guitar tuner like the FFT tuner you wrote a few years ago :) Got a new project for ya: check the influence of this one-liner on mosquitoes.

Pingback: Producing Guitar Sounds with Python (用Python奏出吉他声) | Code之行人
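The magic numbers in the list comprehension are equal-temperament offsets: a note i semitones away from A3 (220 Hz) has frequency 220 * 2**(i/12). Here is a sketch that spells that out; the note names are just my labels for standard guitar tuning, and the results are rounded because winsound.Beep only accepts an integer frequency:

```python
SEMITONE = 2 ** (1 / 12.0)

# Offsets (in semitones) of the six open strings relative to A3 = 220 Hz:
# E4 B3 G3 D3 A2 E2, matching the list in the one-liner above.
OFFSETS = {"E4": 7, "B3": 2, "G3": -2, "D3": -7, "A2": -12, "E2": -17}

def string_frequencies(base=220.0):
    """Return each open string's frequency in Hz, rounded to an integer."""
    return {name: round(base * SEMITONE ** i) for name, i in OFFSETS.items()}

print(string_frequencies())
```

On Windows you could then feed each value to winsound.Beep(freq, 500).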
http://www.algorithm.co.il/blogs/programming/python/one-liner-guitar-tuner-in-python/
A new MP has shipped for Configuration Manager 2007: MP version 6.0.6000.3. This MP update will upgrade your existing MP. It was updated to resolve the top customer issues with this management pack. The following changes, also covered in the updated MP guide, are in version 6.0.6000.3.

This is a big deal: this caused a lot of alert noise to repeat, even for status messages that no longer existed! I wrote about this previously at the following link, and provided an addendum MP which contained overrides to disable these consolidation rules. My addendum MP is no longer necessary and can be removed, if you were running it: Want to drastically quiet down your ConfigMgr 2007 MP-

If you were running the Post-R2 LocalizedText cleanup scripts on a regular basis to control the size of your LocalizedText table, this should no longer be necessary. I wrote more about that here: Clean up LocalizedText

This caused a lot of false alerts and script errors previously. Big fix.

This was a complaint which caused the topology diagrams to not populate.

Now ALL alerts expose overrides for priority and severity, which is critical for your tuning for notifications. Which means you can override and add diagnostics/recoveries to anything. Sweet update!

Recommend you get this one, and review your existing overrides to ensure they are still needed. Kudos to the support teams for pushing this through. Read the official announcement here:

Known issues in this MP:

1. If your ConfigMgr servers are running on the 64-bit Server 2008 R2 platform, you might see alerts about failing to read %PROCESSOR_ARCHITECTURE:. This is an OS issue with the WMI Win32_Environment class. The scripts will work most of the time, but randomly you will see failures of the scripts due to the inability to read from Win32_Environment. There is a hotfix available; you will need to open a support case with Microsoft to request this until it is available externally or included in a service pack.
The KB number to request is KB2692929.

Have you removed "Want to drastically quiet down your ConfigMgr 2007 MP"? The link doesn't work? Does this also work for 6.0.6000.3?

@Maekee Yes - I deleted that post since I didn't want people to download it anymore, as it no longer applies. It is not necessary for this new version.

Hi Kevin. After we imported the new MP for SCCM we are getting a lot of alerts with "Script error.", e.g. this one: Common to all of them is this: Is this a known error?

@RHC - I have been hearing that one. Are you running SCCM on 64-bit Server 2008 R2 by chance?

We have the same %PROCESSOR_ARCHITECTURE error. The error seems to be random - i.e. not every time the script runs. I have verified the namespace, class and value exist on all our servers. We are running on Server 2008 R2. Perhaps this is due to a timeout querying the value which the script does not handle correctly?

Hi, did you find a solution for the "%PROCESSOR_ARCHITECTURE error."? We have the same problem, Windows 2008 R2 with Config Manager 2007 SP2 R3... thanks, Adrian

Hi, Quite a long thread about this issue on SC Central, no solution though. I'm also experiencing the problem, Server 2008 R2, Config Mgr 2007 SP2.

Hi all, Just wondering if there is a solution to the %PROCESSOR_ARCHITECTURE issue? I'm getting 700+ alerts with high repeat counts since the upgrade. Running Server 2008 R2 SP1 with SCCM R3 SP2. Cheers, Leigh.

I'm getting the same issue here too... Just out of curiosity, shouldn't the variable be '%PROCESSOR_ARCHITECTURE%' rather than having a missing trailing percent symbol? Maybe that's why it can't be read?

Looking at the script (in the C:\Program Files\System Center Operations Manager 2007\Health Service State\Management Packs\Microsoft.SystemCenter.ConfigurationManager.2007.{15B57583-68DE-902F-D349-2D4718F3A7A2}.{2C1F473C-4D0F-85B4-EA00-C36190EEDEFF}.xml file) I can see that the error is just a typo in the error string.
Interestingly, there is a section commented out with "Removing use of unreliable CheckWow64RegistryKeyExists function" and a bit underneath, "Adding this instead", where the error is being generated from. I've recommissioned the 'unreliable' CheckWow64RegistryKeyExists function to see if the error goes away. J.

The above file had a bunch of sections with the "Removing use of unreliable CheckWOW64RegistryKeyExists function" lines in them. On the OpsMgr central server I edited the file, replacing each section with this:

'Removing use of unreliable CheckWow64RegistryKeyExists function
If (CheckWow64RegistryKeyExists() = True) Then
    REG_PATH_SMS = "SOFTWARE\Wow6432Node\Microsoft\SMS"
Else
    REG_PATH_SMS = "SOFTWARE\Microsoft\SMS"
End If
'Adding this instead
'On Error Resume Next
'Dim sCimv2namespace, sProcArchQuery, oProcArchObjectSet, oProcArchObject, sProcArch
'sCimv2namespace = "winmgmts:\\.\root\cimv2"
'sProcArchQuery = "Select * From Win32_Environment Where Name = ""Processor_Architecture"""
'Set oProcArchObjectSet = WMIExecQuery (sCimv2namespace, sProcArchQuery)
'Set oProcArchObject = oProcArchObjectSet.Item("Win32_Environment.Name=""PROCESSOR_ARCHITECTURE"",UserName=""<SYSTEM>""")
'sProcArch = oProcArchObject.VariableValue
'On Error GoTo 0
'Select Case sProcArch
'Case "AMD64"
'    REG_PATH_SMS = "SOFTWARE\Wow6432Node\Microsoft\SMS"
'Case "x86"
'    REG_PATH_SMS = "SOFTWARE\Microsoft\SMS"
'Case Else
'    ScriptError "read %PROCESSOR_ARCHITECTURE environment variable from Win32_Environment WMI class."
'End Select

Then I deleted the contents of the "C:\Program Files\System Center Operations Manager 2007\Health Service State\Management Packs" folder on the clients and restarted the System Center Management service. The problem has now gone and no more have surfaced (so far), so I presume that the script is now working reliably on my servers.

__________ boredazfcuk

Same issue here... ' ScriptError "read %PROCESSOR_ARCHITECTURE environment variable from Win32_Environment WMI class."
We have an open case on this, but no resolution so far. Seems to be another WMI bug... Meanwhile we will try the workaround of checking the registry instead of WMI...

Another issue could be fixed for us - we had problems with the time zone settings. Alert: Collection Evaluation task 'xxxx' is still running. The task started at 'xxxxx' and was expected to complete within '300' seconds. It has now run for '3601' seconds. There is a script error in how summer/winter time is handled! Best regards, Peter

On the "%PROCESSOR_ARCHITECTURE%" issue - we are pretty sure we have the root cause nailed down to a WMI issue with how Win32_Environment queries work.... It is a random issue. The script works most of the time but you do end up with some of these. It comes down to multiple scripts running at the same time hitting this same namespace. I am working to identify which scripts call on this, and how to create a strategy to limit them stepping on each other as a short-term workaround.

I just set up an SCOM 2007 R2 instance with the 6.0.6000.3 MP monitoring my SCCM server, and I've gotten a couple of the errors, on PXE and NLB Management Point (we aren't using any NLB). The Collection Evaluation error messages (Current Task Duration (seconds): 3603, Maximum Task Duration (seconds): 300) are also occurring - glad to know that my rules are fine and it's just daylight saving time throwing things off. Looking forward to an MP update (or patch/fix) for these, since I'm not so keen on manually editing the MP scripts on the servers right now.

@Kevin Do you have any update on the "%PROCESSOR_ARCHITECTURE%" issue?
http://blogs.technet.com/b/kevinholman/archive/2011/09/30/mp-update-new-configmgr-2007-mp-version-6-0-6000-3-resolves-top-issues.aspx?PageIndex=1
Hey, Scripting Guy!

Desktop Management from beyond the Grave

Hint: the final script starts out with a full path to a file then parses out and displays just the file name.

The Microsoft Scripting Guys

Download the code for this article: HeyScriptingGuy2007_11.exe (151KB)

Due to overwhelming popular demand, we thought we'd do something a little different this month: rather than start out by talking about system administration scripting, we're going to start out by—cue the ominous music—telling a ghost story instead!

Note: OK, so technically, if we really wanted to do something a little different this month we'd start out by actually talking about system administration scripting for a change. But just play along with us, OK? Thanks!

Many years ago, one of the Scripting Great-Great-Great Grandmothers passed away. Shortly after Grandmother was laid to rest in her simple wooden coffin, Grandfather began having terrible nightmares, nightmares in which his beloved wife was desperately trying to claw her way out of the grave. After repeated nightmares, and after repeated entreaties, Grandfather finally convinced the local authorities to exhume the body. When the coffin was opened, everyone was horrified to see that Grandmother's nails had been bent back and the inside of the coffin was covered with scratches!

OK, so this story might not be entirely true; in fact, the more we think about it, the more we realize that it isn't the least bit true. Nevertheless, the story has an important lesson to teach us. We have no idea what it is, but it's in there somewhere. Wait a second, now we remember! Coffins were originally designed to protect the deceased from the elements and to help prevent the body from decomposing. Unfortunately, coffins had an unintended consequence: they make it possible, in theory anyway, for you to bury a person alive and ensure that he can never get out.
As the story of the Scripting Great-Great-Great Grandmother clearly shows, even the best-laid plans of mice and men can result in disaster, and in people being buried alive! (Cue the ominous music one more time.) Note: Unless, of course, you opt for the Improved Burial Case, invented by Franz Vester in the 1860s. This coffin included a string connected to a bell that remained aboveground; in the case of premature burial, the "deceased" could simply ring the bell and summon help. The Improved Burial Case also included a folding ladder, although it's not entirely clear to us how a folding ladder will help you escape from a coffin buried six feet underground. If you happened to be buried on top of someone's garage, sure, a folding ladder would be useful. Otherwise .... This very same thing (the thing about best-laid plans leading to disaster) is true of Internet firewalls. (Well, sort of.) Firewalls were originally designed to keep the bad guys out: they block incoming network traffic, which helps to keep hackers and intruders away from your computers. That's great, but—like the problem of being buried alive—there's also an unintended consequence here: firewalls can also keep the good guys out. This is especially true in the case of Windows® Management Instrumentation (WMI), which relies on DCOM to perform administrative tasks on remote computers. Firewalls tend to block all incoming DCOM traffic, something that makes it very difficult (if not downright impossible) to programmatically manage computers over the Internet. In fact, without opening up additional ports on the firewall and thus making you more vulnerable to hackers and crackers, this is downright impossible. Unless, of course, you opt for WinRM: Windows Remote Management (folding ladder not included). What Is Windows Remote Management? 
According to the WinRM SDK (msdn2.microsoft.com/aa384426), Windows Remote Management is "the Microsoft implementation of WS-Management Protocol, a standard SOAP-based, firewall-friendly protocol that allows hardware and operating systems from different vendors to interoperate." Impressive, huh? We're not going to discuss the details of the WS-Management Protocol in this month's column, so we recommend reading the WinRM SDK for the details. For now, all we care about is that WinRM is available on Windows Server® 2003 R2, Windows Vista®, and Windows Server 2008, and that it enables you to manage computers over the Internet. WinRM does this using port 80, a standard Internet services port that most firewalls leave open. (However, the port used by WinRM and the default transport mechanism, HTTP, can be changed as needed.) We also won't spend any time in this month's column discussing how to install and configure WinRM. There's already plenty of information available to help you with that (msdn2.microsoft.com/aa384372). However, we will take a moment to emphasize one important point: if you want to use WinRM to retrieve information from a remote computer (which, of course, is the primary reason for using WinRM in the first place), then both your local machine and the remote computer must be running WinRM. What does that mean? Well, it means that if you haven't upgraded your client computers to Windows Vista (say it isn't so!), or you haven't upgraded your servers to Windows Server 2003 R2 or Windows Server 2008, you won't find WinRM to be particularly useful, at least not today. Needless to say, however, that probably won't be the case tomorrow. (And, of course, assuming your firewall allows it, you can always use WMI and DCOM to manage remote computers.)

Returning All the Properties and Instances of a Class

But who cares about caveats and disclaimers, right?
Instead of all that mumbo jumbo, let's see if we can figure out how to write a script that takes advantage of WinRM. Coincidentally enough, we just happened to have a simple little script that, using the HTTP protocol and port 80, connects to a computer named atl-fs-01.fabrikam.com and then returns complete information about all the services installed on that computer. See Figure 1 for the script in all its glory.

strComputer = "atl-fs-01.fabrikam.com"
Set objWRM = CreateObject("WSMan.Automation")
Set objSession = objWRM.CreateSession("http://" & strComputer)
strResource = ""
Set objResponse = objSession.Enumerate(strResource)

As you can see, we start out by assigning the DNS name of the computer (atl-fs-01.fabrikam.com) to a variable named strComputer. Alternatively, we could make the connection using the computer's IP address (or even its IPv6 address). For example: After assigning a value to strComputer we next create an instance of the WSMan.Automation object, then we call the CreateSession method to connect to the remote machine, in this case using the HTTP protocol (just like we said we were going to do): As we noted, we want to return information about the services installed on the remote computer. In addition, and at least for this first example, we want information about all the properties of all the services. What does all that mean? That means that we need to specify a URI Resource that binds us to the Win32_Service class on the remote computer: Granted, that's not the prettiest URI we've ever seen. (Although, come to think of it, we're not sure we've ever seen a pretty URI.) Fortunately, though, most of the URI is boilerplate; all you need to worry about is the WMI path at the very end: That should be pretty straightforward. What if you wanted to connect to the root/cimv2/Win32_Process class? Well, then you just modify the URI path accordingly: Interested in the root/default/SystemRestore class?
Well, once again, just modify the URI class, taking care to specify the default namespace (rather than the cimv2 namespace): And so on.... It's a bit of a shame that you need to include the portion of the URI as well, but.... At this point we're ready to get back some data. To do that, we simply call the Enumerate method, passing the variable strResource as the sole method parameter: Will that line of code truly populate objResponse with information about the services installed on the computer atl-fs-01? You bet it will. However, unlike standard WMI scripts, you don't get back a series of objects, each with its own properties and property methods. Instead, you'll get back a big old XML blob that looks a little like what you see in Figure 2. <p:Win32_Service xmlns:<p:AcceptPause>false</p:AcceptPause><p:AcceptStop>false</p:Acce ptStop><p:Caption>Windows Media Center Service Launcher</p:Caption><p:CheckPoint >0</p:CheckPoint><p:CreationClassName>Win32_Service</p:CreationClassName><p:Desc ription>Starts Windows Media Center Scheduler and Windows Media Center Receiver services at startup if TV is enabled within Windows Media Center.</p:Description ><p:DesktopInteract>false</p:DesktopInteract><p:DisplayName>Windows Media Center If you're an XML whiz, that's no big deal; anyone familiar with XML should be able to parse and output this information without too much trouble (even though, in the words of the WinRM SDK, this information is not in "human-readable format"). But what if you're not an XML whiz? In that case, you have two choices. One, you can wait until next month, when we'll show you a few tricks for working with WinRM's XML. Or two, you can do what we did in our sample script: employ the XSL transform that gets installed alongside WinRM. The XSL What-form? An XSL transform is nothing more than a template that describes how an XML file should be displayed. 
A complete discussion of XSL files goes way beyond what we can do in this month's column—for that matter, even a cursory discussion of XSL files goes way beyond what we have the capacity to do in this month's column. Therefore, we won't try to explain how WsmTxt.xsl (the name of the built-in transform) actually works. Instead, we'll simply show you how you can use that transform in your script. When you call the Enumerate method, WinRM sends back a stream of XML data. The easiest way to work with this data is to set up a Do Until loop that continues to run until you reach the end of the data stream. That's what we do here: As you can see, inside our loop we call a subroutine named DisplayOutput. When we call that subroutine, we pass along the value of the stream's ReadItem method as the subroutine parameter. (As this whole approach implies, the XML stream is sent back in separate pieces rather than as one large blob of data. Our script, in turn, reads the XML data one piece, or one item, at a time.) Meanwhile, the DisplayOutput subroutine looks like this: In brief, we start out by creating two instances of the MSXml2.DOMDocument.3.0 object. We load the XML data stream (strWinRMXML) into one object, then we load the XSL file (WsmTxt.xsl) into the other object. At that point we call the TransformNode method to use the information in the XSL file to format and display the data grabbed from the XML stream. Yes, it's a little confusing. But at least the output (while far from perfect) is a bit easier to read (see Figure 3).

Win32_Service
    AcceptPause = false
    AcceptStop = true
    Caption = User Profile Service
    CheckPoint = 0
    CreationClassName = Win32_Service
    Description = ...

Like we said, this is good, but it's not necessarily great, which is all the more reason to tune in next month, when we'll show you a few ways to manipulate the XML output by yourself.
Returning Selected Instances and Properties of a Class

Needless to say, this is all very cool, except for one thing: it might not be fully reflective of the way you typically do your job. Yes, there will be times when you want to return all the properties of all the instances of a class; however, there will also be times (perhaps many more times) when you want to return only selected properties or instances of a class. For example, you might want to return only information about services that are running, something you do in a regular WMI script by using code similar to this: That's nice. But how in the world are you going to modify your Resource string to make it equivalent to that? Well, to be perfectly honest, you aren't going to modify your Resource string to make it equivalent to the ExecQuery statement. You will definitely need to modify the Resource string, but you'll need to do a few other things as well. With that in mind, let's take a peek at Figure 4. This is a WinRM script that returns information about the services that are running on a computer (as opposed to all the services installed on that computer).

strComputer = "atl-fs-01.fabrikam.com"
Set objWRM = CreateObject("WSMan.Automation")
Set objSession = objWRM.CreateSession("http://" & strComputer)
strResource = "*"
strFilter = "Select * From Win32_Service Where State = 'Running'"
strDialect = ""
Set objResponse = objSession.Enumerate(strResource, strFilter, strDialect)

At first glance this might appear all but identical to the first WinRM script we showed you; however, there are some very important differences. For one thing, take a look at the value we assign to the Resource string:
If you don't, the script will fail with the resulting message "... the class name must be '*' (star)," which simply means that you need to make the class name *. In addition, we also need to define a Filter and a Dialect: The Filter should be easy enough to figure out; that's where we put our Windows Management Instrumentation Query Language (WQL) query ("Select * From Win32_Service Where State = 'Running'"). The Dialect, meanwhile, is the query language used when creating the Filter. At the moment there's only one query language allowed: WQL. Nevertheless, the Dialect must be specified or the script will fail with the note that the "filter dialect ... is not supported." Note: interestingly enough, the error message suggests that you remove the Dialect when you call the Enumerate method. That's a recommendation that you shouldn't follow. When doing a filtered query, the Dialect must be specified and must be WQL. Period. The only other change we need to make occurs when we call the Enumerate method. At that point we need to pass along the variables representing the Filter (strFilter) and the Dialect (strDialect) as well as the variable representing the Resource (strResource): Give that a try and see what happens. Now, what about returning only selected properties of a class? For example, suppose you are interested in returning just the Name and DisplayName of all the services running on a computer. What then? Well, in a case like that you can try to manipulate the XML so that only the Name and DisplayName are displayed. That's possible, but definitely a bit tricky. An easier way to do this is to specify just those properties when assigning the Filter: Do that and all you'll get back are the Name and DisplayName of each service, like so: Granted, the formatting is a bit goofy. (What is the deal with that XmlFragment stuff?) All the more reason to tune in next month. 
Wait for It

With any luck, that should be enough to get you started with the wild and wonderful world of WinRM. Of course, we couldn't end a column on WinRM without mentioning the "waiting mortuaries" that were once prevalent throughout Germany. In cities with waiting mortuaries, corpses weren't immediately buried. Instead, they were placed in warm rooms with a number of strings and wires attached to their fingers and toes. The idea, of course, was that the slightest movement would trigger an alarm and summon help. The bodies were kept in these waiting mortuaries until it became evident that these people were truly beyond hope and would never do anything ever again. Now that you mention it, a waiting mortuary is very similar to being assigned to the Scripting Guys team, isn't it? Of course, no one ever came back to life in any of the waiting mortuaries, whereas the people assigned to the Scripting Guys team ...

Dr. Scripto's Scripting Perplexer

In June 2007, the Scripting Guys attended the Tech•Ed 2007 conference in Orlando, Florida. Not content to just attend the conference, we decided to have some fun. Not only that, but we thought everyone else could use some fun, too. With that in mind we came up with Dr. Scripto's Fun Book, a booklet filled with scripting-related puzzles and various other bits of information. We teamed up with TechNet Magazine—meaning we talked them into giving up a tiny corner of their booth in the Expo Hall—and handed out Fun Books to anyone who wandered by. As it turned out, the Fun Book was a pretty popular item (maybe not as popular as the Dr. Scripto bobblehead dolls, but close). The fine and opportunistic people at TechNet Magazine, seeing one more way they could capitalize on the Scripting Guys's success (since the Scripting Guys never seem to be able to capitalize on their own success), asked us to create some puzzles for them.
When Scripting Guy Jean Ross turned her back for just a moment, Scripting Guy Greg Stemp said, "Sure, we'll do that!" And here we are: Dr. Scripto's Scripting Perplexer. Enjoy.

Drop-In Scripting

In this puzzle, all the letters in the top section unscramble to create a script (in VBScript). But don't worry, you don't have to unscramble the whole thing; instead, you simply need to unscramble one column at a time. The letters in each column of the top section fill in the blank spaces in the same column of the bottom section. Here's an example: As you can see, in column 1 we have the letters S, C, and T. Those three letters belong in the grid below in some unknown order. But when all the letters are dropped down in the proper order, the bottom grid—read left-to-right—becomes something logical. Take a look at the solution: You can see that the letters S, C, and T in column 1 move down in the order T, S, and C. These turned out to be the first letter in each word of "The Script Center." The actual puzzle is a little more difficult because, well, it's longer and because the final result is a full script. Good luck!

The Microsoft Scripting Guys work for—well, are employed by—Microsoft. When not playing/coaching/watching baseball (and various other activities) they run the TechNet Script Center. Check it out at.
https://technet.microsoft.com/en-us/library/f55b9cf4-4b6f-4c9d-a9f3-65d1f5960dc3
Painless JavaScript Using Prototype

Prototype is an object oriented JavaScript library (written by Sam Stephenson and friends) that makes JavaScript fun. So it says on the site, anyway. Those of you who are familiar with the open source community's latest and greatest application framework, Rails, may recognise Prototype as it actually forms the backbone of Rails' JavaScript helper. However, Prototype can be used independently of Rails to aid the coding of many JavaScript doodads and Web 2.0 thingy wangles.

First, we'll examine the basic building blocks of Prototype: its $ functions, its additions to the String, Number, Array and Function objects, its form handling capabilities and its DOM functions. Then, we'll move on to look at Prototype's well-known AJAX helpers. Finally, we'll finish with a brief discussion of other interesting projects that are based on it.

I'd like to note that the current stable version of Prototype at time of writing is 1.4.0. I have a feeling that the library will change quite quickly in response to Rails' lightning-quick development cycle, so things will change. The final boring note is that at this time Prototype only supports the newer browsers — as you might expect of a DOM and XMLHttpRequest based library. See the Prototype site for details of browser support.

Getting Started

The latest version of Prototype can be downloaded from the Prototype site. Simply download prototype.js and link it to your pages with a <script> tag:

<script type="text/javascript" src="path/to/prototype.js"></script>

If you're on Rails, you don't need to download Prototype: it's included in the distribution. You can include it into your views by putting this into the <head> of your pages:

<%= javascript_include_tag 'prototype' %>

Now, let's get into it!

Prototype's Little Helpers

One of the really nice things about using Prototype is the deadly simple helper functions that it provides for very common scripting tasks.
The $ function has already been getting some attention. Give it one or more element IDs, and it'll return references to them:

// reference to the element with the ID 'nav'
$("nav")

// an array of element references
$("img1", "img2", "img3")

It's like a souped-up document.getElementById, and it's amazing how much more convenient coding seems when you use it.

Another incredibly useful function is document.getElementsByClassName, which does what it says on the tin: it takes a CSS class name and returns a list of all elements with that class:

// all elements with class 'navlink'
document.getElementsByClassName("navlink")

// all elements with class navlink and inside the element with ID 'nav'
document.getElementsByClassName("navlink", $("nav"))

Also, as this article was being written, Prototype version 1.5.0_rc0 gained the powerful $$ function, which allows you to select elements using standard CSS selector syntax:

// an array of all input elements inside 'commentform'
$$("#commentform input")

// an array of all links with the class 'external'
$$("a.external")

Please note that, at the time of writing, unless you download the very latest version of Prototype from Subversion, this function won't be available to you.

$F takes an ID and returns the value of any form field, for instance, a select box like this:

<select name="country" id="country">
<option selected="selected" value="UK">United Kingdom</option>
<option value="FR">France</option>
...

$F('country') // 'UK'

Making JavaScript Suck Less

Oops, I've stolen another JavaScript library's tag line. JavaScript library developers just can't seem to keep from trying to make JavaScript be like another language. The Mochikit guys want JavaScript to be Python, countless programmers have tried to make JavaScript like Java, and Prototype tries to make it like Ruby. Prototype makes extensions to the core of JavaScript that can (if you choose to use them) have a dramatic effect on your approach to coding JavaScript.
Depending on your background and the way your brain works, this may or may not be of help to you.

OO the Ruby(ish) way: Class.create and Object.extend

The Class.create method allows you to define classes in a more Ruby-like way, although this is purely aesthetic, as it essentially just calls the initialize method you define as the constructor, rather than taking the traditional JavaScript approach of creating objects with constructor functions.

var DOMTable = Class.create();
DOMTable.prototype = {
  initialize : function(el) {
    this.el = el;
  },
  ...
}

However, much more powerful is the stupidly simple but effective Object.extend method. All it does is copy one object's properties and methods to another object, but its uses are many. Here's a quick taster:

// make a (shallow) copy of obj1
var obj2 = Object.extend({}, obj1);

var options = {
  method : "post",
  args : ""
};

// merges in the given options object to the default options object
Object.extend(options, {
  args : "data=454",
  onComplete : function() { alert("done!"); }
});

options.method // "post"
options.args // "data=454"
options.onComplete // function() { alert("done!"); }

It's most commonly used to "mix in" methods from one object with another. For instance, you could create a set of functions that make certain DOM elements sortable:

var Sortable = {
  sortBy : function(func) { ... },
  sortByReversed : function(func) { ... },
  reset : function() { ... }
};

Then, if we wanted to make our DOMTable from above sortable, we could mix in these methods to the DOMTable object:

var myTable = new DOMTable("table-id");
Object.extend(myTable, Sortable);

Now we can call those methods on the table:

// sort the table using the given function
myTable.sortBy(function (itemA, itemB) { ... });

Function Binding

Prototype also adds to the Function object two really useful methods: bind and bindAsEventListener. These are used mainly to bind a function to a particular object so that the this keyword points to that object.
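Object.extend's copy-everything behaviour is easy to picture if you sketch it in plain JavaScript. The following is a hypothetical stand-alone version written for illustration, not Prototype's actual source:

```javascript
// A stand-alone sketch of an Object.extend-style shallow merge:
// copy every property of source onto destination, then return destination.
function extend(destination, source) {
  for (var property in source) {
    destination[property] = source[property];
  }
  return destination;
}

// Merging supplied options over defaults, as in the example above:
var options = extend({ method: "post", args: "" }, { args: "data=454" });
// options.method is "post", options.args is "data=454"
```

Because later sources simply overwrite earlier values, calling it with a fresh `{}` as the destination gives you the shallow-copy behaviour, while calling it on an existing object gives you the mix-in behaviour.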
This is incredibly useful when you're setting event handler functions. Imagine you try something like this:

var myObject = new Object();
myObject.message = "Hello!";
myObject.eventHandler = function() {
  alert(this.message);
}

$("mydiv").onmouseover = myObject.eventHandler;

Traditionally, you'd get an error because, when the event triggers the handler function, this refers to the mydiv element, not myObject, so this.message is undefined. You can solve this problem using the bind method like so:

$("mydiv").onmouseover = myObject.eventHandler.bind(myObject);

Now it all works fine, because the this keyword is bound to myObject. Further to that, bindAsEventListener does the same thing, though it passes the event object through to your function in a cross-browser compatible way, so you no longer need to worry about window.event in IE. Try this:

myObject.eventHandler = function(event) {
  alert(event.srcElement.nodeName);
}

$("mydiv").onmouseover = myObject.eventHandler.bindAsEventListener(myObject);

Now our eventHandler function has access to the event object. Much more detail on these two methods is available at their creator's site.

New String and Number Methods

Prototype has added an enormous number of useful methods to the built in String object. Let's have a quick look at some of the best.

// "backgroundColor"
"background-color".camelize()

camelize turns hyphenated strings into camel case strings that you can use to work with CSS properties.

// "I am a piece of HTML"
"I am a piece of <strong>HTML</strong>".stripTags()

// {a : 10, b: "thing"}
"a=10&b=thing".toQueryParams()

Prototype adds a great method to Number, too. Say goodbye to your for loops!

// alerts "1", "2", "3" ... "50"
(50).times(function(n) {
  alert(n);
});

Here, the times method takes a function that will be called the given number of times, and passes in the current iteration number as an argument. This use of an iterator function is common when using Enumerable, which we'll discuss next.
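To see that there's no magic in these helpers, here are stand-alone sketches of camelize and a times-style iterator in plain JavaScript. These are illustrative versions written for this article, not Prototype's code, and note that this times passes indices starting at zero:

```javascript
// Sketch of String#camelize: upper-case each letter that
// follows a hyphen and drop the hyphen itself.
function camelize(str) {
  return str.replace(/-([a-z])/g, function (match, letter) {
    return letter.toUpperCase();
  });
}

// Sketch of Number#times: call fn once per iteration,
// passing the current (zero-based) index.
function times(n, fn) {
  for (var i = 0; i < n; i++) {
    fn(i);
  }
}

camelize("background-color"); // "backgroundColor"
times(3, function (i) { console.log(i); }); // logs 0, 1, 2
```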
Iterating the Ruby way: Enumerable and Hash

One of the hidden gems of Prototype is the Enumerable mix-in and the Hash object, which have been poached straight out of Ruby. If you're not familiar with Ruby, don't worry. I'll explain it all here.

We'll start with Enumerable. In short, when we add Enumerable to an object using Object.extend, it gives the object in question lots of really useful functions for working with its properties. Enumerable has been added to Array's prototype, so any array has these new methods. Here are a few examples of what you can do with the new "enumerated" arrays:

// alerts "a is at 0" then "b is at 1" then "c is at 2"
["a", "b", "c"].each(function(item, index) {
  alert(item + " is at " + index);
});

// [80, 50]
[1, 80, 3, 50].select(function(item) {
  return (item > 20);
});

select creates a new array that contains only the elements that make the function return true.

// ["A", "B", "C"]
["a", "b", "c"].invoke("toUpperCase");

invoke calls the specified method of each element of the array and returns the resulting array.

// ["cat", "rat"]
["cat", "dog", "rat", "mouse"].grep(/at/);

grep returns all elements that match the given regular expression.

Enumerable offers a large number of incredibly powerful functions that can make many tedious DOM scripting tasks a breeze. I strongly suggest you have a good look at the Enumerable methods in Sergio Pereira's extremely useful developer notes.

There's a small problem here, though. In JavaScript, you can come across many types of objects that, to all intents and purposes, act like arrays but aren't Array objects. Objects such as DOM NodeLists and function arguments won't have Enumerable available to them automatically.
This is easy to rectify, though; to add the Enumerable functions to any array-like object, use $A:

// add Enumerable to childNodes
var children = $A($("mydiv").childNodes);

// sets class="highlighted" for all child nodes of "mydiv"
children.each(function(child) {
  child.setAttribute("class", "highlighted");
});

To create a hash, call the magic function $H on any object. This turns all the properties of the object into a set of key-value pairs with Enumerable mixed in. Let's take hashes for a spin:

// create a hash by feeding an object to $H
var contact = $H({
  name : "Dan Webb",
  email : "dan@danwebb.net",
  address : "None of your Business, London",
  postcode : "a111111"
});

// ["name", "email", "address", "postcode"]
contact.keys()

// ["Dan Webb", "dan@danwebb.net", "None of your Business, London", "a111111"]
contact.values()

// "name=Dan Webb&email=..."
contact.toQueryString()

Hash extends Enumerable as well, so all those useful methods are also available...

// alerts "name contains Dan Webb" and so on
contact.each(function(item) {
  alert(item.key + " contains " + item.value);
});

At first, if you're not a Rubyist, Enumerable and Hash may seem a bit of a hassle, but I can assure you, once you start using them, you'll wonder why you ever bothered getting RSI writing all those for loops! When you use one or more of them together, you'll realise the massive power of these new methods. You can read about Enumerable and Hash in more detail at Encyte Media.
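If you want a feel for what these iterators do under the hood, here are simplified plain-JavaScript equivalents of select, invoke and grep. These are sketches for illustration only; Prototype's real versions are methods built on each and accept extra arguments:

```javascript
// select: keep only the items for which the iterator returns true.
function select(array, iterator) {
  var results = [];
  for (var i = 0; i < array.length; i++) {
    if (iterator(array[i], i)) results.push(array[i]);
  }
  return results;
}

// invoke: call the named (zero-argument) method on every item.
function invoke(array, methodName) {
  var results = [];
  for (var i = 0; i < array.length; i++) {
    results.push(array[i][methodName]());
  }
  return results;
}

// grep: keep the items that match a regular expression.
function grep(array, pattern) {
  return select(array, function (item) { return pattern.test(item); });
}

select([1, 80, 3, 50], function (n) { return n > 20; }); // [80, 50]
invoke(["a", "b", "c"], "toUpperCase");                  // ["A", "B", "C"]
grep(["cat", "dog", "rat", "mouse"], /at/);              // ["cat", "rat"]
```

Note how grep is just select with a regular-expression test as the iterator: most of Enumerable falls out of one core iteration primitive in exactly this way.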
The Event object helps to provide what, to many, is the holy grail of JavaScript: simple, cross-browser event handling:

function eventHandlerFunction(e) {
  // the element that triggered the event
  var element = Event.element(e);

  // gets the mouse position
  var mouseX = Event.pointerX(e),
      mouseY = Event.pointerY(e);

  // stop default behaviour and event propagation
  Event.stop(e);
}

// register eventHandlerFunction to the onclick of myObject
Event.observe(myObject, "click", eventHandlerFunction, false);

// removes the event handler
Event.stopObserving(myObject, "click", eventHandlerFunction, false);

In a rather pleasant way, Prototype tries to avoid those pesky memory leaks in IE by automatically removing every observer when the page unloads. In my opinion, though, this is a rather under-developed event handling solution at the moment, so it might be worth considering using something a bit richer, like Dean Edwards's addEvent, for the time being.

Handling Forms

The Form and Field objects provide a number of simple but convenient functions for working with forms and input fields, as well as code that supports Prototype's AJAX implementation.
The Form Object

Generally, methods of the Form object take either an ID or an object reference to an element:

// disables the form making all elements read only
Form.disable(form)

// enables a form again
Form.enable(form)

// clears values from all form elements
Form.reset(form)

// returns an array of all form fields in the form
Form.getElements(form)

// focuses on the first form field
Form.focusFirstElement(form)

The Field Object

The Field object deals with individual form elements, and its methods typically take an ID or an object reference to the element, in a similar way to the Form object:

// clears the field, will accept any number of arguments
Field.clear(field)

// returns true if all given fields have a value
Field.present(field, anotherField)

// gives focus to the field
Field.focus(field)

// selects any text in the field
Field.select(field)

Form Serialisation

In Prototype terms, serializing a form means reading all the form's elements and turning them into a URL-encoded string (nearly) identical to the one that would be sent if you submitted the form. For example, consider this form:

<form id="search" action="search.php" method="post">
<input type="text" name="query" value="thing" />
<select name="field">
<option value="artistname">Artist Name</option>
<option value="title" selected="selected">Title</option>
</select>
<input type="submit" name="submit" value="Search" />
</form>

// query=thing&field=title&submit=Search
Form.serialize($("search"))

Notice that Form.serialize cleverly smoothes over the differences between the ways in which different form elements are accessed, so that inputs, selects, checkboxes and radio buttons are handled properly. Form.serialize is useful for several tasks, but comes into its own when we're working with AJAX, as we'll see shortly. Form.serialize exhibits some strange behaviour that's worth mentioning here.
You'll remember that I said the URL-encoded string that Form.serialize produces is nearly identical to the one that would be sent if you submitted the form. Well, it's "nearly identical" because Form.serialize doesn't deal with submit button or image inputs properly. It includes all submit buttons in the string, regardless of whether or not they've been pressed, and completely ignores image and button inputs. As long as you're aware of this, you can code around it.

Form Observers

Form.Observer and Form.Element.Observer allow you to watch a whole form (or a single field) for changes by polling it at a set interval, while Form.EventObserver and Form.Element.EventObserver do the same job by hooking into each field's change events. You can use them like this:

new Form.EventObserver($("myform"), myCallBackFunction);
new Form.Element.EventObserver($("myfield"), myCallbackFunction);

If all the fields in the form you're observing support an event handler, this is a much more efficient way to observe the form. However, if you want to watch for changes in elements that don't support these events, use the periodic observers.

Working the DOM

Prototype has 4 objects (Element, Insertion, Observer, and Position) that allow various forms of DOM manipulation and smooth over many of the browser differences that make dealing with the DOM so screen-smashingly infuriating. Instead of throwing your computer out the window, have a look through this section.

The Element Object

The Element object works in the way you've probably come to expect by this point: most of Element's methods simply take an ID or an object reference to the element you want to manipulate.
Here's a peek at some of the most useful methods:

// Hides an element
Element.hide(element)

// Shows an element
Element.show(element)

// Adds a CSS class to the element
Element.addClassName(element, "cssClassName")

// Removes a CSS class from the element
Element.removeClassName(element, "cssClassName")

// Returns true if element has the CSS class
Element.hasClassName(element, "cssClassName")

// {width: 394, height: 20}
Element.getDimensions(element)

// replaces the innerHTML of element with newHtml
Element.update(element, newHtml)

See the full list at Sergio Pereira's site.

The Insertion Object

I know what you're thinking: this sounds a bit weird, right? Well, the Insertion object adds chunks of HTML in and around an element. There are 4 types of insertion: Before, After, Top and Bottom. Here's how you'd add some HTML before an element with the ID "myelement":

new Insertion.Before("myelement", "<p>I'm before!</p>");

This diagram shows where each type of Insertion will drop your HTML content in relation to the given element.

The Position Object

The Position object offers a load of methods that can tell you about a given location on the screen, and provide information about that location relative to other elements, in a cross-browser compatible way. This should take much of the fiddliness out of writing animations, effects and drag-and-drop code. Have a look at the Position reference for more details.

Get your Web 2.0 On

"Finally!" you're thinking, "He's got on to what we really want to know about." Yes, I've left it to the end to get into Prototype's AJAX helpers, because they're built on top of all the other stuff we've been going through, and it helps to understand Prototype's form serialization, observers and insertions when we talk about AJAX. AJAX, in case you've been buried in a very deep hole for the past couple of years, refers to using the browser's XMLHttpRequest object (or equivalent) to communicate with the server without reloading the page.
Prototype smoothes over most of the detail, but it's still good to get a bit of background on XMLHttpRequest, which you'll find in this article by Cameron Adams. So, now you're all pumped to get some Web 2.0 action, let's look into a really simple AJAX request:

new Ajax.Request("hello.php", {
  onSuccess : function(resp) {
    alert("The response from the server is: " + resp.responseText);
  },
  onFailure : function(resp) {
    alert("Oops, there's been an error.");
  },
  parameters : "name=Fred"
});

The Ajax.Request constructor takes a URL and an options object. In this case, we're sending a parameter (name) to hello.php, and alerting its response (or alerting an error if it doesn't work). It's worth taking the time to get familiar with the options that are available and their defaults.

Prototype adds a custom HTTP header to all its AJAX requests so that your server application can detect that it's an AJAX call, rather than a normal call. The header is:

X-Requested-With: XMLHttpRequest

Here's an example PHP function used to detect an AJAX call:

function isAjax() {
  return isset($_SERVER['HTTP_X_REQUESTED_WITH']) &&
    $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest';
}

Using this approach, you can write AJAX applications that work even if the user is using an old browser or has JavaScript disabled, but that's a whole other article...

Using Form.serialize to Pass Data to Ajax.Request

As we've seen above, the parameters option is used to pass a URL-encoded string of variables.
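That parameters string is plain URL encoding: name=value pairs joined with ampersands. As a quick illustration (a stand-alone sketch of just the encoding step, starting from a plain object rather than a live form), building one looks like this:

```javascript
// Encode an object of field names and values as a query string,
// e.g. { query: "thing", field: "title" } -> "query=thing&field=title".
function toQueryString(fields) {
  var pairs = [];
  for (var name in fields) {
    pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(fields[name]));
  }
  return pairs.join("&");
}

toQueryString({ query: "thing", field: "title", submit: "Search" });
// "query=thing&field=title&submit=Search"
```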
If the data you need to send is set by a form, as it is with most AJAX applications, you can simply use Form.serialize to generate a URL-encoded string from all of your form fields and pass that into the parameters option like so:

function addComment(e) {
  // submit the form using Ajax
  new Ajax.Request("comment.php", {
    parameters : Form.serialize(this),
    onSuccess : updateComment
  });
  Event.stop(e);
}

Event.observe($("commentform"), "submit", addComment, false);

Writing AJAX Event Handlers

In the example above, onSuccess and onFailure are two examples of AJAX event handlers. Event handler functions given in the options object of an Ajax.Request call are given one argument, which is the XMLHttpRequest object for that AJAX call. I normally call this argument response or resp. You can use this argument to get the response from the server like so:

function successHandler(resp, jsonObj) {
  // returns the response from the server as raw text
  resp.responseText

  // returns the response as an XML document that you can navigate with the DOM
  resp.responseXML

  // alert some property of the returned JSON
  alert(jsonObj.name);
}

Remember, though, that resp is just the XMLHttpRequest object, so all of those properties are available. You can send data as JSON from your server by adding the JSON data to the X-JSON response header. This will then automatically be evaluated by Prototype, and sent as the second argument.

The Ajax.Updater and Ajax.PeriodicalUpdater

Many AJAX operations simply involve updating some HTML on your page with HTML returned from the server. The Ajax.Updater object wraps Ajax.Request and simplifies this common use case for us. Here's a simple example:

new Ajax.Updater("mydiv", "hello.php", {
  parameters : "name=Fred",
  onFailure : function(resp) {
    alert("Oops, there's been an error.");
  }
});

The above snippet would simply replace the contents of the element whose ID was "mydiv" with whatever content was returned from the server.
Ajax.PeriodicalUpdater is similar, but makes the Ajax call repeatedly at an interval that you set:

new Ajax.PeriodicalUpdater("mydiv", "hello.php", {
  // initial number of seconds interval between calls
  frequency : 1,
  decay : 2
});

The decay option allows you to give your server a bit of a break if it's returning a lot of identical responses. Essentially, every time PeriodicalUpdater makes a request and receives a response identical to the previous one, it multiplies the current interval by the decay value; as soon as the response changes, the interval drops back to the base frequency.

Working with Responders

AJAX responders allow you to register global event handlers that are triggered for each and every AJAX request that happens on the page. They're very useful for managing applications with large amounts of AJAX activity. For instance, you can use them to show a standard loading animation whenever an AJAX request is happening:

Ajax.Responders.register({
  onCreate : showLoader,
  onComplete : hideLoader
});

If you are looking for some working examples of AJAX with Prototype, try this article.

Where to Next?

As we've seen through this article, Prototype is not only useful on its own, but provides an excellent starting point for writing other, more specialized libraries. That's exactly what a growing number of people have been doing.

Script.aculo.us and Moo.fx

Thomas Fuchs' script.aculo.us is getting a lot of attention at the moment for its whiz-bang effects and clever UI widgets. It was originally part of the core Prototype library, but soon grew out of control and broke free of its parent. Using Prototype as a basis, script.aculo.us specialises in providing a rich user experience through animated effects, simple to use drag and drop functionality, and powerful UI components. There's a nice Wiki on the site, with a rapidly growing store of quality documentation to help you get started, and examples pages to get your creative juices flowing. As script.aculo.us is getting rather large in file size, it's been split into several files, so your users won't have to download the whole library just so you can use a few slide effects.
However, even the individual files are pretty bulky. If you're after some simple effects, I'd really recommend Moo.fx. It's only 3k in size, and gives you some toggling slide and fade effects that, often, are all that's required in a simple AJAX application. It's also a great starting point if you want to write your own effects. Have a look at the code to see a great example of programming using Prototype's Object.extend to provide simple inheritance. Valerio is obviously very focused on keeping his script file sizes down, so he even has a 'lite' version of Prototype (chopped to around 10k), and a lean version of Ajax.Request, which I find myself using more often than the full Prototype library. It's definitely worth a look.

Behaviour

Behaviour is a great addition to your DOM scripting toolkit that allows you to use CSS selectors to add behaviour to your documents. Here's a sample of what it allows you to do:

Behaviour.register({
  "#comment_form form" : function(el) {
    // catch form submission and complete with XHR if possible
    el.onsubmit = function() {
      Form.disable(this);
      new Ajax.Request(this.action, {
        method: this.method,
        postBody: Form.serialize(this),
        onComplete: updateComments
      });
      return false;
    };
  }
});

Read more about this over at the Behaviour site. It's now possible to achieve a similar type of thing using the brand new $$ function discussed earlier, so this may eventually become redundant.

jQuery

jQuery is a compact little library that plays well with Prototype and creates a superhero version of the $ function that can take XPath and CSS 3 selectors. It couples that capability with an extremely clever method chaining approach that makes for very concise code. Watch out for this one.

Wrap up

Prototype is a powerful piece of kit when it comes to adding some DOM scripted spice to your web applications. Unfortunately, its capabilities and tools have grown at a much faster rate than its documentation!
We've managed to cover every corner of Prototype in this article, but not every method. I hope, though, that you now know where to go to get the extra information you need. If you want to know more, try Ronnie Roller's prototypedoc.com, a resource that keeps up with the growing body of Prototype documentation. Have fun!
http://www.sitepoint.com/painless-javascript-prototype/
CC-MAIN-2014-23
refinedweb
4,057
56.35
import "github.com/VictoriaMetrics/fastcache"

Package fastcache implements fast in-memory cache. The package has been extracted from VictoriaMetrics sources.

Package files: bigcache.go fastcache.go file.go malloc_mmap.go

type BigStats struct {
    // GetBigCalls is the number of GetBig calls.
    GetBigCalls uint64

    // SetBigCalls is the number of SetBig calls.
    SetBigCalls uint64

    // TooBigKeyErrors is the number of calls to SetBig with too big key.
    TooBigKeyErrors uint64

    // InvalidMetavalueErrors is the number of calls to GetBig resulting
    // to invalid metavalue.
    InvalidMetavalueErrors uint64

    // InvalidValueLenErrors is the number of calls to GetBig resulting
    // to a chunk with invalid length.
    InvalidValueLenErrors uint64

    // InvalidValueHashErrors is the number of calls to GetBig resulting
    // to a chunk with invalid hash value.
    InvalidValueHashErrors uint64
}

BigStats contains stats for GetBig/SetBig methods.

Cache is a fast thread-safe inmemory cache optimized for a big number of entries. It has much lower impact on GC comparing to a simple `map[string][]byte`. Use New or LoadFromFile* for creating a new cache instance. Concurrent goroutines may call any Cache methods on the same cache instance. Call Reset when the cache is no longer needed. This reclaims the allocated memory.

LoadFromFile loads cache data from the given filePath. See SaveToFile* for saving cache data to file.

LoadFromFileOrNew tries loading cache data from the given filePath. The function falls back to creating a new cache with the given maxBytes capacity if an error occurs during loading the cache from file.

New returns a new cache with the given maxBytes capacity in bytes. maxBytes must be smaller than the available RAM size for the app, since the cache holds data in memory. If maxBytes is less than 32MB, then the minimum cache capacity is 32MB.

Del deletes the value for the given k from the cache. k contents may be modified after returning from Del.

Get appends the value by the key k to dst and returns the result.
Get allocates a new byte slice for the returned value if dst is nil. Get returns only values stored in c via Set. k contents may be modified after returning from Get.

GetBig searches for the value for the given k, appends it to dst and returns the result. GetBig returns only values stored via SetBig. It doesn't work with values stored via other methods. k contents may be modified after returning from GetBig.

Has returns true if an entry for the given key k exists in the cache.

HasGet works identically to Get, but also returns whether the given key exists in the cache. This method makes it possible to differentiate between a stored nil/empty value versus a non-existing value.

Reset removes all the items from the cache.

SaveToFile atomically saves cache data to the given filePath using a single CPU core. SaveToFile may be called concurrently with other operations on the cache. The saved data may be loaded with LoadFromFile*. See also SaveToFileConcurrent for faster saving to file.

SaveToFileConcurrent saves cache data to the given filePath using concurrency CPU cores. SaveToFileConcurrent may be called concurrently with other operations on the cache. The saved data may be loaded with LoadFromFile*. See also SaveToFile.

Set stores (k, v) in the cache. Get must be used for reading the stored entry. The stored entry may be evicted at any time either due to cache overflow or due to unlikely hash collision. Pass a higher maxBytes value to New if the added items disappear frequently. (k, v) entries with summary size exceeding 64KB aren't stored in the cache. SetBig can be used for storing entries exceeding 64KB. k and v contents may be modified after returning from Set.

SetBig sets (k, v) to c where len(v) may exceed 64KB. GetBig must be used for reading stored values. The stored entry may be evicted at any time either due to cache overflow or due to unlikely hash collision. Pass a higher maxBytes value to New if the added items disappear frequently.
It is safe to store entries smaller than 64KB with SetBig. k and v contents may be modified after returning from SetBig.

UpdateStats adds cache stats to s. Call s.Reset before calling UpdateStats if s is re-used.

type Stats struct {
    // GetCalls is the number of Get calls.
    GetCalls uint64

    // SetCalls is the number of Set calls.
    SetCalls uint64

    // Misses is the number of cache misses.
    Misses uint64

    // Collisions is the number of cache collisions.
    //
    // Usually the number of collisions must be close to zero.
    // High number of collisions suggest something wrong with cache.
    Collisions uint64

    // Corruptions is the number of detected corruptions of the cache.
    //
    // Corruptions may occur when corrupted cache is loaded from file.
    Corruptions uint64

    // EntriesCount is the current number of entries in the cache.
    EntriesCount uint64

    // BytesSize is the current size of the cache in bytes.
    BytesSize uint64

    // BigStats contains stats for GetBig/SetBig methods.
    BigStats
}

Stats represents cache stats. Use Cache.UpdateStats for obtaining fresh stats from the cache.

Reset resets s, so it may be re-used again in Cache.UpdateStats.

Package fastcache imports 14 packages and is imported by 62 packages. Updated 2020-04-03.
https://godoc.org/github.com/VictoriaMetrics/fastcache
CC-MAIN-2020-34
refinedweb
841
60.41
First Approach

Intuition

Since a power of four is a number obtained by multiplying 4 by itself some number of times, we can try the reverse operation (dividing by 4.0).

Algorithm

We will start with the given number and loop while it's greater than 1. Then, if it's exactly 1, this number is a power of four. Otherwise it's not a power of four.

Ruby

def is_power_of_four(num)
  while num > 1
    num /= 4.0
  end
  num == 1.0
end

Notice that we need to divide by 4.0, and not just 4, for this to work.

Complexity Analysis

Since we keep dividing by 4, the time complexity should be O(log4 n).

Second Approach

Intuition

We can use a logarithm to check if the number is a power of another.

Algorithm

We will use our programming language's Math module to compute the logarithm and then check if it's a whole number (no decimal part).

Ruby

def is_power_of_four(num)
  return false if num <= 0
  log = Math.log(num, 4)
  log == log.round
end

Complexity Analysis

Same as our first approach, but since the logarithm is implemented by the programming language's math library, it will probably perform better.
https://discuss.leetcode.com/topic/101481/solution-by-matugm
CC-MAIN-2018-22
refinedweb
201
66.44
More fixes. Since I used version 0.36.0.10 for some interim fixes, this rc is 0.36.0.11.

Changes:

Binaries available here: ikvmbin-0.36.0.11.zip
Sources (+ binaries): ikvm-0.36.0.11.zip

More fixes and a couple of optimizations. The bytecode optimizations and the removal of assert statements resulted in IKVM.OpenJDK.ClassLibrary.dll actually becoming smaller for once:

The size difference is about evenly split between the bytecode optimizations and the assert statement removal. The ikvmc option to remove assert statements is a bit of a cheat, but it turns out that in at least one case (java.util.BitSet) the assertions significantly affect performance (even when they are disabled) on .NET. This is the result of the CLR JIT not optimizing them away, whereas HotSpot completely removes them when assertions aren't enabled. Given that this was affecting a real world scenario for an ikvm user, I decided to add this option and compile the core class library with it. The option will only remove assert statements that it recognizes (i.e. the example code pattern mentioned in appendix IV of the assert spec).
Binaries available here: ikvmbin-0.36.0.9.zip
Sources (+ binaries): ikvm-0.36.0.9.zip

More changes triggered by the changeover to .NET 2.0. I also did some C# 3.0 work, which means that you can now do this:

using System;
using System.Collections;
using ikvm.extensions;
using java.util;

public class Program
{
  static void Main(string[] args)
  {
    try
    {
      var map = new HashMap {
        { "foo", args.getClass() },
        { "bar", 2.getClass() }
      };
      foreach (DictionaryEntry de in map)
        Console.WriteLine("{0} = {1}", de.Key, de.Value);
    }
    catch (System.Exception x)
    {
      x.printStackTrace();
    }
  }
}

BTW, to enable defining extension methods without taking a dependency on System.Core.dll, I've defined my own copy of System.Runtime.CompilerServices.ExtensionAttribute in IKVM.Runtime.dll (it's only public during the first compilation pass, so it does not interfere with the real one in System.Core.dll if your project references both IKVM.Runtime.dll and System.Core.dll).

WARNING: THIS IS A DEVELOPMENT SNAPSHOT, NOT AN OFFICIAL RELEASE. Development snapshots are intended for evaluating and keeping track of where the project is going, not for production usage. The binaries have not been extensively tested and are not strong named. This version does not currently work with Mono.

Binaries available here: ikvmbin-0.37.2970.zip

I screwed up the previous release candidate build by building it on .NET 2.0. So here's a new release candidate that's built on .NET 1.1 again (and I threw in a couple of small fixes as well).
AtomicReferenceFieldUpdater

The Microbenchmark

    import java.util.concurrent.atomic.*;

    class Test {
        volatile Object field;

        public static void main(String[] args) {
            AtomicReferenceFieldUpdater upd = AtomicReferenceFieldUpdater.newUpdater(Test.class, Object.class, "field");
            Test obj = new Test();
            for (int j = 0; j < 5; j++) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10000000; i++)
                    upd.compareAndSet(obj, null, null);
                long end = System.currentTimeMillis();
                System.out.println(end - start);
            }
        }
    }

The Results

The Differences

The first thing that jumps out is that the IKVM 0.34 results show that .NET 2.0 reflection is much slower than .NET 1.1 reflection. On IKVM 0.36 the reflection implementation changed to take advantage of DynamicMethod when running on .NET 2.0, so there we see a big improvement in performance when running on .NET 2.0. IKVM 0.37 has the new AtomicReferenceFieldUpdater optimization that no longer uses reflection (if it can figure out at compile time what to do); this again yields a big performance improvement. Finally, HotSpot manages to beat IKVM by a factor of two. There is no difference between HotSpot client and server modes for this benchmark (on JDK 1.6).

The Compiler

Let's look at some C# pseudo code that shows what ikvmc 0.37 generates for the above benchmark:

    using java.util.concurrent.atomic;
    using System.Threading;

    class Test {
        volatile object field;

        private sealed class __ARFU_fieldLjava/lang/Object; : AtomicReferenceFieldUpdater {
            public override bool compareAndSet(object obj, object expect, object update) {
                return expect == Interlocked.CompareExchange(ref ((Test)obj).field, (object)update, (object)expect);
            }
            // ...other methods omitted...
        }

        static void main(string[] args) {
            AtomicReferenceFieldUpdater udp = new __ARFU_fieldLjava/lang/Object;();
            // ...rest of method omitted...
    } }

The bytecode compiler only does this optimization if the arguments to newUpdater are constants and match up with a volatile instance reference field in the current class. The reason this optimization only first showed up in IKVM 0.37 is that it requires the generic version of Interlocked.CompareExchange. In this particular example the non-generic version would have worked, but in the real world nearly all uses of AtomicReferenceFieldUpdater are on fields that have a more specific type than Object.

The Assembly

So why is HotSpot twice as fast? I modified the test slightly to make the generated assembly code easier to read by making it an infinite loop. Here's the x64 code for the loop:

    00000000028C26A7 mov r10,qword ptr [r8+20h]
    00000000028C26AB test r10,r10
    00000000028C26AE jne 00000000028C273B
    00000000028C26B4 mov r10,qword ptr [r8+28h]
    00000000028C26B8 mov r11,r9
    00000000028C26BB add r11,r10
    00000000028C26BE xor eax,eax
    00000000028C26C0 xor r10d,r10d
    00000000028C26C3 lock cmpxchg qword ptr [r11],r10
    00000000028C26C8 sete r12b
    00000000028C26CC movzx r12d,r12b
    00000000028C26E6 test dword ptr [160000h],eax
    00000000028C26EC jmp 00000000028C2690

HotSpot did its thing and was able to inline the virtual compareAndSet method. I'm pretty sure that HotSpot doesn't have special support for AtomicReferenceFieldUpdater; this is simply the normal HotSpot devirtualization optimization at work. The lock cmpxchg instruction is the result of HotSpot having intrinsic support for sun.misc.Unsafe.compareAndSwapObject.

Let's go over the assembly instructions in detail:

This looks like a HotSpot virtual method inline guard. It's checking to make sure that the object is of the expected type (if it isn't, the inlined virtual method may not be correct anymore).

    00000000028C26A7 mov r10,qword ptr [r8+20h]
    00000000028C26AB test r10,r10
    00000000028C26AE jne 00000000028C273B

I'm not sure.
Some field in the AtomicReferenceFieldUpdater object is tested for null.

    00000000028C26B4 mov r10,qword ptr [r8+28h]

The offset to the field is loaded from the AtomicReferenceFieldUpdater object.

    00000000028C26B8 mov r11,r9

The passed in object reference is moved from r9 to r11.

    00000000028C26BB add r11,r10

Add the field offset to the object reference. We now have the address of the memory location we want to update in r11.

    00000000028C26BE xor eax,eax

Clear rax to represent the passed in null value of the expect argument. I'm not sure why the disassembler shows the register as eax, but this instruction clears the full 64 bit rax register.

    00000000028C26C0 xor r10d,r10d

r10 is cleared and represents the passed in null value of the update argument.

    00000000028C26C3 lock cmpxchg qword ptr [r11],r10

The actual interlocked compare and exchange instruction. The qword at memory location r11 is compared with rax and, if it matches, r10 is written to it. Since I'm on a dual core machine, the lock prefix is applied. Locking the bus is expensive, so HotSpot omits it when running on a single core machine.

    00000000028C26C8 sete r12b
    00000000028C26CC movzx r12d,r12b

The cmpxchg instruction sets the zero flag if it was successful. These two instructions copy the zero flag into the r12 register (it is set to 0 or 1 to represent either false or true). Since the result isn't actually used in this case, this could have been optimized away.

This is a little interesting. It takes the address of the field that was just (potentially) updated, shifts it to the right by 9 bits and uses that value to index a static table and clear the corresponding byte. This is a GC write barrier. The GC consults the table (known as a card table) to know what objects in older generations it needs to scan when doing a GC of a younger generation.
    00000000028C26E6 test dword ptr [160000h],eax

This seemingly useless test is part of a mechanism used by the VM to suspend the thread at this instruction (a safepoint). When the VM wants to suspend all threads (for a GC) it unmaps the safepoint polling memory page (in this case at 0x160000) and waits for all threads to suspend. Each thread running compiled Java code will eventually run this instruction and cause a page fault; inside the page fault handler it is detected that a safepoint thread suspend is requested and the thread calls the VM to suspend itself.

    00000000028C26EC jmp 00000000028C2690

Branch to the top and start over again.

The Conclusion

The .NET Framework JIT doesn't inline virtual methods and Interlocked.CompareExchange is not a JIT intrinsic, so there the story is pretty straightforward. Each loop iteration calls Interlocked.CompareExchange which in turn calls the GC write barrier function. This is why HotSpot is able to beat IKVM 0.37 by a factor of two.

Of course, when you're coding in C# you can write the microbenchmark to call Interlocked.CompareExchange directly:

    using System;
    using System.Threading;

    class Test {
        volatile object field;

        static void Main(string[] args) {
            Test obj = new Test();
            for (int j = 0; j < 5; j++) {
                int start = Environment.TickCount;
                for (int i = 0; i < 10000000; i++)
                    Interlocked.CompareExchange(ref obj.field, null, null);
                int end = Environment.TickCount;
                Console.WriteLine(end - start);
            }
        }
    }

This runs in 265 milliseconds, which goes to show that in this case all the fancy footwork that HotSpot does can almost be matched simply by having by-ref argument passing in your language. Of course, the CLR JIT isn't perfect. When you change the field type to string, the running time increases to 436 milliseconds because the invocation of a generic method goes through a stub that makes sure that the method instantiation exists.
Here it would probably pay to teach the JIT about the generic methods in System.Threading.Interlocked. One more bug fix. Binaries available here: ikvmbin-0.36.0.7.zip. Sources (+ binaries): ikvm-0.36.0.7.zip. The external sources are the same as the previous rc. Suppose you have a Type that you know implements IEnumerable: how do you get the GetEnumerator method? Both the developer(s) at Microsoft and the Mono developer(s) that wrote System.Xml.Serialization managed to find the same wrong answer to this question. Here's the code from Mono, but Microsoft does something very similar:

    MethodInfo met = type.GetMethod ("GetEnumerator", Type.EmptyTypes);
    if (met == null) {
        // get private implementation
        met = type.GetMethod ("System.Collections.IEnumerable.GetEnumerator",
                              BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance,
                              null, Type.EmptyTypes, null);
    }

The reason this is wrong is that the name of the private method is an implementation detail of the compiler that was used to compile the code. C# happens to name the private method this way, but other languages may not. For example, try the following VB webservice:

    <%@ WebService Language="VB" Class="Service1" %>
    Imports System.Collections
    Imports System.Web.Services

    Public Class Service1
        Inherits WebService

        <WebMethod()> Public Function HelloWorld() As Frob
            Return New Frob()
        End Function
    End Class

    Public Class Frob
        Implements IEnumerable

        ' Note that this method is private
        Private Function GetEnumerator() As IEnumerator Implements IEnumerable.GetEnumerator
            Return "frob".GetEnumerator()
        End Function

        Public Sub Add(ByVal obj As Object)
            Throw New NotSupportedException()
        End Sub
    End Class

If you run this webservice on .NET, you'll get an exception inside System.Xml.Serialization.TypeScope.GetTypeDesc(), because it expects to find a public GetEnumerator method or a private System.Collections.IEnumerable.GetEnumerator method.
However, VB allows you to pick the method name (in this case simply GetEnumerator). If you make the GetEnumerator method public, the webservice will work.

The Right Way

Here's the right way to get the method:

    MethodInfo getEnumerator = typeof(IEnumerable).GetMethod("GetEnumerator", Type.EmptyTypes);
    InterfaceMapping map = type.GetInterfaceMap(typeof(IEnumerable));
    int index = Array.IndexOf(map.InterfaceMethods, getEnumerator);
    MethodInfo meth = map.TargetMethods[index];

Meta Question

Of course, the real question remains unanswered. Why do we need the MethodInfo in the first place? Wouldn't Xml serialization be better off simply using the IEnumerable interface? As I've said when I released 0.36, this is a version that will be maintained for a while because it is the last version to support .NET 1.1. This means that I will strive to release updates relatively soon after bugs are reported (in the supported areas). This is the first of such update releases. Binaries available here: ikvmbin-0.36.0.6.zip. Sources (+ binaries): ikvm-0.36.0.6.zip. The external sources are the same as the ones with the first 0.36 release + the java.awt.image.Raster patch referenced above.
http://weblog.ikvm.net/default.aspx?date=2008-04-07
We would like to suggest reading this series of articles dedicated to recommendations on writing high-quality code, using the examples of errors found in the Chromium project. This is the first part, which is devoted to the memset function. We must do something about the memset function in C++ programs! Rather, it is clear what we must do at once: we have to stop using it. I wrote the article "The most dangerous function in the C/C++ world" at the time. I think it is easy to guess that this article will be exactly about memset. However, I will not waste words, so I am going to demonstrate the danger of this function once again with examples. The code of the Chromium project and the libraries used in it are of very high quality. Google developers pay much attention to tests and the use of various tools for detecting defects. For instance, Google has developed such tools as AddressSanitizer, ThreadSanitizer and MemorySanitizer. As a result, there are few errors related to the memset function, but sadly, they are still present. Despite the errors, it is a very high-quality project! Let's see what I noticed while studying the report issued by PVS-Studio. As I wrote in the introductory article, I looked through the report quite quickly, so there may be other, unnoticed errors. However, the found defects will be enough for us to discuss the memset function. The first type of errors is related to the incorrect calculation of the buffer size. Or, in other words, the problem is that there is confusion between the size of the array in bytes and the number of elements in the array. Such errors may be classified as CWE-682: Incorrect Calculation. The first example of the error is taken directly from the Chromium project code. Note that the arrays text and unmodified_text consist of unicode characters.
    #if defined(WIN32)
    typedef wchar_t WebUChar;
    #else
    typedef unsigned short WebUChar;
    #endif

    static const size_t kTextLengthCap = 4;

    class WebKeyboardEvent : public WebInputEvent {
      ....
      WebUChar text[kTextLengthCap];
      WebUChar unmodified_text[kTextLengthCap];
      ....
    };

As a result, only half of the elements in these arrays are filled with zeros:

    WebKeyboardEvent* BuildCharEvent(const InputEventData& event) {
      WebKeyboardEvent* key_event = new WebKeyboardEvent(....);
      ....
      memset(key_event->text, 0, text_length_cap);
      memset(key_event->unmodified_text, 0, text_length_cap);
      ....
    }

PVS-Studio warnings:

The second example of the error is taken from the WebRTC library used in Chromium. The error is similar to the previous bug: it is not taken into account that the elements in the array are of int64_t type.

    class VCMRttFilter {
      ....
      enum { kMaxDriftJumpCount = 5 };
      ....
      int64_t _jumpBuf[kMaxDriftJumpCount];
      int64_t _driftBuf[kMaxDriftJumpCount];
      ....
    };

    void VCMRttFilter::Reset() {
      _gotNonZeroUpdate = false;
      _avgRtt = 0;
      _varRtt = 0;
      _maxRtt = 0;
      _filtFactCount = 1;
      _jumpCount = 0;
      _driftCount = 0;
      memset(_jumpBuf, 0, kMaxDriftJumpCount);
      memset(_driftBuf, 0, kMaxDriftJumpCount);
    }

Here only the first five bytes of each buffer are set to zero, which is not even one complete int64_t element.

PVS-Studio warning: V512 CWE-682 A call of the 'memset' function will lead to underflow of the buffer '_jumpBuf'. rtt_filter.cc 52

Recommendation

To avoid such errors, do not use memset any more. You may be really careful, but sooner or later such errors will creep into your project anyway. In Chromium the situation is quite favorable. Nevertheless, in other projects it is a very common problem (proof). Yes, it is impossible to avoid the use of memset in C code. However, if we are talking about C++, let's forget about this function. Do not use the memset function in C++ code. Do not use it, end of story. How to replace the memset call? Firstly, you can use the std::fill function.
In this case, filling an array will look like this:

    fill(begin(key_event->text), end(key_event->text), 0);

Secondly, you often do not need to call a special function at all. Typically, the memset function serves to initialize local arrays and structures. Classic example:

    HDHITTESTINFO hhti;
    memset(&hhti, 0, sizeof(hhti));

But you can write it much more easily and safely:

    HDHITTESTINFO hhti = {};

If we are talking about the constructor:

    class C {
      int A[100];
    public:
      C() { memset(A, 0, sizeof(A)); }
    };

It is possible to write as follows:

    class C {
      int A[100] = {};
    public:
      C() { }
    };

Developers sometimes forget that the second argument sets the value of a single byte that is used to fill the buffer. What is confusing is that the second argument of the memset function is of int type. As a result, errors appear which can be classified as CWE-628: Function Call with Incorrectly Specified Arguments. Let's look at an example of such an error that I noticed in the V8 engine, used in the Chromium project.

    void i::V8::FatalProcessOutOfMemory(
        const char* location, bool is_heap_oom) {
      ....
      char last_few_messages[Heap::kTraceRingBufferSize + 1];
      char js_stacktrace[Heap::kStacktraceBufferSize + 1];
      i::HeapStats heap_stats;
      ....
      memset(last_few_messages, 0x0BADC0DE, Heap::kTraceRingBufferSize + 1);
      memset(js_stacktrace, 0x0BADC0DE, Heap::kStacktraceBufferSize + 1);
      memset(&heap_stats, 0xBADC0DE, sizeof(heap_stats));
      ....
    }

PVS-Studio warnings:

A developer decided to fill the memory blocks with the 0x0BADC0DE value, so that it was easier to understand the situation when debugging. However, the memory will be filled with the byte value 0xDE. What the programmer does in this code is a low-level operation, and here it is harder to do without memset than in the situations described earlier. The buffers' size is not a multiple of 4 bytes, so simply using std::fill as before will not work. A programmer will have to write and use his own function.
    void Fill_0x0BADC0DE(void *buf, const size_t size) {
      const unsigned char badcode[4] = { 0xDE, 0xC0, 0xAD, 0x0B };
      size_t n = 0;
      generate_n(static_cast<char *>(buf), size, [&] {
        if (n == 4)
          n = 0;
        return badcode[n++];
      });
    }

Recommendation

There is no special recommendation here. Once again, we have seen that the memset function is actually not needed, as it does not solve the programmer's task. The memset function is also used for clearing private data when it is no longer needed. This is wrong. If a buffer with private data is not used in any way after the call of memset, the compiler may remove the call to this function. This defect is classified as CWE-14: Compiler Removal of Code to Clear Buffers. I already anticipate the objection that a compiler cannot remove a memset call. It can. It does so as an optimization. To understand the topic, I would like to suggest carefully studying the following article: "Safe Clearing of Private Data". Let's see how these errors look in practice. We will start with the WebRTC library used in Chromium.

    void AsyncSocksProxySocket::SendAuth() {
      ....
      char * sensitive = new char[len];
      pass_.CopyTo(sensitive, true);
      request.WriteString(sensitive); // Password
      memset(sensitive, 0, len);
      delete [] sensitive;
      DirectSend(request.Data(), request.Length());
      state_ = SS_AUTH;
    }

PVS-Studio warning: V597 CWE-14 The compiler could delete the 'memset' function call, which is used to flush 'sensitive' object. The RtlSecureZeroMemory() function should be used to erase the private data. socketadapters.cc 677

The memset call will be removed by the compiler in a Release build with a probability close to 100%. Ow ow ow! The password will remain hanging out somewhere in memory and, theoretically, can be sent somewhere. I am serious, this really happens. In the same library I came across 3 more similar errors. I will not describe them because they are similar.
I will just cite the appropriate analyzer messages:

Recommendation

Never use the memset function for clearing private data! You should use special memory clearing functions that the compiler is not allowed to remove for its optimization purposes. Note: this concerns not only C++ programmers, but C programmers as well. Visual Studio, for instance, offers the RtlSecureZeroMemory function. Starting with C11, you can use the memset_s function. If necessary, you can create your own secure function. There are a lot of examples on the internet of how to write one. In the case of Chromium, it is probably reasonable to use the OPENSSL_cleanse function. If you are writing a C++ program and you want to write a call to memset, then stop. Most likely, you will do great without this dangerous function.
https://www.viva64.com/en/b/0553/
Hi! I've been using Django for 10 years now, ranging from dozens of personal projects to highly active websites accommodating millions of users. I'm using Django to showcase, but this is really about the Web and HTTP, so there's no reason it can't be applied in {insert your favorite web framework}. Hi! As web developers we have to be mindful about a lot of attack types and follow best practices to keep our users and data safe. My focus point during my master's studies was network security, and I'm passionate about this topic and follow recent developments in this area. In my professional life, I'm managing the backend of a website with many millions of users, and this puts a lot of pressure on our shoulders to do our best. Recently we have come across credential stuffing attacks. Credential stuffing attacks are badass and get you thinking, whose fault is that… Very short answer: add the following snippet to the top of your wp-config.php file.

    if (strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false)
        $_SERVER['HTTPS'] = 'on';
    else
        $_SERVER['HTTPS'] = 'off';

Longer answer: Hi! I was recently setting up a WordPress website for our CV Application and suddenly the wp-admin URL started redirecting to itself, to the same exact URL. And when this happens enough times, Google Chrome just spits out the "err_too_many_redirects" error, as there's probably an infinite loop. I'm using Cloudflare to manage my SSL and all the caching, plus some more. My Cloudflare SSL setting was set to "Flexible" mode; this works as follows: the browser sends… Hey everyone! In Part I of this series, we investigated how to cache a long-running function, and what the common pitfalls are with caching. Now it will get real and a bit more advanced; this will be production-grade code that has been battle-tested with millions of users :) Want to read this story later? Save it in Journal. This is the current situation, it works OK.
What I don't like about this is that if 100 requests come in a burst, all of them will experience a cache miss and hence start calculating and blocking the gunicorn workers. … Hi! Caching is the most important aspect of scaling, along with database indices. It's not impossible to get up to 99% speed improvements using either one, when done right. I've been using Django for over 6 years and I will be sharing my learnings from when I had to scale beyond millions of users. The techniques and insights I share are by no means limited to Python or Django; they are universal. For this tutorial Django's built-in in-memory cache will be enough; you don't need to set up a redis or memcache server. You can find all the code I use under… Hey everyone! Last month we rotated our secret keys in our production environment, to keep up with the best security practices. There was some friction and I will tell you how to avoid it. There are millions of users in this particular app, so a tiny bit of convenience multiplied by a million can be a problem. This solution will work if you are using database-backed sessions, which I assume the majority of websites do, as that's the default configuration; otherwise, still read on, it will help you roll out your own version that works for your session engine. Writing a CV can be a stressful experience, because there are so many resources and opinions about how to do it right. It can be hard to know which guide to follow, as well as what the best approach is regarding format, content, use of color, and whether or not to include a photo. A CV is commonly used when applying for employment opportunities to show a candidate's experience and achievements. Since its emphasis is on employment history, without one it can be especially hard to highlight your experience in an organized way.
So, if you have defined your career goals and want to leave a lasting impression on a potential employer and increase your chances of getting an interview, a good CV will do exactly that. The way you choose to highlight your skills and experience… How to make a quick and smart CV? How to stand out in the eyes of the hiring manager when he has several CVs on his table? One way to do that is by writing an extraordinary CV. A person's resume describes his skills and experiences, and if he does have all those things, but they're not well written on a resume, they won't help him get the job. But a well-written resume will do you great favors. The smart resume builder app helps you to write your CV in such an exceptional way that it describes your personality positively… If your aim is to learn programming and get to know a bit better how the world runs, the first program you should ever write is a "hello world program". So would you rather write this:

    public class HelloWorld {
        public static void main (String[] args) {
            System.out.println("Hello World!");
        }
    }

or this:

    print("Hello World!")

They both do the same thing; the first one is Java, the plague of the programming world, and the second one is Python, clear and understandable. I have been coding for 15 years now and I have used various languages in various contexts. I can see why Java… Python/Django Developer — TU München ’14
https://eralpb.medium.com/?source=post_internal_links---------6----------------------------
SYNTAX

C Syntax
    #include <mpi.h>
    int MPI_Wait(MPI_Request *request, MPI_Status *status)

Fortran Syntax
    INCLUDE 'mpif.h'
    MPI_WAIT(REQUEST, STATUS, IERROR)
        INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

C++ Syntax
    #include <mpi.h>
    void Request::Wait(Status& status)
    void Request::Wait()

INPUT PARAMETER
    request - Request (handle).

OUTPUT PARAMETERS
    status - Status object (status).
    IERROR - Fortran only: Error status (integer).

DESCRIPTION

A call to MPI_Wait returns when the operation identified by request is complete. The call returns, in status, information on the completed operation. The content of the status object for a receive operation can be accessed as described in Section 3.2.5 of the MPI-1 Standard, "Return Status." The status object for a send operation may be queried by a call to MPI_Test_cancelled. One is allowed to call MPI_Wait with a null or inactive request argument. In this case the operation returns immediately with empty status.

NOTES

Successful return of MPI_Wait after an MPI_Ibsend implies that the user send buffer can be reused, i.e., data has been sent out or copied into a buffer attached with MPI_Buffer_attach. Note that, at this point, we can no longer cancel the send (for more information, see Section 3.8 of the MPI-1 Standard, "Probe and Cancel"). If a matching receive is never posted, then the buffer cannot be freed. This runs somewhat counter to the stated goal of MPI_Cancel (always being able to free program space that was committed to the communication subsystem). Example: Simple usage of nonblocking operations and MPI_Wait.
http://manpages.org/mpi_wait/3
23 September 2006 17:50 [Source: ICIS news] By Hilde Ovrebekk MONTE CARLO (ICIS news)--High feedstock costs, competition from the Middle East and Asia and the future of quarterly olefins contracts are the burning challenges facing the European petrochemicals industry. But the sector was still thriving and should be able to cope with these as it had in the past, Albert Heuser, president of BASF's petrochemicals division, told ICIS news on Saturday on the sidelines of the annual EPCA petrochemicals conference. He said a shortage of material and engineering skills could mean some of the mega projects planned in the Middle East may become more costly than projected, which could lead to delays and even some being scratched, giving the European industry some respite. “In BASF has in the past few years invested $4bn in About 10 days ago, the company signed an agreement with This will also help the company enhance its ethylene and butadiene value chains, he said. The total additional investment at the site would amount to $500m. In “Looking to the supply situation in Heuser said margins were also hit by the prevailing quarterly ethylene and propylene contract settlements and called for a change to monthly contract negotiations. “We need monthly prices instead of quarterly prices. The market of the raw material is that volatile that it's not a good situation for quarterly prices. That's the case for both sides,” he said. “I think it's time now to really talk about this,” he said, adding that the company already has some bi-monthly ethylene contracts in place with some of its customers.
Heuser, who has taken over as president of BASF's petrochemicals division from industry veteran Werner Praetorius, who retired earlier this year, said that in his new role he would develop his predecessor's strategy further to make sure the company keeps growing. “I have known Werner Praetorius for a very long time and we have worked together on many projects,” Heuser said. “I was responsible for the restructuring of He said that some work has been done in the industry to improve its situation but that a lot more needs to be done.
http://www.icis.com/Articles/2006/09/23/1093355/epca-06-interview-europe-in-tough-fight-heuser.html
Unity 2018 2018.3.13f1 Release Notes

Fixes

- Android: Fixed an issue on Mali GPUs where font textures would occupy 4 times more memory than on other GPUs. (1132219, 1134227)
- Android: Fixed crash when AndroidJavaProxy is invoked while app is in background. (1126040, 1140959)
- Android: Fixed GPU skinning on Mali GPUs. (1131452, 1131525)
- Android: Fixed Managed Stripping Level when used with Android and mono backend. (1111490, 1137052)
- Android: Touch and mouse position's y value will be inverted in the native backend instead of managed code in the new input system.
- Animation: Fixed Animator.keepAnimatorControllerStateOnDisable not keeping default values when disabling and re-enabling a GameObject. (1076483, 1137982)
- Asset Bundles: Fixed a case where Progressive Lightmapper data was being lost when building Asset Bundles with Scriptable Build Pipeline. (1123968, 1124315)
- Asset Import: Fixed import of Blender files with Blender 2.80. (1115353, 1140790)
- Audio: Fixed AudioClips not supported in Packages. (1069085, 1139247)
- Build Pipeline: Fixed an issue with Scriptable Build Pipeline not building multiple SpriteAtlas texture pages. (1139343, 1140331)
- Editor: Fixed issue related to label control not showing full script file names in Search Execution Order editor. (1122059, 1130263)
- Editor: Restricted Editor debugging to clients on same machine. (1131839, 1140291)
- Graphics: Fixed crash when using shader with Error and enabling SRP batcher + Async loading new scenes. (1132280, 1143390)
- Graphics: Fixed tessellation shader codegen. (1120434, 1127872)
- IL2CPP: Fixed SocketException not thrown when a socket attempts to bind to an address and port that is already in use by another socket on Windows. (1133587, 1134332)
- IL2CPP: Improved the performance of Socket Poll operations on iOS. (1136660, 1139296)
- IL2CPP: Improved the performance of WaitAny on non-Windows platforms. (1111339, 1122745)
- iOS: Fixed Gamepad.startButton not working correctly.
  Note: on iOS, startButton (or the Menu button) only reports a down event; there is no up event, which is why an up event is simulated right after the down event to simulate a click.
- iOS: Fixed occasional crash when destroying WebCamTexture. (1090053, 1108709)
- iOS: Fixed Pen.tilt values not being correctly returned in the new input system.
- iOS: Fixed SendMessage not working on iOS/tvOS simulator. (1138257, 1139344)
- Mobile: Fixed Application.Unload crashing on il2cpp. (1135018, 1140281)
- Mobile: Fixed rare race condition when warming up shaders. (1115274, 1134733)
- Mono: Prevented a TypeLoadException which can occur in the player at run time when the Medium or High managed stripping level is used with the Mono scripting backend. (1121958, 1135047)
- Particles: Fixed an issue where 1 particle of each mesh type would be rendered when a system did not actually have any particles using that mesh, when using GPU Instancing. (1139308, 1140353)
- Particles: Fixed crash when resetting Particle System component via the cog wheel in the Editor. (1131320, 1140344)
- Particles: Fixed FrameDebugger corruption when using Particle Systems. (1136275, 1140341)
- Particles: Fixed particles not being removed from the system when using SetParticles with zero remaining lifetime. (1130703, 1140343)
- Particles: Fixed per-particle sorting when using GPU Instanced particles. (1139313, 1140350)
- Particles: Fixed texture property issue with Trails texture and Sprite mode being used together. (1127679, 1140346)
- Scene Management: Fixed API and UI incorrectly reporting added/removed object overrides on disconnected Prefab instances. (1115758, 1134971)
- Scene Management: Fixed driven properties causing RectTransform to temporarily show up in Overrides dropdown after reverting or applying anything in the Comparison view. (1131854, 1135007)
- Scene Management: Fixed Prefab instances with driven properties showing Apply All and Revert All buttons in Overrides dropdown even when there are no overrides.
(1131854, 1135007) RectTransform properties of nested Canvas getting incorrectly modified upon entering Prefab Mode. (1132796, 1144694) Scripting: Improved performance when calling GameObject.AddComponent for a nested MonoBehaviour class inside a namespace. (1085304, 1118037) errors that popup in editor when loading a timeline with exposedreferences from a instanced prefab loaded from an assetbundle. (1120186, 1129365) 变更集: 06548a9e9582
https://unity3d.com/cn/unity/whats-new/2018.3.13
From: "Conor MacNeill" <conor@cortexebusiness.com.au>

> Steve Loughran wrote:
> > Let's agree on the concept of roles firstly then :-)
>
> +1
>
> I have not implemented roles in mutant as I have always found the
> descriptions relatively complex. What are the goals of defining and
> using roles? As far as I can see this is to allow extensibility,
> primarily the ability to define new nested elements for a task without
> changing the task. Mutant supports this capability in, IMHO, a simpler
> way, although it is perhaps less appealing syntactically.

To jump ahead, I really dislike Mutant's extensibility approach. I
wouldn't mind it as a way to resolve ambiguities, but I really dislike it
being the only notation for doing this. We are talking not only about
exotic things like <ejbjar> but also basic things like <condition>,
<mappers>, etc. Needing to use Mutant's notation all over the place
defeats the whole purpose.

> To understand the proposals being put forth to support roles, I have
> studied Jose Alberto's sandbox proposal. Jose Alberto, could I ask that
> you do not use tabs. You may also consider using a tool like checkstyle
> to verify your Javadoc. Anyway, this is my understanding of how roles
> work.

I will do that when things are a little more stable; with so many ideas
coming from you guys, I am still changing things around. Just as an
initial note to my comments below, let me remind people that my proposal
is constrained by having to maintain the semantics of the code currently
in ANT1; many of the ordering decisions have to do with the fact that in
ANT1 that is the order in which things are done.

> Lets say we are presented with the following construction
>
> <foo>
> <bar>
>
> where foo is an instance of Foo class and we are trying to create the
> nested element bar. It appears that before the standard addBar,
> createBar methods are checked, the code will attempt to use the role
> framework to create the nested element. I think myrmidon does this in
> the other order.

Yes, this is because in ANT1, for tasks (role TaskContainer) the method
"addTask" will not be taken into account as an introspector method. In
the generalization this means that roles take precedence over
introspected methods.

> ProjectHelper calls IntrospectionHelper to create the nested element.
>
> IntrospectionHelper calls Project.createInRole.
>
> Project.createInRole calls SymbolTable.findRoles to find all Roles which
> are implemented by Foo.class. This works by iterating through all the
> roles in the system and checking if Foo implements the Role's
> corresponding interface. findRoles returns a list of the satisfying Role
> names.
>
> Let's say there are three roles - thing, widget, dunsel - returned.

Notice that in general, I think, the number of roles will be small, and
that in particular very few roles will apply at any given point.

> For each of these roles, Project calls createForRole, passing the name
> of the role and the name of the nested element (bar in this case). For
> each role, it attempts to find a factory calling SymbolTable.get (I
> think this should be renamed getFactory). The factory object is used to
> create the instance. If more than one factory is found, an error is
> thrown. Lets say Foo implements Thingable. "bar" must have been
> registered as supporting the Thingable role (not that it implements
> Thingable)

Good point about the getFactory(). I would say that bar must define a
type appropriate for a Thingable container. The English here is not easy,
since what in essence we need to say is that the type of <bar>, as
defined in role Thingable, must be a type-compatible argument for the
method of the Thingable interface.

> The symbolTable object is consulted each time to get the interface
> method corresponding to the role. If only one instance is created, the
> interface method is invoked on foo with the instance.
>
> foo.addThing(bar);

This is a fair description. Notice that the reason it is the way it is,
is to make it compatible with ANT1, and in particular with the way in
which ANT1 defines containers of tasks (i.e., interface TaskContainer).
This is why the role is defined by the interface implemented by the role.
Additionally, in myrmidon, if I remember correctly, there is a
requirement that the methods for roles must have interfaces as the type
of their arguments; but since ANT1 uses classes instead of interfaces to
define almost everything, this approach would have problems in ANT1.

> If I understand Peter's description of roles in myrmidon (from 7th Jan
> email, I also had a quick look at the code but it wasn't clear to me),
> it is pretty similar to the above, although the role is associated with
> the interface implemented by the nested instance rather than by the
> "container" class. IOW, rather than searching for every role interface
> implemented by Foo.class, myrmidon searches for interfaces based on the
> existence of add(X x) methods. It determines the roles by looking up the
> role that corresponds to class X.
>
> If we boil down what roles do, I see it as
> 1. determines the type of the instance that should be created
> 2. determines the method which should be called to add that instance
>
> I took a slightly different approach to that problem in mutant. The
> syntax I use is
>
> <foo>
> <thing ant:

As said before, I really dislike the syntax in Mutant, in particular
because with a syntax like that there is no need for the "ant:type"
attribute; one could get the same result with similar notation using the
current ANT1. But for some reason people dislike it (that is why we keep
on having things like <ejbjar>, <condition>, etc.). If your syntax were
instead of the form:

<foo>
<bar ant:

then at least I could see this syntax as a way to disambiguate things.
But I would still like to make such an attribute optional.

> So, the nested element name identifies the method to be called
> (addThing()) and the data type is given by the ant:type attribute. The
> name, "bar" is a standard Ant datatype registered with typedef, for
> example. I am using the ant namespace for the necessary metadata. I also
> use it to identify reference instances (ant:refid), I think myrmidon
> adds "-ref" to the element name.
>
> To give an example using real Ant types.
>
> <foo>
> <fileset ant:
>
> Mutant's approach is pretty simple and also quite explicit. The build
> file writer asks explicitly for the type they want, rather than the
> background plumbing figuring it out. The roles approach has always
> seemed a little complicated to me. I guess the downside in my approach
> is that the syntax is a little ugly.

Yes, ugly, too ugly, uglier than a box of chocolates :-)

> Anyway, I'd appreciate your thoughts. If everybody thinks roles are the
> way to go, I guess I'll have to handle the complexity. Also it would
> probably be worth harmonizing on which interface we are going to
> identify with the role.

Whatever we decide, we need to take into account the restriction of not
breaking ANT1 and the way it does things. In particular, this means that
the mutant and myrmidon approaches will not work as-is, because the
syntax is incompatible with the tasks already in existence. That does not
mean that mine should be taken as-is either; I am open to comments.

Jose Alberto

--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
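For readers following the thread, the role-resolution flow Conor describes (findRoles over the container's interfaces, a factory lookup per role, an error on ambiguity, then invoking the role's add method) can be sketched in a few lines. This is an illustrative Python sketch of the algorithm only; the names (SymbolTable, create_in_role, Thingable) stand in for the Java classes discussed and are not the actual Ant or mutant API.

```python
class AmbiguousRoleError(Exception):
    """More than one (role, factory) pair can build the requested element."""

class Thingable:
    # Stand-in for a role's marker interface with its add method,
    # cf. foo.addThing(bar) in the thread.
    def add_thing(self, thing):
        self.things = getattr(self, "things", []) + [thing]

class SymbolTable:
    def __init__(self):
        self.roles = {}      # role name -> (marker interface, add-method name)
        self.factories = {}  # (role name, element name) -> factory callable

    def find_roles(self, container):
        # Iterate all roles in the system, keeping those whose
        # corresponding interface the container implements.
        return [name for name, (iface, _) in self.roles.items()
                if isinstance(container, iface)]

def create_in_role(table, container, element):
    # For each role the container supports, look for a factory
    # registered for this element name; more than one match is an error.
    matches = [(role, table.factories[(role, element)])
               for role in table.find_roles(container)
               if (role, element) in table.factories]
    if len(matches) > 1:
        raise AmbiguousRoleError(element)
    if not matches:
        return None          # fall back to addBar/createBar introspection
    role, factory = matches[0]
    instance = factory()
    # Invoke the role's interface method on the container: foo.addThing(bar)
    getattr(container, table.roles[role][1])(instance)
    return instance

# Usage: <foo><bar/></foo>, where Foo implements Thingable and "bar" is
# registered as a type for the "thing" role.
class Bar: pass
class Foo(Thingable): pass

table = SymbolTable()
table.roles["thing"] = (Thingable, "add_thing")
table.factories[("thing", "bar")] = Bar
foo = Foo()
bar = create_in_role(table, foo, "bar")
```

Seen this way, the design question in the thread is only about where the (role, element) -> factory table is populated from: the container's interfaces (roles) or an explicit ant:type attribute (mutant).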
https://mail-archives.eu.apache.org/mod_mbox/ant-dev/200202.mbox/%3C00fb01c1b9ad$8ec28cc0$0100a8c0@jose%3E
#10096 closed (wontfix) — Added DBMS LINTER backend

Opened 7 years ago. Closed 7 years ago. Last modified 7 years ago.

Description

Added a backend for the LINTER DBMS to the list of backends.

New files:

- django\db\backends\linter\__init__.py
- django\db\backends\linter\base.py
- django\db\backends\linter\client.py
- django\db\backends\linter\creation.py
- django\db\backends\linter\introspection.py
- django\db\backends\linter\query.py

Changed files:

- docs\faq\install.txt
- docs\ref\databases.txt
- docs\ref\settings.txt
- docs\topics\install.txt

Attachments (1)
Change History (4)

Changed 7 years ago by chernyshov

comment:1 Changed 7 years ago by mtredinnick

- Resolution set to wontfix
- Status changed from new to closed

Nice work! :-) However... our policy is not to add new database backends until they are well established with a history of continued support. Database backends can be used without requiring them to be included in core (this is the fastest link I can find for the moment). It's a big problem if we include something and then the maintainer(s) lose interest, or aren't responsive, or the quality just isn't there. Everybody looks bad, and the users become disappointed when something that looks like it's supported turns out not to be. We've had this problem in the past and are much more careful now. So, the best idea is to create your backend as a standalone package and make it available via a service such as Google Code. That way there's a place for people to report bugs and everything. You won't need to patch any Django code to do this; put the backend in its own module namespace and people can set the DATABASE_ENGINE setting appropriately. If, at some point in the future, there's a lot of usage of this code and a history of support and bug fixes, you might wish to propose it for inclusion in core. But that is a last step, not a first one. Closing as wontfix for the above reasons.
You might well want to make an announcement about it on django-users, though. There could well be people interested in trying this out.

comment:2 follow-up: ↓ 3 Changed 7 years ago by anonymous

=( LINTER is needed for Russian security systems. No LINTER - no Django =(

comment:3 in reply to: ↑ 2 Changed 7 years ago by ubernostrum

> =( LINTER is needed for Russian security systems. No LINTER - no Django =(

Please read Malcolm's explanation again: there is no need for this backend to be directly distributed as part of the core Django framework, because Django can use any class implementing the proper API as a backend, regardless of whether that class is part of Django or is a third-party module installed somewhere else. Therefore, you can take the code above, put it into your own package, point your DATABASE_ENGINE setting at it, and do whatever you need with Django.

added LINTER backend for Django
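To illustrate the standalone route Malcolm describes: a backend living in its own package is selected purely through settings, with no Django core files patched. The package name `linter_backend` below is hypothetical, and the sketch uses the modern `DATABASES` dict; the Django of this ticket's era used the flat `DATABASE_ENGINE` setting instead, but the principle is the same.

```python
# settings.py sketch -- 'linter_backend' is a hypothetical third-party
# package on the Python path, not a real published backend.
DATABASES = {
    "default": {
        # Dotted path to a package providing the backend classes
        # (base.py with a DatabaseWrapper, etc.), instead of a
        # built-in like "django.db.backends.postgresql".
        "ENGINE": "linter_backend",
        "NAME": "mydb",
        "USER": "dbuser",
        "PASSWORD": "secret",
    }
}
```

If the package follows the backend API, Django imports it like any built-in engine; that is exactly why the ticket's patch to core was unnecessary.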
https://code.djangoproject.com/ticket/10096
CUFP McD I dig <a href="">SpamAssassin</a> - a Perl based spam filtering suite.<p> It's somewhat biased towards a Unix flavored mail server solution though, and I read my mail at home on Win32 with a POP3 mail client. Itch, meet scratch.<p> This is a simple POP3 protocol proxy which will intercept mail messages en route from a POP3 server and wash them thru SpamAssassin before returning them to the mail client (Mozilla, Outlook, Eudora, etc). It's feature rich (for what it <i>is</i> - it's pretty simple at the end of the day) but it does require some know-how to get set up. There's also a slew of cool SpamAssassin features that it doesn't support, but might someday. It was written and debugged with Win32 use in mind, but I suspect the code is pretty portable to any modern Perl install - let me know if you have success on a non-Win32 platform with it!<p> There's a manpage with detailed instructions on installing, configuring, and using this proxy <a href="">here</a>, and here's a nice zip with the script and related goodies: <a href="">pop3proxy.zip</a><p> The code is posted below - it's the same as the zip, but I've removed the POD directives for brevity's sake. See the manpage if you plan on setting about using this.<p> Peace,<br> -McD<p> <READMORE> <code> #!perl -w # Pop3proxy - a SpamAssassin enabled POP3 proxy designed for Win32 # users. use strict; # Set this to zero to turn off all debugging statements. Set to 1 for # basic debugging, which is pretty verbose, set it to 2 to add a dump # of key data structs on connect, set it to 3 to add a dump of every # read/write we do. (Oy) use constant DEBUGGING => 1; # Seems that SpamAssassin wants to remove the dependency on # Time::HiRes. I only need it for measuring performance, so I'll only # include it if it's available. Have to eval the "use constant" # statements to avoid redefinition warnings. # # I use constants for debugging switches because I believe they get # optimized out by the compiler if they're false.
I could be wrong. BEGIN { eval "use Time::HiRes"; if ($@) { eval "use constant TIMERS => 0"; } else { eval "use constant TIMERS => 1"; } } # A set of enumerated reasons why we're snarfing a multiline response # for a socket. use constant RETR => 1; use constant TOP => 2; use constant CAPA => 3; use IO::Socket; use IO::Select; use FindBin; use Mail::SpamAssassin; use Getopt::Long; ######################### # A BUNCH OF EVIL GLOBALS ######################### # Set this to be a file that will contain the debug log. Set to an # empty string to debug to STDOUT. --logfile command line arg sets. my $logfile = 'pop3proxy.log'; # Hostmap - keys are ports to listen to on localhost, values are # hostname:port to proxy connections on the key port to. Set up by # the command line --host arg or by the hostmap.txt config file. # # A simplest case - you get your mail from a server server named # pophost.isp.com, on the standard POP3 port (110): # # my %hostmap = ( 110 => 'pophost.isp.com:110' ); # # ...And you change your mail client to get mail from localhost. # # Fancier case - you pop mail off of two hosts, pophost.isp.com and # mail.yetanother.org: # # my %hostmap = ( # 817 => 'pophost.isp.com:110', # 818 => 'mail.yetanother.org:110', # ) # # In that case, the proxy listens to TWO sockets on localhost - 817, # and 818, proxying off to two separate remote hosts as indicated. # # Note that for this to work, you need to be able to tell your mail # client to connect to two different ports on localhost to find the # proxy - namely, 817 for pophost.isp.com and 818 for # mail.yetanother.org. Some mail clients, like Netscape 4.5's, won't # let you specify the port to use for a pop3 connection. Oops. # Others, like Mozilla 1.0, will let you set the port, but won't allow # two servers to be on the same host (localhost in this case). 
You # can work around THAT by creating another alias for localhost in your # C:\Windows\Hosts file: # # 127.0.0.1 localhost MyHostName # # ...and then configuring one account for localhost:817 and the other # for MyHostName:818 my %hostmap = (); # Respect_byte_count - If TRUE, then we do not alter the byte count of # the message when marking it as spam - instead, we overwrite portions # of the headers, such as changing the first five characters of the # Subject: line to "*SPAM*" (a shortened form of SpamAssassin's famous # subject prefix). Set by the command line --nopad arguement. # # This, because under certain conditions the POP3 protocol indicates # message and mailbox sizes, and the safe thing is not to enlarge # those sizes while marking a message as spam. # # If there is no Subject: line in the mail headers (there doesn't have # to be, after all) or if it's less than 5 bytes, then we use the # first Received: line we find instead. # # Setting this value to FALSE (0) seems to work with most mail # clients, and it causes us to proxy back the mail as it's been # modified by SpamAssassin, which gives you a wonderful great lot of # info about WHY it's labeled as spam, and also labels it clearly and # beyond doubt, and defangs the MIME contents, etc, etc - but it # *could* break the mail client. Harumph. my $respect_byte_count = 0; # If true, we let the POP3 "TOP" command go thru to the server, # otherwise, we don't proxy the TOP command and return an error back # to the client. Set by the command line --allowtop arguement. # # TOP is specified as an optional command, it shows you the headers of # a mail message and a configurable number of lines of the body. The # idea is that you can sort of "screen" what you choose to download or # not before you do. 
All well and good, but our spam filtering can # cause this to break when we scan the actual message during retrieval # and potentially modify or add to the headers, such as changing the # subject line to start with *****SPAM***** or something. # # This breaks the protocol a little and could have unusual or possibly # even destructive consequences. Since it's an optional part of the # protocol, most mail clients should be coded to work without it, # hence, by default, we avoid the problem by turning it off. my $allow_top = 0; # Here's the problem with using SpamAssassin in this way - given a # large enough message, he will take a LONG time to scan it, where # long is like sixteen minutes on a P-II 350 running Linux for a 3MB # text message. Maybe that was a degenerate case of some sort, but # there it is. If SpamAssassin takes long enough to scan a message, # the mail client (who's not getting any data in response to his RETR # command during all this) will eventually time out. Sockets close, # data is lost, etc, etc. Very bad, very difficult to fix and get on # with your life if you have a large mail message on the server that # keeps causing this. # # Hence, this config parameter. If a message exceeds this size while # we're snarfing it, we'll abandon the snarf, start passing the data # back to the client, and no scan of the message by SpamAssassin will # be performed. # # Setting this to zero turns this behavior off - all messages will be # scanned, regardless of size. # # I chose a 250K default for this value after analyzing a few months # worth of spam - 1500 messages. The average size was about 9K, the # largest was 110K. I figured double the largest would allow most of # the spam we see today to get scanned, without trouble. # # This has the added side effect of keeping our memory usage down - # that scan of a 3MB message took 86MB worth of memory. That's not # such a hot idea for a daemon. 
my $max_scan_size = 250000; # If we're invoked with a logfile for output using ActiveState's # wperl.exe, we can effectively hum along in the background. Nice. I # don't want to send the user to Task Manager to shut us down, and # under Win98 at least you get the nasty "application not responding" # dialog box because I'm busy waiting to select a socket, so # instead we have this - a port that we listen on for the purposes of # exiting. Any connection to it from localhost, and I'll get out of # town. # # The default is 9625 (which is otherwise unused). Set this to zero # to disable this behavior. my $exit_port = 9625; # Note CRLF == \015\012 if ($logfile) { open(STDOUT, ">> $logfile") or die "Can't redirect stdout: $!"; open(STDERR, ">&STDOUT") or die "Can't dup stdout: $!"; } $| = 1; # The SpamAssassin scanner. # # We tell it to use ./user_prefs, and not to try to copy in a default # if it's not there (because he goes looking for a template file in # all the usual places to copy over - all the usual *UNIX* places). # We tell it to run only local tests, because otherwise, you'll get # complaints and timeouts when it can't find a dcc app to run or it # can't find a DNS server or the network is the wrong color or # whatever. In theory, you could get away with DNS RBL checks, but # I've had a hard time making Net::DNS work on my Win32 setup. # Hackers welcome, best of luck. See notes at end of file. my $spamtest = Mail::SpamAssassin->new({ userprefs_filename => './user_prefs', dont_copy_prefs => 1, local_tests_only => 1, }); my $readable = IO::Select->new; my $writeable = IO::Select->new; # Create sockets to listen on. foreach my $port (keys %hostmap) { my $listener = IO::Socket::INET->new(LocalPort => $port, Listen => 5, Reuse => 1); die "Can't create socket for listening: $!"
unless $listener; print "Listening for connections on port $port (proxy $hostmap{$port})\n" if DEBUGGING; $readable->add($listener); $proxyto{$listener} = $hostmap{$port}; } # Create the "exit socket" - any connection on this socket from # localhost will cause us to exit. my $exit_socket; if ($exit_port) { $exit_socket = IO::Socket::INET->new(LocalPort => $exit_port, Listen => 1, Reuse => 1); $readable->add($exit_socket); } while(1) { my ($toread, $towrite) = IO::Select->select($readable, $writeable); foreach my $socket (@$toread) { if ($socket == $exit_socket) { all_done($socket); next; # Just in case it wasn't from localhost } # Is it a new connection? if (exists $proxyto{$socket}) { dump_data_structs() if (DEBUGGING > 1); # Open connection to remote, add to readable set, map it # to this new client connection. my $remote = IO::Socket::INET->new(PeerAddr=>$proxyto{$socket}); $readable->add($remote) if $remote; if (not $remote) { # Break the incoming new client off, create a new # listener to try again. print "Connect to remote: $proxyto{$socket} FAILED: $@\n" if DEBUGGING; my $port = $socket->sockport; $socket->close; $readable->remove($socket); my $listener = IO::Socket::INET->new(LocalPort => $port, Listen => 5, Reuse => 1); die "Can't create socket for listening: $!" unless $listener; $readable->add($listener); $proxyto{$listener} = $hostmap{$port}; next; } # Accept the connection and add it to our readable list. my $new_sock = $socket->accept; $readable->add($new_sock) if $new_sock; die "Can't create new socket for incoming connection: $!" unless $new_sock; # Create proxy/peer mapping, set client/server indicators, # create buffers, etc. 
$peer{$new_sock} = $remote; $peer{$remote} = $new_sock; $is_client{$new_sock} = 1; $is_client{$remote} = 0; $message{$new_sock} = ''; $snarfing{$new_sock} = 0; # The first thing we'll see is a response to no command at # all - "+OK Welcome to foobar.com" - so we seed the # command queue with a dummy command to eleminate warnings # later on. $client_commands{$new_sock} = [('none')]; foreach ($new_sock, $remote) { $reading_buf{$_} = ''; $writing_buf{$_} = ''; } if (DEBUGGING) { print "\nNew connection:\n"; print "From: ", $new_sock->peerhost, ':', $new_sock->peerport,"\n"; print "To: ", $remote->peerhost, ':', $remote->peerport, "\n"; } } else { # It's an established connection my $key; if (DEBUGGING) { if ($socket->connected) { $key = $socket->peerhost . ':' . $socket->peerport; } else { $key = "$socket"; } } my $proxy; # Which socket we're going to proxy this data to if (exists $peer{$socket}) { $proxy = $peer{$socket}; } else { # No peer. print "\n$key - peer gone on read" if DEBUGGING; # No need to keep hearing about how it's ready to be # read - we've got no use for subsequent data. $readable->remove($socket); # Tear down connection, unless there's data waiting to # be written to it - in that case, we'll catch it in # writeables and close it when we're done. if (! data_waiting($socket)) { print ", nothing to write, closing socket" if DEBUGGING; clean_up($socket); } print "\n" if DEBUGGING; next; } # Why 4096 bytes? I dunno. You got a better buffer size? unless (my $n = sysread($socket, $reading_buf{$proxy}, 4096, length($reading_buf{$proxy}))) { warn "sysread: $!\n" if not defined $n; # Shut down the socket print "\n$key - socket close on read" if DEBUGGING; clean_up($socket); # Remove the proxy map delete $peer{$socket}; delete $peer{$proxy}; if (! data_waiting($proxy)) { # No pending data - tear down the peer as well. print ", closing peer too" if DEBUGGING; clean_up($proxy); } print "\n" if DEBUGGING; next; } if (DEBUGGING > 2) { $is_client{$socket} ? 
print "C< " : print "S< "; print "\n"; } # Got data from a socket. Go do something clever with it. run_hooks($proxy); } } # End of readables # Next, do something with each socket ready to write. Like, write # to it. foreach my $socket (@$towrite) { my $key; if (DEBUGGING) { if ($socket->connected) { $key = $socket->peerhost . ':' . $socket->peerport; } else { $key = "$socket"; } } my $wrote = syswrite($socket, $writing_buf{$socket}) or do { warn "syswrite: $!\n"; print "\n$key - socket close on write" if DEBUGGING; clean_up($socket); # Remove the proxy map if (exists $peer{$socket}) { my $proxy = $peer{$socket}; delete $peer{$proxy}; delete $peer{$socket}; if (! data_waiting($proxy)) { print ", closing peer too" if DEBUGGING; clean_up($proxy); } } print "\n" if DEBUGGING; next; }; if (DEBUGGING > 2) { $is_client{$socket} ? print "C> " : print "S> "; print "\n"; } # Scrub the just-written data from the buffer substr($writing_buf{$socket}, 0, $wrote, ""); # All done writing? if (! length($writing_buf{$socket})) { $writeable->remove($socket); if (! exists $peer{$socket}) { # No peer? Tear down connection. print "\n$key - peer gone after write, closing\n" if DEBUGGING; clean_up($socket); next; } } } # end of writeables } # data_waiting($socket) # # Returns true if there's any data waiting to be proxy'd to this socket. # # Reason this works - we only check data_waiting() on a socket *after* # we've closed it's peer. Closing the peer in clean_up(), below, will # have the effect of flushing any pending %message buffers (and # %reading_buf, for that matter) to %writing_buf, and hence, all the # data which is "waiting" is, in fact, guaranteed to now be waiting. sub data_waiting { my $socket = shift; return (length($reading_buf{$socket}) or length($writing_buf{$socket})); } # clean_up($socket) # # Given a socket, close it, stop selecting it for anything, clean up # all our structs that refer to it, set the peer if any to flush # buffers. 
sub clean_up { my $socket = shift; # This socket is history. If there's a peer, then that peer # currently has all the data it's ever gonna get. Flush that data # into the writing_buf and add it to the writeable set. # # Ok, technically, this *could* burn you if what you were caching # away in %message was a multiline TOP response that you were # going to discard anyway, and now I'm going to flush it to the # client, instead. Look, the client is going to get an error # condition *anyway* because the darn socket is GONE, man, just # like that, in the middle of a multiline response! I will # venture to say that no harm will come of this - but if it does, # we can always make this behave a lot more like a "last ditch" # run_hooks() session. if (exists $peer{$socket}) { my $proxy = $peer{$socket}; $writing_buf{$proxy} .= $message{$proxy} if exists ($message{$proxy}); $writing_buf{$proxy} .= $reading_buf{$proxy}; $reading_buf{$proxy} = ''; $message{$proxy} = ''; $snarfing{$proxy} = 0; if (length ($writing_buf{$proxy})) { $writeable->add($proxy); print "\nFlushing peer on close\n" if DEBUGGING; } } # Note that you can apparently remove a socket more than once from # an IO::Select set. Also you can delete a key/value pair from a # hash that doesn't exist. Love Perl. DWIM. $readable->remove($socket); $writeable->remove($socket); $socket->close; delete $reading_buf{$socket}; delete $writing_buf{$socket}; delete $is_client{$socket}; delete $snarfing{$socket}; delete $message{$socket}; delete $client_commands{$socket}; delete $reading_multiline_response{$socket}; delete $snarf_start{$socket}; } # run_hooks($socket) # # This is where we hook the POP3 protocol. Called whenever a socket # gets new data in it's buffer, we can do whatever you want here. 
The # default is to wait until there's a \n in the %reading_buf buffer, then (in # a loop) move all those bytes into the %writing_buf buffer (giving us the # window to look at a full line of I/O), then add the socket to the # writeable set, thereby causing the contents of %writing_buf to get # flushed to the socket. # # Under certain conditions, though, we'll want to intercept the # protocol, at which point we snarf the data off into %message until # it's done, then we look at it or replace it or something, and THEN # we ship it off to %writing_buf for flushing to the client. # # Client commands are pushed onto a queue of commands, server # responses shift commands off that queue. This way we can support # pipelining client/servers, per rfc 2449 # # Note - logically, the %peer mapping must be intact when you get # here. The main loop enforces this. You may assume that # $peer{$socket} will exist and be valid in this routine. my $pos; sub run_hooks { my $socket = shift; # This loop looks for the first occurance of a \n in a string, # then MOVES all of the string up to and including the \n into the # output buffer and adds the socket to the set of sockets we'd # like to write to. Then it loops looking for another \n. # # Just before the move, you can examine the beginning of # $reading_buf{$socket} to see what kinds of interesting thingies might # be in there, in the confidence that it's a real full line of # data from the protocol. You can say things like: # # $reading_buf{$socket} =~ /^(.*)$/m # /m lets $ match next to embedded \n $pos = -1; while (($pos = index($reading_buf{$socket}, "\012", 0)) > -1) { # Right here you can examine $reading_buf{$socket} if ($is_client{$socket}) { # Hooks here for data from the server to the client # Responses from the server are interesting. 
# They can be
# single line, in which case they MUST start with "+OK" or
# "-ERR", or else they're part of a multiline response,
# such as a LIST or RETR command, in which case they MUST
# end with a CRLF.CRLF.
        if ($reading_buf{$socket} =~ /^(\+OK|-ERR)/i
            and not $reading_multiline_response{$socket}) {
            # Response to a command
            my $command = shift @{$client_commands{$socket}};
            print $peer{$socket}->peerhost . ':' . $peer{$socket}->peerport
                . " (Server) said $1 to $command\n" if DEBUGGING;
            # Always include the greeting line in the log.
            if (DEBUGGING and $command eq 'none') {
                print $reading_buf{$socket};
            }
            die "Assertion failed: snarfing outside multiline response"
                if ($snarfing{$socket});
            # Only interested in snarfing successful response -
            # none of the error responses are multiline.
            if (substr ($1, 0, 1) eq '+') {
                if ($command =~ /^TOP$/i and not $allow_top) {
                    print "Snarfing TOP response\n" if DEBUGGING;
                    $snarfing{$socket} = TOP;
                }
                if ($command =~ /RETR/i) {
                    print "Snarfing RETR response\n" if DEBUGGING;
                    $snarf_start{$socket} = Time::HiRes::gettimeofday if TIMERS;
                    $snarfing{$socket} = RETR;
                }
                if ($command =~ /CAPA/i) {
                    print "Snarfing CAPA response\n" if DEBUGGING;
                    $snarfing{$socket} = CAPA;
                }
            }
        } elsif ($reading_buf{$socket} =~ m|^\.\015?\012|) {
            # End of a multiline response
            $reading_multiline_response{$socket} = 0;
            if ($snarfing{$socket}) {
                print "Detected end of snarfed multiline\n" if DEBUGGING;
                printf "Download took %.8f seconds\n",
                    Time::HiRes::gettimeofday - $snarf_start{$socket}
                    if (DEBUGGING and TIMERS);
                # At this point, $message{$socket} contains the
                # full multiline response, +OK up to but not
                # including this trailing ".CRLF".
                if ($snarfing{$socket} == RETR) {
                    # Right here, $message{$socket} is ripe for
                    # scanning.
                    scan_mail(\$message{$socket});
                    $writing_buf{$socket} .= $message{$socket};
                } elsif ($snarfing{$socket} == TOP) {
                    # Eat the .CRLF, add the error message to the
                    # output buffer, flush said output buffer,
                    # clean up your structs and move on.
                    substr($reading_buf{$socket}, 0, $pos+1, "");
                    $writing_buf{$socket} .= $no_top;
                    $message{$socket} = '';
                    $snarfing{$socket} = 0;
                    $writeable->add($socket);
                    next;
                } elsif ($snarfing{$socket} == CAPA) {
                    # Strips out the TOP response, if any.
                    $message{$socket} =~ s/\012TOP[^\012]*\012/\012/ig
                        if not $allow_top;
                    # Strips out the SASL response, if any.
                    $message{$socket} =~ s/\012SASL[^\012]*\012/\012/ig;
                    $writing_buf{$socket} .= $message{$socket};
                }
                $message{$socket} = '';
                $snarfing{$socket} = 0;
            }
        } else {
            # Part of a multiline response.  Flip the ready flag,
            # you won't be ready to see another response until you
            # see your CRLF.CRLF
            $reading_multiline_response{$socket} = 1;
        }
        # At this point, snarf data into %message if snarfing and
        # move along.
        if ($snarfing{$socket}) {
            $message{$socket} .= substr($reading_buf{$socket}, 0, $pos+1, "");
            # Check size of snarfed message and stop snarfing if it's
            # getting too big - see notes at $max_scan_size.
            if ($max_scan_size != 0
                and length($message{$socket}) > $max_scan_size) {
                print "Message exceeding max scan size, abandoning snarf\n"
                    if DEBUGGING;
                $writing_buf{$socket} .= $message{$socket};
                $message{$socket} = '';
                $snarfing{$socket} = 0;
                $writeable->add($socket);
            }
            next;
        }
    } else {
        # Hooks here for data from the client to the server
        # Spot the client's command, add to the queue.
        my ($command) = $reading_buf{$socket} =~ /^(\S+)\s/;
        print $peer{$socket}->peerhost . ':' . $peer{$socket}->peerport
            . " (Client) said $command\n" if DEBUGGING and $command;
        # AUTH is a special case, see discussion elsewhere.  Must
        # not have any commands in the queue, and we reply back to
        # the socket immediately with an error.
        if ($command and $command =~ /^AUTH$/i) {
            if (scalar(@{$client_commands{$peer{$socket}}})) {
                die "I so can't cope with AUTH commands while pipelining";
            }
            print "AUTH Rejected\n" if DEBUGGING;
            substr($reading_buf{$socket}, 0, $pos+1, "");
            # Note - $no_top is a generic -ERR response, works fine.
            $writing_buf{$peer{$socket}} .= $no_top;
            $writeable->add($peer{$socket});
            next;
        }
        push (@{$client_commands{$peer{$socket}}}, $command) if $command;
    }
    # Default action after all your shots at hooking and magic,
    # etc.: Move the data to the writing buffer, and set it up to
    # get written.
    $writing_buf{$socket} .= substr($reading_buf{$socket}, 0, $pos+1, "");
    $writeable->add($socket);
    }
}

sub dump_data_structs {
    # Dump your current key per-connection data structs
    print "\nExisting proxy/peer mappings:\n";
    print map "$_ => $peer{$_}\n", keys %peer;
    print "\nExisting is_client flags:\n";
    print map "$_ => $is_client{$_}\n", keys %is_client;
    print "Existing socket reading_buf buffers:\n";
    print map "$_ => $reading_buf{$_}\n", keys %reading_buf;
    print "Existing socket writing_buf buffers:\n";
    print map "$_ => $writing_buf{$_}\n", keys %writing_buf;
    print "Existing message buffers:\n";
    print map "$_ => $message{$_}\n", keys %message;
    print "Existing snarfing flags:\n";
    print map "$_ => $snarfing{$_}\n", keys %snarfing;
    print "Existing command queues:\n";
    print map "$_ => @{$client_commands{$_}}\n", keys %client_commands;
    print "Existing reading_multiline_response flags:\n";
    print map "$_ => $reading_multiline_response{$_}\n",
        keys %reading_multiline_response;
    print "Existing snarf_start values:\n";
    print map "$_ => $snarf_start{$_}\n", keys %snarf_start;
}

# @mail - array of lines of a mail message.  Some notes on memory
# usage here:
#
# Big mail messages getting copied about will chew up memory right
# quick.  I start with one copy of the message built up in a scalar
# buffer, then I need a second copy, broken out into an array of
# lines, for Mail::SpamAssassin::NoMailAudit to chew on.  That's two
# copies.
#
# I can save a copy's worth of memory by MOVING the lines from the
# scalar buffer into the array - but then, once SpamAssassin is done
# chewing on them, I have to put them BACK into the scalar buffer.  If
# I'm not removing them from the SpamAssassin::NoMailAudit object as I
# do that, I'm going to wind up with a second copy of the mail
# *anyway*.  And that kind of removal is nasty and creeps inside of
# the objects encapsulation, where I really ought not go.
#
# NoMailAudit::as_string() returns a copy of the mail as a string, but
# to do so, it creates a big ol' scalar on the stack to return.
# Simple, but it costs a THIRD chunk of memory the size of the
# message.

my @mail;

sub scan_mail {
    my $mailref = shift;
    my $bytecount = length $$mailref;
    $$mailref =~ s/\012\.\./\012\./g;    # un-byte-stuff
    @mail = split /^/, $$mailref;
    my $response = shift @mail;
    # SpamAssassin::NoMailAudit adds a Unix mbox From_ line, unless
    # you construct your NoMailAudit message with the (ahem,
    # undocumented) add_From_line param set to false.  That From_
    # kinda breaks the protocol - the client isn't expecting mbox,
    # he's expecting raw 822 mail - so we leave it out.
    my $message = Mail::SpamAssassin::NoMailAudit->new(
        data => \@mail,
        add_From_line => 0);
    my $start;
    $start = Time::HiRes::gettimeofday if TIMERS;
    my $status = $spamtest->check($message);
    printf "Spam check took %.8f seconds\n",
        Time::HiRes::gettimeofday - $start if (DEBUGGING and TIMERS);
    my $id = $message->get('Message-id') || '*none*';
    print "$bytecount bytes, ",
        $status->is_spam() ? 'SPAM' : 'NOT spam',
        ", Message-id: $id\n" if DEBUGGING;
    print $status->get_report() if DEBUGGING and $respect_byte_count;
    $status->rewrite_mail() unless $respect_byte_count;
    if ($status->is_spam ()) {
        if ($respect_byte_count) {
            # DAN - danger, you don't know if you're in the headers or not.
            $$mailref =~ s/\012Subject: [^\012]{6}/\012Subject: *SPAM*/i
                or $$mailref =~ s/\012Received: [^\012]{6}/\012Received: *SPAM*/i;
        } else {
            # What as_string() does as of SpamAssassin v2.31:
            #   return join ('', $self->get_all_headers()) . "\n" .
            #       join ('', @{$self->get_body()});
            $$mailref = $response . $message->as_string();
        }
    } else {
        if (not $respect_byte_count) {
            $$mailref = $response . $message->as_string();
        }
    }
    $status->finish();
    $$mailref =~ s/\012\./\012\.\./g;    # byte-stuff
}

sub all_done {
    my $socket = shift;
    my $new_sock = $socket->accept;
    if ($new_sock->peerhost eq '127.0.0.1') {
        print "Connection on exit socket, exiting\n" if DEBUGGING;
        exit;
    } else {
        print "Connection on exit socket from non-local host!\n" if DEBUGGING;
        $new_sock->close;
    }
}

sub read_config {
    open (CONFIG, "./hostmap.txt") or die "Can't read hostmap.txt: $!\n";
    # Straight from the cookbook 8.16
    while (<CONFIG>) {
        chomp;                  # no newline
        s/#.*//;                # no comments
        s/^\s+//;               # no leading white
        s/\s+$//;               # no trailing white
        next unless length;     # anything left?
        my ($port, $proxyto) = split(/\s*=\s*/, $_, 2);
        $hostmap{$port} = $proxyto;
    }
}

sub usage {
    print <<EOT;
Usage: $0 --host some.host.name [options]

Options include:

  --logfile filename  Use filename as the log file.  Default is
                      pop3proxy.log.  If the filename is omitted,
                      log to STDOUT.

  --nopad             If nopad is specified, then message sizes will
                      not be changed as a result of spam scanning.
                      The default is to add to the message size.

  --allowtop          If top is specified, then the POP3 "TOP"
                      command will be passed through to the server.
                      The default is to reject client TOP commands
                      with an error message.

  --maxscan bytes     Messages which exceed this size will not be
                      scanned for spam.  The default is 250000.
                      Setting this to zero disables this behavior.

  --exitport port     Any connection from localhost on this port
                      will cause us to exit.  The default is 9625.
                      Setting this to zero disables this behavior.
EOT
    exit;
}

# Copyright (c) 2002, Dan McDonald.  All Rights Reserved.
# This program is free software.  It may be used, redistributed
# and/or modified under the terms of the Perl Artistic License
# (see)
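The un-byte-stuff/byte-stuff pair in scan_mail above implements POP3 dot-stuffing (RFC 1939): in a multiline response, any line beginning with "." is sent with the dot doubled, so the proxy must undo that before handing the message to SpamAssassin and redo it before relaying. A minimal sketch of the same round-trip in Python (the function names are illustrative; matching on "\n." mirrors the \012-based regexes above):

```python
def unstuff(payload: bytes) -> bytes:
    """Undo POP3 dot-stuffing: a line that arrives as '..' really began with '.'."""
    return payload.replace(b"\n..", b"\n.")


def stuff(payload: bytes) -> bytes:
    """Re-apply dot-stuffing before sending the message back to the client."""
    return payload.replace(b"\n.", b"\n..")


body = b"+OK message follows\r\nA line.\r\n..a dotted line\r\n"
assert stuff(unstuff(body)) == body  # the two transforms round-trip
```

The round-trip holds because, in a correctly stuffed payload, every "\n." in the unstuffed form originally came from a doubled dot, which is exactly the invariant scan_mail relies on.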
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=193065
Customizing your avatar

A Flash virtual world is a social community in which players interact with each other and have their own identity. A virtual world usually lets a user decide the avatar's appearance by choosing a combination of different styles and colors.

Customizing different styles

Each part of the avatar will have different styles and shapes to form different combinations of the appearance of the avatar. Thanks to the timeline and movie clip features in Flash, we can put the different styles of each part within the movie clip. For example, the following screenshot shows the head movie clip with different head styles placed frame by frame, and we can use gotoAndStop to display the style we want.

Customizing the color

ActionScript supports changing the color transform for a given movie clip. It supports not only color tint but also applying a color filter and detailed RGB transformation. We will use a simple color tint to change the color of the avatar. As the color transform applies to the whole movie clip, we cannot simply tint the avatar movie clip, because that would tint the whole avatar to one solid color. In order to tint only part of the movie clip, we create a dedicated movie clip in each part and name it color_area. We later program the ActionScript to change all movie clips named color_area to the customized color.

Adding customization to the avatar class

We are going to change the style and color with ActionScript in the avatar class. We need to import the ColorTransform class from the flash.geom package to change the color with ActionScript.

import flash.geom.ColorTransform;

We need several instance variables to hold the style and color state.

public const totalStyles:Number = 3;
public var currentColor:Number = 0x704F4C;
public var currentStyle:Number = 1;

We wrap the whole block of color transform code into one function. The color transform adds an RGB color transformation to the target movie clip.
We only use colorTransform to tint the color here, but it also supports a percentage transform that adds partial color to the target movie clip. We will apply the color transform to the color area inside the head of the avatar in all 4 directions.

public function changeColor(newColor:Number = 0x000000):void {
    currentColor = newColor;
    for each(var avatar:MovieClip in _directionArray){
        var avatarColor:ColorTransform = new ColorTransform();
        avatarColor.color = newColor;
        avatar.head.color_area.transform.colorTransform = avatarColor;
    }
}

We modified the color by using the color transform, and used the timeline to style the avatar. Every frame in the head movie clip represents a style with its color tint area. We display the new style by changing the current frame of the avatar movie clip. It is also necessary to change the color again after switching the style, because every style contains its own color area.

public function changeStyle(styleNumber:int):void {
    for each(var avatar:MovieClip in _directionArray){
        /* display the given style in all parts of the avatar */
        avatar.head.gotoAndStop(styleNumber);
        avatar.body.gotoAndStop(styleNumber);
        avatar.lefthand.gotoAndStop(styleNumber);
        avatar.righthand.gotoAndStop(styleNumber);
        /* need to apply the color again after changing the style */
        var avatarColor:ColorTransform = new ColorTransform();
        avatarColor.color = currentColor;
        avatar.head.color_area.transform.colorTransform = avatarColor;
    }
    currentStyle = styleNumber;
}

The purpose of the avatar class is to control the appearance of the avatar. We just implemented the direction, color, and style switching methods, and it is now ready for the customization panel to use.

Designing a customization panel

Avatars in virtual worlds and games often provide players with different kinds of customization. Some games allow users to customize the whole body with lots of options, while some games may only provide two to three basic customizations.
The layout design of the customization panel is often based on the number of options. There are two common customization panel layouts in use. One layout displays arrows for a user to select the next and previous styles. The other displays a thumbnail view of the options within the same category.

The arrows selection layout is suitable for an avatar that has a limited number of customizable parts. There may be only two to four categories and not many options in each category. Players can easily loop through the different style combinations and choose their favorite one using this layout. The following avatar customization screenshot, from the 2D online RPG Dragon Fable, uses the arrows selection layout:

The thumbnail view layout is suitable for avatars that can be highly customized. There are often many categories to customize, and each category provides a lot of options for players to choose from. Some virtual worlds even provide micro modification so that players can adjust details of the chosen style, such as the distance between the eyes. Players do not need to iterate through the large number of styles and can quickly choose a style option among them with the thumbnail view. The following screenshot is an online Mii editor. Mii is the avatar system of the Nintendo Wii console, and this is an online clone of the Mii avatar customization. It allows a large amount of avatar customization through the thumbnail view layout, with extended features such as scaling and moving the elements.
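For readers outside Flash, the per-channel arithmetic behind ColorTransform is easy to state: each output channel is the input channel times a multiplier plus an offset, and setting the .color property (as changeColor does above) is shorthand for multiplier 0 and offset equal to the target channel value, i.e. a solid tint. A sketch of that arithmetic in Python (channel values assumed 0-255; the function names are illustrative, not part of the Flash API):

```python
def transform_channel(value, multiplier, offset):
    # Flash clamps the result to the displayable 0-255 range.
    return max(0, min(255, int(value * multiplier + offset)))


def solid_tint(rgb, tint):
    """Equivalent of setting ColorTransform.color: multiplier 0, offset = tint."""
    return tuple(transform_channel(v, 0.0, t) for v, t in zip(rgb, tint))


# Tinting any pixel with 0x70, 0x4F, 0x4C (the avatar's default brown)
# replaces it entirely, which is why only the color_area clip is tinted:
print(solid_tint((12, 200, 99), (0x70, 0x4F, 0x4C)))  # -> (112, 79, 76)
```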
https://hub.packtpub.com/customizing-avatar-flash-multiplayer-virtual-worlds/
This chapter is taken from the book "Programming Windows Phone 7" by Charles Petzold, published by Microsoft Press.

You can "stroke" these lines with a pen of a desired brush, width, and style. A series of connected lines and curves can also define an enclosed area that can be filled with a brush. A Silverlight program that needs to draw vector graphics uses classes defined in the System.Windows.Shapes namespace, commonly referred to as the Shapes library. This namespace consists of an abstract class named Shape and six sealed classes that derive from Shape.

Canvas and Grid

The Line class defines four properties of type double named X1, Y1, X2, and Y2. The line is drawn from the point (X1, Y1) to the point (X2, Y2) relative to its parent:

Although the Canvas panel seems like a natural for vector graphics, you'll get the same image if you use a single-cell Grid:

Normally when you use a Canvas you use the Canvas.Left and Canvas.Top attached properties to position elements within the Canvas. Those properties are not required with the Line because it has its own coordinates. You could use the attached properties with the Line, but the values are compounded with the coordinates:

Overlapping and ZIndex

Here are two lines:

The second one overlaps the first one. You can see that more clearly if you go beyond the default 1-pixel thickness of the line using StrokeThickness:

If you would prefer that the blue line be on top of the red line, there are two ways you can do it. You could simply swap the order of the two lines in the Grid:

Or, you could set the Canvas.ZIndex property. Although this property is defined by Canvas, it works with any type of panel:

Polylines and Custom Curves

The Line element looks simple, but the markup is a little bloated. You can actually reduce the markup for drawing a single line by switching from the Line to the Polyline:

Now let's suppose you want to use Polyline to draw a circle.
Commonly, a circle centered at the point (0, 0) with a radius R is defined as all points (x, y) that satisfy the equation:

x² + y² = R²

This is also, of course, the Pythagorean formula. Let's create a new project. Bring up the MainPage.cs file and install a handler for the Loaded event to allow accessing the dimensions of the ContentPanel grid. Here are calculations for center and radius for a circle to occupy the center of a content panel and reach to its edges:

Point center = new Point(ContentPanel.ActualWidth / 2,
                         ContentPanel.ActualHeight / 2 - 1);
double radius = Math.Min(center.X - 1, center.Y - 1);

Notice the pixel subtracted from the calculation of the radius. This is to prevent the circle from being geometrically the same as the content area size. The stroke thickness straddles the geometric line, so it would otherwise get cropped off at the edges. Now create a Polyline and set the Stroke and StrokeThickness properties:

Polyline polyline = new Polyline();
polyline.Stroke = this.Resources["PhoneForegroundBrush"] as Brush;
polyline.StrokeThickness = (double)this.Resources["PhoneStrokeThickness"];

Calculate the Point objects in a for loop based on the formulas I've just showed you and add them to the Points collection of the polyline:

for (double angle = 0; angle < 360; angle += 0.25)
{
    double radians = Math.PI * angle / 180;
    double x = center.X + radius * Math.Cos(radians);
    double y = center.Y + radius * Math.Sin(radians);
    polyline.Points.Add(new Point(x, y));
}

Now add the Polyline to the Grid:

ContentPanel.Children.Add(polyline);

And here's the result:

Now let's do something a little different.
Let's make the angle go all the way to 3600:

Here's the complete class:

namespace Spiral
{
    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();
            Loaded += OnLoaded;
        }

        void OnLoaded(object sender, RoutedEventArgs args)
        {
            Point center = new Point(ContentPanel.ActualWidth / 2,
                                     ContentPanel.ActualHeight / 2);
            double radius = Math.Min(center.X - 1, center.Y - 1);

            Polyline polyline = new Polyline();
            polyline.Stroke = this.Resources["PhoneForegroundBrush"] as Brush;
            polyline.StrokeThickness =
                (double)this.Resources["PhoneStrokeThickness"];

            for (double angle = 0; angle < 3600; angle += 0.25)
            {
                double scaledRadius = radius * angle / 3600;
                double radians = Math.PI * angle / 180;
                double x = center.X + scaledRadius * Math.Cos(radians);
                double y = center.Y + scaledRadius * Math.Sin(radians);
                polyline.Points.Add(new Point(x, y));
            }

            ContentPanel.Children.Add(polyline);
        }
    }
}

We use that scaledRadius value for multiplying by the sine and cosine values. Now the result is an Archimedean spiral:

Caps, Joins, and Dashes

When you're displaying thick lines, you might want a little different appearance on the ends of the lines. These are known as line caps ("caps" like a hat). The available caps are members of the PenLineCap enumeration: Flat (the default), Square, Round, and Triangle. Set the StrokeStartLineCap property to one of these values for the cap at the beginning of the line, and set StrokeEndLineCap for the cap at the end. Here are Round and Triangle capping off a 30-pixel line:

The difference between Flat and Square might not be obvious at first. To better clarify the difference, the following markup displays a thinner line over the thick line with the same coordinates to indicate the geometric start and end of the line:
My favorite caps are the rounded ones: As you can see, they also extend the rendered size of the line by half the stroke thickness. You can also specify what happens at the corners. Set the StrokeLineJoin property to a member of the PenLineJoin enumeration. Here's Round: The Miter join has a little built-in problem. If the lines meet at a very sharp angle, the miter can be very long. For example, a 10-pixel wide line that makes an angle of 1 degree will have a miter point over 500 pixels long! To avoid this type of weirdness a StrokeMiterLimit property kicks in for extreme cases: Here are two lines, one thick, one thin overlaying the thick line, with the same geometric points, going from the upper-left to the lower-left: You can make the line dashed by setting the StrokeDashArray, which is generally just two numbers, for example 1 and 1: If you want to draw a dotted line with actual round dots, obviously you want to use the Round dash cap, and you want each dot to be separated by its neighbor by the dot width. The StrokeDashArray required for this job is somewhat non-intuitive. It's a dash length of 0 and a gap length of 2: You can use a dotted line around an ellipse if you want: Polygon and Fill The Polyline that I've been using to demonstrate dotted lines is only three sides of a square: But if you set the Fill brush, the interior is filled as if the polyline describes a closed area: The NonZero fill rule is a bit more complex because it takes account of the directions that boundary lines are drawn. If the boundary lines drawn in one direction balance out the boundary lines drawn in the opposite direction, then the area is not filled. In any interior area of this star, however, all the boundary lines go in the same direction. Neither of these two FillRule options guarantees that all interior areas get filled. 
Here's a rather artificial figure that has an enclosed but unfilled area even with NonZero:

The Stretch Property

The only settable property defined by Shape that I haven't discussed yet is Stretch. This is similar to the same property in the Image element; you set it to a member of the Stretch enumeration, either None (the default), Fill, Uniform, or UniformToFill. Here's an innocent little Polygon with its Stretch property set to Fill.

©2014 C# Corner. All contents are copyright of their authors.
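As a closing aside to this excerpt, the parametric point generation used for both the circle and the Archimedean spiral is plain trigonometry, independent of Silverlight, so it can be checked in any language. A small Python sketch of the same loops (the center coordinates are example values standing in for the content panel's ActualWidth/ActualHeight):

```python
import math


def circle_points(cx, cy, radius, step=0.25):
    """Points on a circle, mirroring the Silverlight for-loop over degrees."""
    pts, angle = [], 0.0
    while angle < 360:
        rad = math.pi * angle / 180
        pts.append((cx + radius * math.cos(rad), cy + radius * math.sin(rad)))
        angle += step
    return pts


def spiral_points(cx, cy, radius, turns=10, step=0.25):
    """Archimedean spiral: the radius grows linearly with the angle."""
    pts, angle, total = [], 0.0, 360 * turns
    while angle < total:
        r = radius * angle / total  # the scaledRadius trick from the chapter
        rad = math.pi * angle / 180
        pts.append((cx + r * math.cos(rad), cy + r * math.sin(rad)))
        angle += step
    return pts


pts = circle_points(240, 360, 200)
# Every generated point satisfies x² + y² = R² about the center.
assert all(abs(math.hypot(x - 240, y - 360) - 200) < 1e-9 for x, y in pts)
```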
http://www.c-sharpcorner.com/UploadFile/8c85cf/two-dimensional-computer-graphics-or-vector-graphics-in-silverlight-for-windows-phone-7/
Pulse sensors are a common feature of fitness monitors, used to track your activity and cardiac fitness over time. These external monitors use the reflection and absorption of bright green/infra-red light to detect the pulse wave travelling down the artery, a technique called photoplethysmography (PPG). This same technique is used in hospital finger-clip monitors.

Wrist-worn devices like the Fitbit typically use green-light sensors, while the iPhone monitor uses a combination of green and infra-red light. The use of infra-red light for pulse monitoring has a longer history, and is the type used in hospital monitors, because (in combination with a red LED) it allows both heart rate and oxygen saturation sensing. In healthy individuals the oxygen saturation level shouldn't fall, so it's not particularly useful in a fitness tracker.

How it works

Haemoglobin, the oxygen-carrying component of red blood cells, is a strong reflector of red light, and a strong absorber of green light. Together these characteristics give oxygenated blood its red colour. The image below (adapted from Wikimedia) shows the relative absorption by blood of different wavelengths of light. Green, red and infra-red regions are highlighted. The differences in absorption between oxygenated (red) and deoxygenated (blue) blood can be used with paired red-IR LEDs to determine blood oxygenation percentage. See the MAX30100 sensor for an example.

During a pulse there is a wave of increased blood pressure through the arteries, slightly stretching the elastic arterial walls and bringing a pulse of highly oxygenated blood. This change in arterial size and increased concentration of blood is what is exploited to detect the pulse with PPG.

Green light is absorbed by red blood cells, but scattered by the tissues. Between pulses, scattered light will be reflected back out towards the incident light and can be detected.
However, during a pulse, the small increase in blood volume leads to an increased absorption of the green light. This results in a reduction of the reflected signal. We can detect this increased absorbance by the reduction in reflection of green light.

Red/infra-red light is reflected by red blood cells. Between pulses most light is transmitted and scattered into the tissues. During a pulse the small increase in blood volume leads to an increased reflection of light by red blood cells, and a reduction in transmission. We can detect this increased scattering of light either by the reduction in IR transmission, or alternatively by an increase in reflection.

Below we look at the main types and how to interface with them from different microcontrollers. You can also use one of these sensors to build a working heart monitor.

Analogue Sensors (KY-039/Pulsesensor.com)

Sensor types

IR-phototransistor sensors like the Keyes KY-039 use a bright infrared (IR) LED and a phototransistor to detect the reduced IR transmission or increased IR reflection during the pulse. These sensors are more suited to transmission use because of their construction.

Green-light phototransistor sensors like the Pulsesensor.com sensor function in reverse, using reflected, rather than transmitted, light to detect the pulse. This is done using a rather pleasant green LED, which when wired up looks a bit like a tripod from The War of the Worlds. By using green light these sensors detect the reduction in reflection due to blood absorbing the green light. The circuitry for these sensors is a little more sophisticated than a raw light sensor, with automatic amplification and noise cancellation features. This makes the subsequent HR calculation a little simpler. The sensors are similar to those found in wearable heart rate/fitness devices. See this tutorial for an example of building a heart rate monitor using a Pulsesensor.com sensor.

Reading analogue sensors

The principle for reading both types of sensor is the same.
With the sensor + connected to Vref (3.3V or 5V, depending on controller), and GND to GND, the measured value is readable via the S pin. The analogue output from this pin varies between GND (light completely blocked) and Vref (light completely passing). The variation caused by the pulse is a tiny fraction of that caused by putting your finger in front of the sensor, so calculating the pulse value requires a bit of code (see later).

Raspberry Pi

Wire the + on the sensor to +3.3V on your Pi. While the sensor can handle 5V itself, the input voltage sets the maximum output from the S pin, and more than 3.3V will damage your Pi's GPIO.

As the readings are analogue you will need an Analogue to Digital Converter (ADC) such as the MCP3008 between the sensor and your Pi. The Pi then communicates with the chip via SPI. Add the MCP3008 to a breadboard, with the notch facing up towards your breakout board. Using hardware SPI we can wire the MCP3008 up as follows:

The analogue and digital ground pins are wired together here. This is fine since we aren't looking for highly accurate readings. Wire the 3.3V and GND pins up first, leaving the SCLK, MISO, MOSI and CE0 pins to wire to your Pi. They connect to pins 11, 9, 10 and 8 respectively, with the first 3 in order up one side of the GPIO, and the last on the other side, level with the first pin you connected.

Once connected you can use the gpiozero.MCP3008() interface to read values out directly.

import gpiozero
hr = gpiozero.MCP3008(channel=0)  # Set the channel if you're using a different one

>>> hr.value
0.7663

To check that the sensor is responding correctly, try writing the values out in a loop (using end="\r" keeps our output on a single line).

while True:
    print("%4f" % hr.value, end="\r")

If you put your finger between the emitter and sensor while running this, you should see the number increase, indicating increased resistance as the light has to travel through your finger.
Somewhere buried in the noisy variation you see is your heart beating.

MicroPython (ESP8266 inc. Wemos D1)

If you're running on an ESP8266 device you have access to an ADC pin, labelled as A0 on the Wemos D1. Once your sensor is wired to this pin you can use the following MicroPython code to read the value from the sensor:

import machine
adc = machine.ADC(0)

>>> adc.read()
550

Note: The values output by this ADC are integers in the range 0-1023. A reading of 550 is equivalent to 0.54 on a 0-1 scale.

Below is a plot of data from the Pulsesensor.com sensor, as read using a Wemos D1. Notice that the Pulsesensor.com sensor auto-calibrates the reading range.

BBC micro:bit

The micro:bit can read analog values on pins 0, 1, 2, 3, 4 and 10.

from microbit import *

>>> pin0.read_analog()
550

Note: The values output by this ADC are integers in the range 0-1023. A reading of 550 is equivalent to 0.54 on a 0-1 scale.

Digital sensors (MAX30100/RCWL-0530)

Digital sensors based on the MAX30100 chip offer both heart rate sensing and oximetry (oxygen saturation measurements). These include 2 LEDs (IR and visible red light) to detect pulse and oxygen saturation levels respectively. Oximeters pair two light emitters and receivers working at slightly different wavelengths. Using the different light absorbance profiles of oxygenated and deoxygenated blood, it is possible to determine the level of O2 in the blood in addition to the pulse.

Reading digital sensors

The MAX30100 chip provides an I2C interface (documented in its datasheet) to read the resulting data, which can be connected directly to any I2C-supporting microcontroller. It is available at I2C address 0x57 (87).

There was no simple interface available for connecting with a MAX30100-based sensor from a Raspberry Pi (the Intel IoT UMP project looks promising, but doesn't build on the Pi as of September 2017). As a stop-gap I re-implemented a C library in Python. The source code is available.
This currently works with Raspberry Pi only, although adapting it to MicroPython should be as simple as writing an adaptor for the I2C API.

You can create an interface to your device/chip by creating an instance of the MAX30100 class.

import max30100
mx30 = max30100.MAX30100()

To read the sensor use .read_sensor(). This stores the current measurement, making the value available under two properties. Historic values are kept in a queue.

mx30.read_sensor()
# The latest value is now available by .ir
mx30.ir

You can switch on O2 sensing as follows.

mx30.enable_spo2()

Subsequent calls to .read_sensor() will now populate the .red attribute.

# The latest value is now available by .ir and .red
mx30.ir, mx30.red

For more information on using the max30100 package, including the O2 saturation sensor, see the documentation on GitHub.

Calculating a heart rate

Regardless of what sensor you use, you will end up with the same output: a stream of values indicating transmission or reflection of a particular wavelength of light. There are a number of ways to calculate the heart rate from this data, but peak-counting is a simple and effective method. For a working example of how to do this see this Wemos D1 heart rate monitor.
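The peak-counting approach mentioned above can be sketched simply: optionally smooth the samples, then count upward crossings of a threshold (one per pulse) and convert the observation window into beats per minute. A minimal illustration in Python (the synthetic sample stream and the fixed threshold are made up for the example; a real monitor would derive the threshold from the recent signal range):

```python
def smooth(samples, alpha=0.3):
    """Exponential moving average to knock down sensor noise."""
    out, level = [], samples[0]
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out


def count_beats(samples, threshold):
    """Count upward crossings of the threshold, one per pulse."""
    beats, above = 0, samples[0] > threshold
    for s in samples[1:]:
        if s > threshold and not above:
            beats += 1
        above = s > threshold
    return beats


def bpm(samples, threshold, sample_rate_hz):
    seconds = len(samples) / sample_rate_hz
    return 60 * count_beats(samples, threshold) / seconds

# A synthetic signal with one pulse per second, sampled at 10 Hz for
# 2 seconds (counted on the raw samples here, since smoothing would
# also lower the fixed threshold needed):
wave = [0.5, 0.5, 0.5, 0.9, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5] * 2
print(bpm(wave, 0.7, 10))  # -> 60.0
```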
https://www.twobitarcade.net/article/heartrate-sensor-modules/
Thanks to the efforts of Christopher Molnar, KDE mailing lists are now accessible read-only through NNTP at news.uslinuxtraining.com (news, web). In other words, KDE Mailing Lists are now neatly accessible through your favourite newsreader (KNode, tin). The lists are also available through anonymous IMAP at the same server. See the official mailing list page for more details. In related news, Jono Bacon wrote in to point us to the new KDE::Enterprise Forum over at KDE::Enterprise. The forum provides a place for discussions regarding the use of KDE within businesses, education, charities, and so on. Enjoy. The best thing about this is that they've splitted up the kde cvs mailing list so you can watch the commits for your favourite project only I really hate reading mailing lists.. the news format is much better IMHO. (to bad it's read only) > I really hate reading mailing lists.. > the news format is much better IMHO. >(to bad it's read only) I am working on solving that. Please let me know which specific groups you would like to have post access on and I will try to get those first. -Chris KDE Page: > Anonymous IMAP is available at the same server. Set login to 'anonymous, password 'you@email.address', folder to '#news' and server to 'news.uslinuxtraining.com'. When I add #news to prefix directory in KMail it says: "Could not enter directory" imap://anonymous@news.uslinuxtraining.com:143/#news/;TYPE=LIST What am I doing wrong? (Insert new brain is not an option ;) You are most likely using a too old version of KMail. Sorry, there was just lask week a problem fixed in CVS to support news groups on this special server that didn't become visible before. > You are most likely using a too old version of KMail. KMail that came with KDE 2.2.2. So yep. OH well, I'll wait for Debian to include the fix (lazy me). damn, i've got the same problem is it just me or is there no kde-core-devel on that server ? 
Try this: nntp://news.uslinuxtraining.com/kde.kde-core-devel You can type that directly into the location of konqueror if your newsreader is buggy. i was using knode. seems kde-core-devel came later, now i did 'refresh group list' and kde-core-devel shows up. thanks anyway Great ! Gorgeous !! When will we get a nntp://news.kde.org ?? It would be so nice to be able to discuss KDE right here ! (html forums are nice but so heavy...I'd love to use Knode !!) G. Why? If you want to post why not use the KDE newsgroups in Usenet? comp.windows.x.kde The K Desktop Environment. (english) de.comp.os.unix.apps.kde Das K Desktop Environment. (german) They should be out there on every news server near you. well... It seems that newsgroups are slowly but surely gliding toward a large number of small dedicated newsservers rather than this bloated awful beast of shared namespace, where a cow wouldn't find it's calf... There are a couple of good reasons for that : 1) an url such as news.kde.org is actually far simpler to recall than comp.tr.deadly.burried.projects.kde. 2) there are no standard way of classifying a topic inside the shared tree (e.g: why is it comp.WINDOW$.x.kde, and not comp.unix.kde? comp.unix.cde exist, and it is a quite similar subject... and why linux.debian.gtk.gnome ? :-) so you often end with 3 or 4 newsgroups for the same topic, and you cross-post to all because, indeed, all claim to be THE most relevant newsgroup. In contrast, you KNOW that news.kde.org is the most relevant place. 3) It takes ages, and a lot of will, and a lot of efforts to have ONE newsgroup opened and widely spreaded, so you end with just one big KDE group. In contrast, a dedicated server may have 10th of groups - with fine grained topics. Imminent Death of Usenet Predicted! PNGs at 11. This will be really neat, as this is already possible for Mozilla mailing lists ! the Linux kernels and many other projects too ? No.
https://dot.kde.org/comment/94467
I am a Full Stack developer with experience building websites and web applications.

Without any doubt, the use of React is increasing day by day, and APIs (Application Programming Interfaces) are gaining more popularity in the modern era of technology. This article will guide a beginner in connecting React to an API.

Assume that we want to fetch a list of pictures when a page gets loaded. First of all, we need an API that provides such information. The most used and easiest-to-access API I have found is Unsplash. Now the main question: how is our application going to work? Without any delay, let's hop onto it.

Our app is going to run in a web browser. As our page gets loaded, it will make an ajax (network) request over to the Unsplash API. Unsplash is a company that provides free images for developers. The API takes that request and returns a big list of JSON data based on the keyword attached to it. Then we can take the required info from that JSON data and use it in our app.

As we know the basic flow of our project, let's complete some initial setup. First of all, we need a dedicated project to contain all of our code, so let's create a React project using the create-react-app command. I am going to call my project 'learn-api', so on the terminal we have to execute create-react-app learn-api. This will take some time to set up a default React app for us. I love to use Visual Studio Code as a code editor; you can use any one.

As we create our project, we will see some default files and folders. We are gonna start from scratch, so let's delete the files inside the src folder. Create a file called index.js inside src and add the following lines inside it:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App';

ReactDOM.render(
  <App />,
  document.querySelector('#root')
);

Create a folder called components and inside that create the file App.js.
Also, fill it with the following lines of code:

import React from 'react';

const App = () => <div>App</div>;

export default App;

On the terminal, run yarn start to execute the script file. If you see 'App' rendering in your browser, bravo, we have configured our app properly.

Now let's have a look at our mock-up design. Our main purpose is to fetch data from the API, so I will keep the UI simple and will use Bootstrap to style the page. To use Bootstrap, on the terminal run yarn add bootstrap, and at the top of the index.js file add the line import 'bootstrap/dist/css/bootstrap.min.css';. That's it, Bootstrap has been added to our app.

In our mock-up, we can see mainly one component, ImageList. If we want, we can create another sub-component called Image. Let's start with the core one. Our App component is going to contain the ImageList component. Let's generate it. Create a file called ImageList.js inside the components folder and fill it with the following code:

import React from 'react';

const ImageList = () => {
  return <div>ImageList</div>;
};

export default ImageList;

We have to integrate ImageList with our App. To make that happen, we have to update our App.js:

import React from 'react';
import ImageList from './ImageList';

const App = () => {
  return(
    <div>
      <ImageList />
    </div>
  );
};

export default App;

In our browser, we should see ImageList. So far it seems simple. Now we have to dive a little bit deeper.

At this point, we will talk a little about React State and Lifecycle. The state is the heart of every React component and controls its behavior. In short, the State of a component is an object that contains information that may change over the lifetime of that component. To use state we have to convert our functional component into a class-based component. (Technically, we can use State in a functional component by using the hooks system, but we are not going to do that.) Now the main question: why do we need state in our app?
Well, because the task we need to cover matches the specification of State. As our page gets loaded, we want to store the list of images; after that, we want to render them on the page. In summary, our page has a lifetime (when it gets loaded) and data we need to access after loading, which matches the definition of State.

A component lifecycle method is a function that we can optionally define inside our class-based component. If we implement one in our component, it will get executed at some point. There are mainly three methods. In our scenario, we only need the componentDidMount method, because we want our page to get rendered first and then the network request to happen. After that, we want to place the images on our page. For that, the componentDidMount method is a perfect fit.

So first things first, let's convert our App component from a functional component to a class-based component. We need a constructor to assign our initial state; since initially we don't have any list of images, we will initialize our state with an empty list. We also need a method called componentDidMount (which will handle the task of fetching data from the API). For now, it will just print output to the browser console. If we see 'I will get called once the page gets loaded' in the console, for sure, we have achieved our first milestone. To get a similar output, we have to add the following code inside App.js:

import React from 'react';
import ImageList from './ImageList';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {imageList: []}
  }

  componentDidMount() {
    console.log("I will get called once the page gets loaded");
  }

  render() {
    return(
      <div>
        <ImageList />
      </div>
    );
  }
}

export default App;

We haven't talked about the API yet. Let's have a look at it. We need to look at the official page of the API. The first thing we need to do is register as a developer. Once you are logged in you have to create a project.
If you click on your apps, you will find an option to create one. They will ask you to accept some rules, and we are going to accept them. Once you click on accept terms, it's going to ask you for your app name and a description. As you fill in the fields you will be redirected to another page. If you scroll down a little bit further you will see the keys section. There we will find the Access Key.

As we have the access key, all we need to know now is how to use that key, so let's click on the documentation. Our target is to fetch photos based on a search that matches the text we provide, and the documentation will show you how to do it.

To use it in our page, we are going to use a third-party package, because it is not the job of the React library itself to make requests to the Unsplash API. For us, Axios is going to take care of that. To add it we need to stop the running project. On the terminal run yarn add axios; it will install Axios in our project.

We want to fetch data in our App component, so first things first, we have to import Axios. By convention, we put third-party imports above the files we have created. Once we import it, we have to make the request to the API. If we check the documentation it says we have to make a GET request to the endpoint 'search/photos'. In Axios, we make 'get' requests with axios.get(), which takes two arguments: the address of the API and a set of options. The location of the API is ''. The final endpoint for us is ''. If we want to get data, we have to authenticate first. The options to authenticate can be found in the Authorization section. I am going to use the header option for authorization purposes. In headers, we will create an object with an Authorization key and will paste our access key, concatenated with 'Client-ID'. We also have to add a params object with a key of query, whose value will be 'flower'.
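Before wiring this into React, the request just described can be sketched in a framework-independent way. The sketch below (Python, purely for illustration) assembles the same pieces: the search/photos endpoint, a query param, and the Client-ID Authorization header. Note the article leaves the base URL blank; https://api.unsplash.com here is taken from Unsplash's public documentation, and YOUR_ACCESS_KEY is a placeholder, so verify both against the current Unsplash docs before relying on them.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed values: the base URL comes from Unsplash's public docs (the article
# leaves it blank), and YOUR_ACCESS_KEY stands in for your app's real key.
BASE_URL = "https://api.unsplash.com/search/photos"
ACCESS_KEY = "YOUR_ACCESS_KEY"

def build_search_request(query):
    # Same shape axios builds from { params: {...}, headers: {...} }:
    # params become the query string, headers ride along on the request.
    url = BASE_URL + "?" + urlencode({"query": query})
    headers = {"Authorization": "Client-ID " + ACCESS_KEY}
    return Request(url, headers=headers)

req = build_search_request("flower")
print(req.full_url)  # https://api.unsplash.com/search/photos?query=flower
```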
So our componentDidMount method will look something like this (paste your access key in place of YOUR_ACCESS_KEY):

componentDidMount() {
  axios.get('', {
    params: { query: 'flower' },
    headers: {
      Authorization: 'Client-ID YOUR_ACCESS_KEY'
    }
  });
}

If you want to look at what is happening as we update the function: first, in your browser open the developer tools and click on the network section. Then press `ctrl + R` to record the reload. As you reload the page, you will see a field called `photos`. If you click on it, it will show you a set of fields. If you see that, congrats, we have covered a huge task.

Now we need to extract that info and use it to render a list of images. But there is a new problem: we have to wait until the network request returns the data, and only after that can we render the ImageList. This can be covered using async and await syntax. We will use async before componentDidMount and await before axios.get, and will save the result in a variable called 'response'. All we need to do then is pull the targeted value out of the response; for us, the targeted value is 'results'. As we get the value, we have to update the state using the setState function. Now we have a list of images to render, and we will pass this list as a prop to the ImageList component. The final look of our App.js will be:

import React from 'react';
import axios from 'axios';
import ImageList from './ImageList';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {imageList: []}
  }

  async componentDidMount() {
    const response = await axios.get('', {
      params: { query: 'flower' },
      headers: {
        Authorization: 'Client-ID YOUR_ACCESS_KEY'
      }
    });
    this.setState({ imageList: response.data.results });
    console.log(this.state.imageList)
  }

  render() {
    return(
      <div className="container mt-4">
        <ImageList images={ this.state.imageList }/>
      </div>
    );
  }
}

export default App;

We have to accept the props in our ImageList component as an argument.
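The response handling above (wait for the request, pull results out of the body, store them as state) can be sketched without React at all. The nesting is the key point: axios wraps the parsed JSON body in a data attribute, and Unsplash nests the photos under results. The sketch below (Python, for illustration only) mimics that shape with plain dictionaries and a stand-in for setState; the sample field values are invented.

```python
# Mimic the shape of an axios response to the Unsplash search endpoint:
# axios wraps the parsed JSON in "data"; Unsplash nests photos in "results".
# The concrete ids and URLs below are made-up sample values.
fake_response = {
    "data": {
        "total": 2,
        "results": [
            {"id": "a1", "urls": {"small": "https://example.com/a1-small.jpg"}},
            {"id": "b2", "urls": {"small": "https://example.com/b2-small.jpg"}},
        ],
    }
}

state = {"imageList": []}  # initial state, like this.state = {imageList: []}

def set_state(new_values):
    # Stand-in for React's this.setState: merge new values into the state.
    state.update(new_values)

# Equivalent of: this.setState({ imageList: response.data.results });
set_state({"imageList": fake_response["data"]["results"]})
print(len(state["imageList"]))  # 2
```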
Destructuring the prop makes it easy to access. Now, all we need to do is decorate the page. You can do it based on your preferences; I am leaving it as simple as possible. The outcome of the ImageList component will look something like:

import React from 'react';

const ImageList = ({ images }) => {
  console.log(images);
  if(images.length > 0) {
    return (
      <div className="row">
        {images.map(element => {
          return(
            <img key={element.id} src={element.urls.small} alt="" />
          );
        })}
      </div>
    );
  }
  return <div className="bg-warning text-center">No Images to show</div>;
};

export default ImageList;

If you follow all the instructions properly, you will see a page similar to the mock-up. Awesome, we have completed the task that we took as our mission: a long journey, from fetching data to rendering images on the page.

Well, playing with an API in React is not a difficult job. Whatever the API or the task you need to cover, the basic flow for playing with data is similar, and the more we divide our task into smaller milestones, the easier it becomes.
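The rendering step in ImageList is just a map from photo objects to <img> tags using urls.small, with a fallback message for an empty list. A minimal stand-in (Python again, building the markup as plain strings rather than JSX) makes that transformation explicit; the sample URL is invented.

```python
def render_image_tags(images):
    # One <img> per photo, like images.map(...) in ImageList; falls back to
    # a warning message when the list is empty, mirroring the component.
    if images:
        return ['<img src="%s"/>' % img["urls"]["small"] for img in images]
    return ["<div>No Images to show</div>"]

tags = render_image_tags([{"urls": {"small": "https://example.com/p.jpg"}}])
print(tags[0])  # <img src="https://example.com/p.jpg"/>
```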
https://hackernoon.com/how-to-fetch-data-from-an-api-in-reactjs-j11q34k0
swish

A semantic web toolkit.

Module documentation for 0.10.0.1: there are no documented modules for this package.

I attempt to keep Swish buildable on recent GHC versions - at present back to GHC 7.4, although not actually tested on such - but it is done on a best-effort basis, so is not guaranteed.

Copyright 2018 Doug Burke.

Haskell and the Semantic Web

Other Haskell packages for RDF support include:

Installation

The following commands will install a command-line tool Swish along with the modules in the Swish namespace; documentation can be found on Hackage.

With cabal

Install a recent version of the Haskell platform and then try:

% cabal update
% cabal install swish

With stack

Swish is available as part of the stackage curated package set (at least it is available in lts-11.1 to lts-12.2, the latest version at the time of writing). There are several stack configuration files, for different GHC versions:

% cd swish
% stack install
% STACK_YAML=stack-8.2.yaml stack install
% STACK_YAML=stack-8.0.yaml stack install
% STACK_YAML=stack-7.10.yaml stack install
% STACK_YAML=stack-7.8.yaml stack install

Changes

0.10.0.1: Bump to support polyparse and its GHC 8.6 compatibility.

0.10.0.0: Updated packages to support building with ghc 8.6 (increase base and containers limit). The reason for the version bump is that there has been some internal build cleanup and removal of un-needed imports (thanks to weeder), which was done a while ago and I now forget whether there were any changes due to imported instances, so it is safer to increase the minor version number. There has been *no* change to functionality. The default stack.yaml file is now ghc 8.4, and a ghc 8.2 version has been added.

0.9.2.1: Updated the minimum base package to 4.5.0.0 (GHC 7.4.1), and removed some old code for supporting older GHC/package versions. There are some warnings when compiling with GHC pre version 8 (redundant imports) which I am currently too lazy to fix, but patches are welcome.
Support for building with older versions of GHC is on a best-effort basis. Applied some HLint suggestions. Updated to allow time-1.9.1 (but not time-1.9.0). Updated to lts 11.1 in stack.yaml.

0.9.2.0: Initial support for ghc 8.4. Updated to lts 10.8 in stack.yaml so was able to drop the extra intern dependency version. Now builds with -Wcompat if ghc 8.0 or later is used.

0.9.1.10: Updated the upper bound on time to work with ghc 8.2.1. Updated stack.yaml to use lts 9.1.

0.9.1.9: Updated the upper bound on HUnit. This only affects the tests. Finally added a stack.yaml file (partly addressing issue #27).

0.9.1.8: Updated the upper bound on polyparse and directory. Fixed build failure of RDFGraphTest with time >= 1.6. Updated the copyright years in README.md.

0.9.1.7: Turns out one of the constraints is needed by GHC 7.6.

0.9.1.6: Updated dependencies to support GHC 8.0 (RC1) and updated the code to remove the un-needed constraints pointed out to me by the new compiler.

0.9.1.5: Removed the developer flag as it causes problems with 'cabal upload' and added 2015 to the copyright statement in the cabal file. There are no code changes compared to 0.9.1.3.

0.9.1.4: Updated the upper bound on the semigroups and HUnit dependencies. This never got released on hackage due to a disagreement between myself and Cabal.

0.9.1.3: Updated ghc 7.10 support.

0.9.1.2: Updated the upper bound on the polyparse dependency and fixed an error in the Swish.RDF.Query documentation (issue #25). Initial support for ghc 7.10. The list of changes has been removed from the cabal file since it is in the CHANGELOG.

0.9.1.1: Updated the upper bound on the semigroups and polyparse dependencies.

0.9.1.0: The network-uri flag has been added, which uses the network-uri package to provide Network.URI. This has led to the minimum version of the network package being bumped to 2.4.0.0 to avoid some CPP issues during building. The Network.URI.Ord module now really does nothing.
Updated the upper bound on the text dependency (requires an update to intern to use). Updated the upper bound on the time dependency to 1.5 (at this time the test suite will not build due to a failing dependency, so it is untested).

0.9.0.15: Updated the upper bound on the semigroups dependency. Renamed README to README.md.

0.9.0.14: Updated the upper bound on the network dependency.

0.9.0.13: Updated the upper bound on the semigroups dependency (Issue #20).

0.9.0.12: Updated the upper bound on the text dependency to include version 1.1 (requires an update to semigroups to use).

0.9.0.11: Hackage has a better chance of finding the CHANGELOG file if it is included in the distribution! There are no changes to the library or examples.

0.9.0.10: Renamed CHANGES to CHANGELOG in the hope it will get picked up by Hackage. There are no changes to the library or examples.

0.9.0.9: Updated the upper bound on the semigroups dependency (thanks to Leif Warner). Updated the upper bound on the text dependency (however, requires changes to intern and semigroups to use).

0.9.0.8: Updated the upper bound on the semigroups dependency (thanks to Leif Warner). Display the library version when running the runw3ctests executable and ensure there is a space between file name and test result in the screen output.

0.9.0.7: Minor documentation fix for Swish.RDF.Graph.quote.

0.9.0.6: Turtle/N3 output: \f characters in literals are converted to \u000C to work around some interoperability issues. Changed the test handling to use test-framework.

0.9.0.5: Turtle/N3 output: more changes for string literals containing double-quote characters. N3 Parser: parsing of string literals within three quote marks has been updated to match the Turtle parser. Turtle Parser: a few more error messages have been improved.

0.9.0.3: Fix minor Haddock issue with Swish.RDF.Parser.Utils.appendURIs.

0.9.0.2: Updated the upper bound on the polyparse dependency.
0.9.0.1: Updated the upper bound on the semigroups dependency (thanks to Leif Warner).

0.9.0.0: The module now builds against version 1.2 or 1.1 of the hashable package; as there have been significant changes to hashable in the 1.2 release I have decided to bump up the version number to 0.9 as a precaution. Fix failing test on 64-bit GHC (no library change). Very minor hlint changes.

0.8.0.3: Remove hashable restriction (rewrite tests), remove binary constraint as it was apparently unneeded. Updates to support network version 2.4.0.0: the package now provides an Ord instance for Network.URI so Network.URI.Ord is now a no-op if network >= 2.4.0.0 is installed; fix up code to handle the change to the API of relativeTo.

0.8.0.2: Restrict hashable to < 1.1.2.4 to try and avoid test failures. This is a hack and the tests should be updated. Updated directory constraint to allow 1.2 on ghc 7.6.

0.8.0.1: Internal changes to Turtle/N3 formatting. No user-visible changes.

0.8.0.0: The LDGraph class now uses Set (Arc lb), rather than [Arc lb], for setArcs, getArcs, and update. Several data types - e.g. NSGraph - now use sets rather than lists. There are a number of API tweaks - e.g. the addition of Ord constraints and the removal of Functor, Foldable, and Traversable instances. Not all lists of Arcs have been converted since a review is needed to see where it makes sense and where it does not. This definitely speeds up some operations but a full analysis has not been attempted. Replaced use of Data.LookupMap with Data.Map.Map. This has led to the removal of a number of language extensions from some modules. Added Network.URI.Ord to provide an ordering for URIs. A few other minor changes have been made: the removal of subset and equiv from Swish.Utils.ListHelpers; the ordering used for RDFLabel values has changed; added a Monoid instance for VarBinding; added Ord instances for a number of containers; removed some un-needed constraints; added Network.URI.Ord.
The containers upper limit has been increased to support version 0.5.

0.7.0.2: Swish.QName.LName now requires all characters to be ASCII. This avoids downstream problems later when trying to convert a QName to a URI.

0.7.0.1: URI parsing has changed slightly. The only user-visible change is that error messages will be slightly different, in particular when given an IRI in Turtle or NTriples format. Unfortunately IRIs are still not supported.

0.7.0.0: For code that uses the Swish script language, the main change is to import Swish rather than Swish.RDF.SwishMain, and to note that the other Swish.RDF.Swish* modules are now called Swish.*. For code that uses the graph library, the main changes are that Swish.RDF.RDFGraph is now called Swish.RDF.Graph, the Lit constructor of RDFLabel has been split into three (Lit, LangLit, and TypedLit) and a new LanguageTag type introduced, local names now use the LName type (previously they were just Text values), and the parsers and formatters have been renamed to Swish.RDF.Parser.* and Swish.RDF.Formatter.*.

- Moved a number of modules around: generic code directly into Swish and the Swish.RDF.RDF* forms renamed to Swish.RDF.*. Some modules have been moved out of the Swish.Utils.* namespace. Generic modules have been placed into the Data.* namespace. The Swish.RDF.Swish modules have been moved to Swish.* and Swish.RDF.SwishMain has been removed; use Swish instead.
- Parsing modules are now in the Swish.RDF.Parser hierarchy and Swish.RDF.RDFParser has been renamed to Swish.RDF.Parser.Utils.
- Formatting modules are now in the Swish.RDF.Formatter hierarchy.
- RDF literals are now stored using the Lit, LangLit, or TypedLit constructors (from RDFLabel) rather than using just Lit. Language codes are now represented by Swish.RDF.Vocabulary.LanguageTag rather than as a ScopedName.
- Local names are now represented by the Swish.QName.LName type rather than as a Text value.
A few routines now return a Maybe value rather than error-ing out on invalid input.

- Make use of Data.List.NonEmpty in a few cases.
- Removed mkTypedLit from Swish.RDF.RDFParser; use Swish.RDF.RDFDatatype.makeDataTypedLiteral instead.
- Removed asubj, apred and aobj from Swish.RDF.GraphClass and Swish.RDF.RDFGraph; use arcSubj, arcPred or arcObj instead.
- Clarified that Swish.RDF.RDFDatatypeXsdDecimal is for xsd:decimal rather than xsd:double.
- Removed the containedIn element of the LDGraph type class as it was un-used. The arguments to setArcs have been flipped, replaceArcs removed, add renamed to addGraphs, and emptyGraph added.
- Removed various exported symbols from a range of modules as they were un-used.
- Use Word32 rather than Int for label indexes (Swish.GraphMatch.LabelIndex) and in the bnode counts when formatting to N3/Turtle.
- Minor clean up of the LookupMap module: mergeReplaceOrAdd and mergeReplace are now combined into mergeReplace; mapSelect, mapApplytoAll, and mapTranslate* have been removed; documentation slightly improved; and a few minor internal clean ups.
- Removed the Swish.Utils.MiscHelpers module and moved single-use functionality out of Swish.Utils.ListHelpers.
- Removed partCompareOrd, partCompareMaybe, partCompareListOrd and partCompareListPartOrd from Swish.Utils.PartOrderedCollection.
- Do not define swap if using GHC >= 7.0.1.
- Bump the upper constraint on the containers package to include version 0.5.
- Support version 0.9 of intern using conditional compilation in Data.Interned.URI (for ghc 7.4).
- Switch to Control.Exception.try from System.IO.Error.try to avoid complaints from ghc 7.4.2.

0.6.5.3:
- Updated cabal file so that tests depend on the library rather than individual modules.
- fix intern to 0.8 for ghc < 7.4 and 0.8.* for greater

0.6.5.2:
- Upgrade polyparse upper limit to include version 1.8.

0.6.5.1:
- Haddock fixes.

0.6.5.0:
- Dependency updates (updated filepath and removed array).
0.6.4.0:
- Added support for xsd:decimal with Swish.RDF.RDFDatatypeXsdDecimal and Swish.RDF.MapXsdDecimal, thanks to William Waites.

0.6.3.0:
- Added Swish.RDF.Vocabulary.SIOC.

0.6.2.1:
- hackage build fixes only

0.6.2.0:
- Split out vocabularies into Swish.RDF.Vocabulary.DublinCore, Swish.RDF.Vocabulary.FOAF, Swish.RDF.Vocabulary.Geo, Swish.RDF.Vocabulary.OWL, Swish.RDF.Vocabulary.RDF, and Swish.RDF.Vocabulary.XSD (some of these are new, and additional terms were added to the pre-existing vocabularies).
- Minimum Cabal version has been updated to 1.9.2 as the tests have been converted to take advantage of the support now in Cabal; this means that the tests flag has been replaced by the --enable-tests option.
- Change to the internal labelling of the RDFS container-property axioms in Swish.RDF.RDFProofContext.
- Moved src/ to app/, and took advantage of the emptiness to move the library code into src/.

0.6.1.2:
- corrected mtl constraint from >= 1 to >= 2 as the code does not build with mtl=1 and increased the upper limit on time to 1.4.

0.6.1.1:
- minor improvements to the error message when parsing Turtle, N3 or NTriples format; a fragment of the remaining text to be parsed is included to provide some context for the user (it is still not as useful as the error message provided when parsec was being used).

0.6.1.0:
- add support for reading and writing Turtle format and the Swish.RDF.TurtleFormatter and Swish.RDF.TurtleParser modules. No tests are provided at this time, although the parser handles most of the W3C test files; the issues include Network.URI not recognizing some IRIs and issues with equating XSD decimal and double values due to canonicalization.

0.6.0.1:
- use the hashing interface provided by Data.Hashable rather than Swish.Utils.MiscHelpers.

0.6.0.0:
- use the intern package to create Data.Interned.URI and use this within QName to speed up the equality check.
0.5.0.3:
- Missed a FlexibleInstances pragma for ghc 7.2.

0.5.0.2:
- Removed random and bytestring package constraints and moved the HUnit constraint to only apply if the tests flag is used. Added a FlexibleInstances pragma for ghc 7.2 compatibility.

0.5.0.1:
- updated package constraints in the cabal file to try and support building with ghc 7.2. No code change.

0.5.0.0:
- The constructors for ScopedName and QName have been removed in order to allow a simple experimental optimisation (partly added in 0.4.0.0). A similar change has been added for Namespace (but no optimisation).

0.4.0.0:
- Use polyparse rather than parsec-2 for parsing. As part of this, the parsing is done using Text rather than String values, where sensible. Some parsing should be faster now, but that is more due to a better use of parser combinators than differences in the parsing library used. No serious attempt at optimisation has been made. Unfortunately the error messages created on invalid input are significantly less helpful than in the previous version.
- removed Swish.Utils.DateTime and Swish.Utils.TraceHelpers
- removed the following exported symbols from Swish.Utils.LookupMap: mapSortByKey, mapSortByVal
- removed the following exported symbols from Swish.Utils.ListHelpers: allf, anyf, combinations, ffold, hasPartitions, mapset, pairsUngroup, powerSequences, powerSequences_inf
- removed the following exported symbols from Swish.Utils.MiscHelpers: assert, stricmp, lower, quote
- removed _ from exported symbols; the conversion is mainly to camel case but some may retain lower case parts (e.g. xxx_1_1_inv to xxx11inv).
- Namespace now uses Maybe Text to store the optional prefix rather than an ad-hoc ""/"?" solution, and URI rather than String for the URI. The local part of ScopedName is now Text. QName also uses URIs and no longer exports the constructor, so newQName or related should be used to create QNames.
We have currently lost N3 formatting of the default prefix (any default prefix elements end up getting written out fully qualified). The output is still valid N3 but perhaps not what you'd expect.

0.3.2.1:
- use foldl' rather than foldl in some modules
- Swish.Utils.*: marked routines as deprecated

0.3.2.0:
- Swish.RDF.N3Parser: the parser no longer has a set of pre-defined namespaces, with the aim of reducing un-needed statements on output. There is no API change worthy of a bump of the minor version but the behavioural change is large enough to make it worth while.

0.3.1.2:
- Swish.RDF.RDFGraph: toRDFGraph now uses the supplied labels to set up the namespace rather than using an empty namespace map.
- minor documentation improvements.

0.3.1.1:
- N3 I/O: strings ending in a double-quote character are now written out correctly. xsd:double values are not written using the XSD canonical form/capital E but using a lower-case exponent. On input, xsd:double literals are converted to XSD canonical form (as stored in 'RDFLabel'), which can make simple textual comparison of literals fail.
- RDFLabel: the Eq instance now ignores the case of the language tag for literals. The Show instance uses the XSD canonical form for xsd:boolean, xsd:integer, xsd:decimal and xsd:double literals.
- The ToRDFLabel and FromRDFLabel classes replicate existing functionality in Swish.RDF.RDFDatatype; this should be cleaned up.
- added xsd_dateTime export to Swish.RDF.Vocabulary. - added Swish and Swish.RDF modules for documentation purposes. - marked Swish.Utils.DateTime as deprecated. - the N3 formatter now writes true, 1, 1.2, 1.2e34 rather than "true"^^xsd:boolean, "1"^^xsd:integer, "1.2"^^xsd:decimal and "1.2e34"^^xsd:double. - improved test coverage. 0.3.0.3: - changed scripts/SwishExample.ss script so that the proof succeeds. - minor documentation improvements, including the addition of the Swish script format in Swish.RDF.SwishScript. - minor changes to behavior of Swish in several edge cases. 0.3.0.2: - bugfix: stop losing triples with a bnode subject when using the N3 Formatter which also makes the scripts/SwishTest.ss script run successfully. - several commands in Swish scripts now create screen output as an aid to debugging. - added the developer flag. 0.3.0.1: - updated the Swish script parser to work with the changes in 0.3.0.0 (reported by Rick Murphy). - several example scripts are installed in the scripts/ directory, although only VehicleCapacity.ss works with this release. 
0.2.1 to 0.3.0.0:
- Renamed the module hierarchy from Swish.HaskellRDF.* and Swish.HaskellUtils.* to Swish.RDF.* and Swish.Utils.*
- removed modules, either because un-needed or replaced by other libraries:
  *) replaced with Data.Traversable instances: Swish.HaskellUtils.FunctorM
  *) replaced with routines from Network.URI: Swish.HaskellRDF.ProcessURI, Swish.HaskellRDF.ParseURI
  *) copy of a Parsec module: Swish.HaskellRDF.ParsecLanguage
  *) replaced with Data.List.sort as only the stableQuickSort routine was used: Swish.HaskellRDF.Sort.*
  *) replaced use with a simple parsec parser: Swish.HaskellRDF.Dfa.Dfa
  *) replaced with 'Either String': Swish.HaskellUtils.ErrorM
  *) un-needed: Swish.HaskellRDF.Parse, Swish.HaskellUtils.AccumulateM, Swish.HaskellUtils.TestHelpers
- removed the Either String instance from ErrorM - should now compile with mtl-2
- added upper and lower bounds to package dependencies
- a number of Test executables have been removed, either because of a removed module or missing data files.
- updated the Haddock documentation to build
- added the following flags:
  tests - compile tests or not?
  hpc - compile tests with -hpc?
- support for the NTriples format has been added (e.g. the -nt flag in Swish)
- the N3 parser and formatter have been re-written to better match the latest specification. Some previously valid N3 files will likely no longer parse since support for constructs like 'this' and ':-' has been removed. The N3 parser does not handle @forAll statements and ignores any @forSome statements. It is also slower than the original parser. The formatter has been updated to make better use of the '[]', '()' and '{}' syntactic short cuts. Strings containing "unexpected" escape combinations - such as \a - may not be handled correctly.
- change in behavior of the Swish command-line tool when given invalid arguments, as it now errors out if given an invalid argument before processing any input.
https://www.stackage.org/nightly-2019-06-06/package/swish-0.10.0.1
Thrift Client

The following examples are of using the Thrift API directly. You will need the following libraries at a minimum:

- blur-thrift-*.jar
- blur-util-*.jar
- slf4j-api-1.6.1.jar
- slf4j-log4j12-1.6.1.jar
- commons-logging-1.1.1.jar
- log4j-1.2.15.jar

Note: Other versions of these libraries could work, but these are the versions that Blur currently uses.

Getting A Client

Example Connection String

The connection string can be parsed or constructed through the "Connection" object. If you are using the parsed version there are some options.

At a minimum you will have to provide a hostname and port:

host1:40010

You can list multiple hosts:

host1:40010,host2:40010

You can add a SOCKS proxy server for each host:

host1:40010/proxyhost1:6001

You can also add a timeout on the socket of 90 seconds (the default is 60 seconds):

host1:40010#90000

Multiple hosts with a different timeout:

host1:40010,host2:40010#90000

Here are all the options together:

host1:40010,host2:40010,host3:40010#90000
host1:40010/proxyhost1:6001,host2:40010/proxyhost1:6001#90000

Thrift Client

Client Example 1:

Iface client = BlurClient.getClient("controller1:40010,controller2:40010");

Client Example 2:

Connection connection = new Connection("controller1:40010");
Iface client = BlurClient.getClient(connection);

Client Example 3:

BlurClientManager.execute("controller1:40010,controller2:40010", new BlurCommand<T>() {
  @Override
  public T call(Client client) throws BlurException, TException {
    // your code here...
} }); List<Connection> connections = BlurClientManager.getConnections("controller1:40010,controller2:40010"); BlurClientManager.execute(connections, new BlurCommand<T>() { @Override public T call(Client client) throws BlurException, TException { // your code here... } }); Query Example This is a simple example of how to run a query via the Thrift API and get back search results. By default the first 10 results are returned with only row ids to the results. Iface client = BlurClient.getClient("controller1:40010,controller2:40010"); Query query = new Query(); query.setQuery("+docs.body:\"Hadoop is awesome\""); BlurQuery blurQuery = new BlurQuery(); blurQuery.setQuery(query); BlurResults results = client.query("table1", blurQuery); System.out.println("Total Results: " + results.totalResults); for (BlurResult result : results.getResults()) { System.out.println(result); } Query Example with Data This is an example of how to run a query via the Thrift API and get back search results with data. All the columns in the "fam0" family are returned for each Record in the Row. Iface client = BlurClient.getClient("controller1:40010,controller2:40010"); Query query = new Query(); query.setQuery("+docs.body:\"Hadoop is awesome\""); Selector selector = new Selector(); // This will fetch all the columns in family "fam0". selector.addToColumnFamiliesToFetch("fam0"); // This will fetch the "col1", "col2" columns in family "fam1". Set cols = new HashSet (); cols.add("col1"); cols.add("col2"); selector.putToColumnsToFetch("fam1", cols); BlurQuery blurQuery = new BlurQuery(); blurQuery.setQuery(query); blurQuery.setSelector(selector); BlurResults results = client.query("table1", blurQuery); System.out.println("Total Results: " + results.totalResults); for (BlurResult result : results.getResults()) { System.out.println(result); } Fetch Data This is an example of how to fetch data via the Thrift API. All the records of the Row "rowid1" are returned. 
If it is not found then Row would be null.

Iface client = BlurClient.getClient("controller1:40010,controller2:40010");
Selector selector = new Selector();
selector.setRowId("rowid1");
FetchResult fetchRow = client.fetchRow("table1", selector);
FetchRowResult rowResult = fetchRow.getRowResult();
Row row = rowResult.getRow();
for (Record record : row.getRecords()) {
  System.out.println(record);
}

Mutate Example

This is an example of how to perform a mutate on a table and either add or replace an existing Row.

Iface client = BlurClient.getClient("controller1:40010,controller2:40010");
Record record1 = new Record();
record1.setRecordId("recordid1");
record1.setFamily("fam0");
record1.addToColumns(new Column("col0", "val0"));
record1.addToColumns(new Column("col1", "val1"));
Record record2 = new Record();
record2.setRecordId("recordid2");
record2.setFamily("fam1");
record2.addToColumns(new Column("col4", "val4"));
record2.addToColumns(new Column("col5", "val5"));
List<RecordMutation> recordMutations = new ArrayList<RecordMutation>();
recordMutations.add(new RecordMutation(RecordMutationType.REPLACE_ENTIRE_RECORD, record1));
recordMutations.add(new RecordMutation(RecordMutationType.REPLACE_ENTIRE_RECORD, record2));
// This will replace the existing Row of "rowid1" (if one exists) in table "table1". It will
// write the mutate to the write ahead log (WAL) and it will not block waiting for the
// mutate to become visible.
RowMutation mutation = new RowMutation("table1", "rowid1", true, RowMutationType.REPLACE_ROW, recordMutations, false);
mutation.setRecordMutations(recordMutations);
client.mutate(mutation);

Shortened Mutate Example

This is the same example as above but is shortened with a helper class.

import static org.apache.blur.thrift.util.BlurThriftHelper.*;

Iface client = BlurClient.getClient("controller1:40010,controller2:40010");
// This will replace the existing Row of "rowid1" (if one exists) in table "table1". It will
// write the mutate to the write ahead log (WAL) and it will not block waiting for the
// mutate to become visible.
RowMutation mutation = newRowMutation("table1", "rowid1",
    newRecordMutation("fam0", "recordid1",
        newColumn("col0", "val0"),
        newColumn("col1", "val2")),
    newRecordMutation("fam1", "recordid2",
        newColumn("col4", "val4"),
        newColumn("col5", "val4")));
client.mutate(mutation);

Shell

The shell can be invoked by running:

$BLUR_HOME/bin/blur shell

Also any shell command can be invoked as a cli command by running:

$BLUR_HOME/bin/blur <command>
# For example to get help
$BLUR_HOME/bin/blur help

The following rules are used when interacting with the shell:

- Arguments are denoted by "< >".
- Optional arguments are denoted by "[ ]".
- Options are denoted by "-".
- Multiple options / arguments are denoted by "*".

Table Commands

create
  Description: Create the named table. Run -h for full argument list.
  create -t <tablename> -c <shardcount>
enable
  Description: Enable the named table.
  enable <tablename>
disable
  Description: Disable the named table.
  disable <tablename>
remove
  Description: Remove the named table.
  remove <tablename>
truncate
  Description: Truncate the named table.
  truncate <tablename>
describe
  Description: Describe the named table.
  describe <tablename>
list
  Description: List tables.
  list
schema
  Description: Schema of the named table.
  schema <tablename>
stats
  Description: Print stats for the named table.
  stats <tablename>
layout
  Description: List the server layout for a table.
  layout <tablename>
parse
  Description: Parse a query and return a string representation.
  parse <tablename> <query>
definecolumn
  Description: Defines a new column in the named table.
  definecolumn <table name> <family> <column name> <type> [-s <sub column name>] [-F] [-p name value]*

Data Commands

query
  Description: Query the named table.
  query <tablename> <query>
get
  Description: Display the specified row.
  get <tablename> <rowid>
mutate
  Description: Mutate the specified row.
  mutate <tablename> <rowid> <recordid> <columnfamily> <columnname>:<value>*
delete
  Description: Delete the specified row.
  delete <tablename> <rowid>
highlight
  Description: Toggle highlight of query output on/off.
  highlight
selector
  Description: Manage the default selector.
  selector reset | add <family> [<columnName>*]

Cluster Commands

controllers
  Description: List controllers.
  controllers
shards
  Description: List shards.
  shards <clustername>
clusterlist
  Description: List the clusters.
  clusterlist
cluster
  Description: Set the cluster in use.
  cluster <clustername>
safemodewait
  Description: Wait for safe mode to exit.
  safemodewait [<clustername>]
top
  Description: Top for watching shard clusters.
  top [<cluster>]

Shell Commands

help
  Description: Display help.
  help
debug
  Description: Toggle debugging on/off.
  debug
timed
  Description: Toggle timing of commands on/off.
  timed
quit
  Description: Exit the shell.
  quit
reset
  Description: Resets the terminal window.
  reset

Map Reduce

CSV Loader

The CSV Loader program can be invoked by running:

$BLUR_HOME/bin/blur csvloader

Caution: Also the machine that will execute this command will need to have Hadoop installed and configured locally, otherwise the scripts will not work correctly.

usage: csvloader

The "csvloader" command is used to load delimited files into a Blur table. The required options are "-c", "-t", "-d". The standard format for the contents of a file is: "rowid,recordid,family,col1,col2,...". However there are several options; for example, the rowid and recordid can be generated based on the data in the record via the "-A" and "-a" options. The family can be assigned based on the path via the "-I" option. The column name order can be mapped via the "-d" option. Also you can set the input format to sequence files via the "-S" option, or leave the default of text files.
-A                   No Row Ids - Automatically generate row ids for each record based on an MD5 hash of the data within the record.
-a                   No Record Ids - Automatically generate record ids for each record based on an MD5 hash of the data within the record.
-b <size>            The maximum number of Lucene documents to buffer in the reducer for a single row before spilling over to disk. (default 1000)
-c <controller*>     * Thrift controller connection string. (host1:40010 host2:40010 ...)
-C <minimum maximum> Enables a combine file input to help deal with many small files as the input. Provide the minimum and maximum size per mapper. For a minimum of 1GB and a maximum of 2.5GB: (1000000000 2500000000)
-d <family column*>  * Define the mapping of fields in the CSV file to column names. (family col1 col2 col3 ...)
-I <family path*>    The directory to index with a family name; the family name is assumed to NOT be present in the file contents. (family hdfs://namenode/input/in1)
-i <path*>           The directory to index; the family name is assumed to BE present in the file contents. (hdfs://namenode/input/in1)
-l                   Disable the use of storage local to the server that is running the reducing task, and copy to the Blur table once complete. (enabled by default)
-o                   Disable optimizing indexes during copy; this has very little overhead. (enabled by default)
-p <codec>           Sets the compression codec for the map compress output setting. (SNAPPY, GZIP, BZIP, DEFAULT, or classname)
-r <multiplier>      The reducer multiplier allows for an increase in the number of reducers per shard in the given table. For example, if the table has 128 shards and the reducer multiplier is 4, the total number of reducers will be 512 (4 reducers per shard). (default 1)
-s <delimiter>       The file delimiter to be used. (default value ',') NOTE: For special characters like the default hadoop separator of ASCII value 1, you can use standard java escaping (\u0001)
-S                   The input files are sequence files.
-t <tablename>       * Blur table name.
JDBC

The JDBC driver is very experimental and is currently read-only. It has a very basic SQL-ish language that should allow for most Blur queries. Basic SQL syntax will work, for example:

select * from testtable where fam1.col1 = 'val1'

You may also use Lucene syntax by wrapping the Lucene query in a "query()" function:

select * from testtable where query(fam1.col1:val?)

Here is a screenshot of the JDBC driver in SQuirreL:
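For illustration, the connection-string grammar described in the "Example Connection String" section above (host:port entries, an optional /proxyhost:proxyport per host, and an optional #timeout suffix) can be sketched as a small stand-alone parser. This is a sketch of the format only; the class and field names here are our own assumptions, not Blur's actual Connection API.

```java
import java.util.ArrayList;
import java.util.List;

class ConnectionStringSketch {
    // One parsed host entry: host:port with an optional SOCKS proxy.
    static class HostEntry {
        final String host;
        final int port;
        final String proxyHost;   // null when no proxy is given
        final int proxyPort;      // -1 when no proxy is given
        HostEntry(String host, int port, String proxyHost, int proxyPort) {
            this.host = host;
            this.port = port;
            this.proxyHost = proxyHost;
            this.proxyPort = proxyPort;
        }
    }

    // Returns the "#90000" suffix as millis, or the default when absent.
    static int parseTimeout(String spec, int defaultMillis) {
        int hash = spec.indexOf('#');
        return hash < 0 ? defaultMillis : Integer.parseInt(spec.substring(hash + 1));
    }

    // Parses strings like "host1:40010/proxyhost1:6001,host2:40010#90000".
    static List<HostEntry> parseHosts(String spec) {
        int hash = spec.indexOf('#');
        String hostPart = hash < 0 ? spec : spec.substring(0, hash);
        List<HostEntry> entries = new ArrayList<>();
        for (String item : hostPart.split(",")) {
            String hostSide = item;
            String proxyHost = null;
            int proxyPort = -1;
            int slash = item.indexOf('/');
            if (slash >= 0) {                          // SOCKS proxy present
                hostSide = item.substring(0, slash);
                String[] proxy = item.substring(slash + 1).split(":");
                proxyHost = proxy[0];
                proxyPort = Integer.parseInt(proxy[1]);
            }
            String[] hp = hostSide.split(":");
            entries.add(new HostEntry(hp[0], Integer.parseInt(hp[1]), proxyHost, proxyPort));
        }
        return entries;
    }
}
```

Running it over the "all options together" string from the docs splits out two hosts, each with its proxy, plus the shared 90-second timeout.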
http://incubator.apache.org/blur/docs/0.2.0/using-blur.html
How to Implement LRU Cache in Java

Last modified: August 3, 2021

1. Overview

In this tutorial, we're going to learn about the LRU cache and take a look at an implementation in Java.

2. LRU Cache

The Least Recently Used (LRU) cache is a cache eviction algorithm that organizes elements in order of use. In LRU, as the name suggests, the element that hasn't been used for the longest time will be evicted from the cache.

For example, if we have a cache with a capacity of three items: Initially, the cache is empty, and we put element 8 in the cache. Elements 9 and 6 are cached as before. But now, the cache capacity is full, and to put the next element, we have to evict the least recently used element in the cache.

Before we implement the LRU cache in Java, it's good to know some aspects of the cache:

- All operations should run in order of O(1)
- The cache has a limited size
- It's mandatory that all cache operations support concurrency
- If the cache is full, adding a new item must invoke the LRU strategy

2.1. Structure of an LRU Cache

Now, let's think about a question that will help us in designing the cache. How can we design a data structure that could do operations like reading, sorting (temporal sorting), and deleting elements in constant time?

It seems that to find the answer to this question, we need to think deeply about what has been said about LRU cache and its features:

- In practice, LRU cache is a kind of Queue — if an element is reaccessed, it goes to the end of the eviction order
- This queue will have a specific capacity as the cache has a limited size. Whenever a new element is brought in, it is added at the head of the queue. When eviction happens, it happens from the tail of the queue.
- Hitting data in the cache must be done in constant time, which isn't possible in Queue!
But, it is possible with Java's HashMap data structure
- Removal of the least recently used element must be done in constant time, which means for the implementation of the Queue, we'll use a DoublyLinkedList instead of a SingleLinkedList or an array

So, the LRU cache is nothing but a combination of the DoublyLinkedList and the HashMap as shown below: The idea is to keep the keys in the Map for quick access to data within the Queue.

2.2. LRU Algorithm

The LRU algorithm is pretty easy! If the key is present in the HashMap, it's a cache hit; else, it's a cache miss.

We'll follow two steps after a cache miss occurs:

- Add a new element in front of the list.
- Add a new entry in HashMap and refer to the head of the list.

And, we'll do two steps after a cache hit:

- Remove the hit element and add it in front of the list.
- Update HashMap with a new reference to the front of the list.

Now, it's time to see how we can implement LRU cache in Java!

3. Implementation in Java

First, we'll define our Cache interface:

public interface Cache<K, V> {
    boolean put(K key, V value);
    Optional<V> get(K key);
    int size();
    boolean isEmpty();
    void clear();
}

Now, we'll define the LRUCache class that represents our cache:

public class LRUCache<K, V> implements Cache<K, V> {
    private int size;
    private Map<K, LinkedListNode<CacheElement<K,V>>> linkedListNodeMap;
    private DoublyLinkedList<CacheElement<K,V>> doublyLinkedList;

    public LRUCache(int size) {
        this.size = size;
        this.linkedListNodeMap = new HashMap<>(size);
        this.doublyLinkedList = new DoublyLinkedList<>();
    }
    // rest of the implementation
}

We can create an instance of the LRUCache with a specific size. In this implementation, we use the HashMap collection for storing all references to LinkedListNode.

Now, let's discuss operations on our LRUCache.

3.1. Put Operation

The first one is the put method:

public boolean put(K key, V value) {
    CacheElement<K, V> item = new CacheElement<K, V>(key, value);
    LinkedListNode<CacheElement<K, V>> newNode;
    if (this.linkedListNodeMap.containsKey(key)) {
        LinkedListNode<CacheElement<K, V>> node = this.linkedListNodeMap.get(key);
        newNode = doublyLinkedList.updateAndMoveToFront(node, item);
    } else {
        if (this.size() >= this.size) {
            this.evictElement();
        }
        newNode = this.doublyLinkedList.add(item);
    }
    if (newNode.isEmpty()) {
        return false;
    }
    this.linkedListNodeMap.put(key, newNode);
    return true;
}

First, we find the key in the linkedListNodeMap that stores all keys/references. If the key exists, a cache hit happened, and it's ready to retrieve the CacheElement from the DoublyLinkedList and move it to the front.

After that, we update the linkedListNodeMap with a new reference and move it to the front of the list:

public LinkedListNode<T> updateAndMoveToFront(LinkedListNode<T> node, T newValue) {
    if (node.isEmpty() || (this != (node.getListReference()))) {
        return dummyNode;
    }
    detach(node);
    add(newValue);
    return head;
}

First, we check that the node is not empty. Also, the reference of the node must be the same as the list. After that, we detach the node from the list and add newValue to the list.

But if the key doesn't exist, a cache miss happened, and we have to put a new key into the linkedListNodeMap. Before we can do that, we check the list size. If the list is full, we have to evict the least recently used element from the list.

3.2. Get Operation

Let's take a look at our get operation:

public Optional<V> get(K key) {
    LinkedListNode<CacheElement<K, V>> linkedListNode = this.linkedListNodeMap.get(key);
    if (linkedListNode != null && !linkedListNode.isEmpty()) {
        linkedListNodeMap.put(key, this.doublyLinkedList.moveToFront(linkedListNode));
        return Optional.of(linkedListNode.getElement().getValue());
    }
    return Optional.empty();
}

As we can see above, this operation is straightforward.
First, we get the node from the linkedListNodeMap and, after that, check that it's not null or empty. The rest of the operation is the same as before, with just one difference in the moveToFront method:

public LinkedListNode<T> moveToFront(LinkedListNode<T> node) {
    return node.isEmpty() ? dummyNode : updateAndMoveToFront(node, node.getElement());
}

Now, let's create some tests to verify that our cache works fine:

@Test
public void addSomeDataToCache_WhenGetData_ThenIsEqualWithCacheElement() {
    LRUCache<String, String> lruCache = new LRUCache<>(3);
    lruCache.put("1", "test1");
    lruCache.put("2", "test2");
    lruCache.put("3", "test3");
    assertEquals("test1", lruCache.get("1").get());
    assertEquals("test2", lruCache.get("2").get());
    assertEquals("test3", lruCache.get("3").get());
}

Now, let's test the eviction policy:

@Test
public void addDataToCacheToTheNumberOfSize_WhenAddOneMoreData_ThenLeastRecentlyDataWillEvict() {
    LRUCache<String, String> lruCache = new LRUCache<>(3);
    lruCache.put("1", "test1");
    lruCache.put("2", "test2");
    lruCache.put("3", "test3");
    lruCache.put("4", "test4");
    assertFalse(lruCache.get("1").isPresent());
}

4. Dealing With Concurrency

So far, we assumed that our cache was just used in a single-threaded environment. To make this container thread-safe, we need to synchronize all public methods. Let's add a ReentrantReadWriteLock and ConcurrentHashMap into the previous implementation:

public class LRUCache<K, V> implements Cache<K, V> {
    private int size;
    private final Map<K, LinkedListNode<CacheElement<K,V>>> linkedListNodeMap;
    private final DoublyLinkedList<CacheElement<K,V>> doublyLinkedList;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public LRUCache(int size) {
        this.size = size;
        this.linkedListNodeMap = new ConcurrentHashMap<>(size);
        this.doublyLinkedList = new DoublyLinkedList<>();
    }
    // ...
}

We prefer to use a reentrant read/write lock rather than declaring methods as synchronized because it gives us more flexibility in deciding when to use a lock on reading and writing.

4.1. writeLock

Now, let's add a call to writeLock in our put method:

public boolean put(K key, V value) {
    this.lock.writeLock().lock();
    try {
        //..
    } finally {
        this.lock.writeLock().unlock();
    }
}

When we use writeLock on the resource, only the thread holding the lock can write to or read from the resource. So, all other threads that are either trying to read or write on the resource will have to wait until the current lock holder releases it.

This is very important to prevent a deadlock. If any of the operations inside the try block fails, we still release the lock before exiting the function with a finally block at the end of the method.

One of the other operations that needs writeLock is evictElement, which we used in the put method:

private boolean evictElement() {
    this.lock.writeLock().lock();
    try {
        //...
    } finally {
        this.lock.writeLock().unlock();
    }
}

4.2. readLock

And now it's time to add a readLock call to the get method:

public Optional<V> get(K key) {
    this.lock.readLock().lock();
    try {
        //...
    } finally {
        this.lock.readLock().unlock();
    }
}

It's exactly what we did with the put method. The only difference is that we use a readLock instead of a writeLock. So, this distinction between the read and write locks allows us to read the cache in parallel while it's not being updated.
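As a self-contained illustration of those read/write-lock semantics (separate from the cache itself; this tiny class is our own, not from the article), a counter guarded the same way looks like this:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class GuardedCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long value = 0;

    // Writers take the exclusive lock: no readers or other writers may proceed.
    public void increment() {
        lock.writeLock().lock();
        try {
            value++;
        } finally {
            lock.writeLock().unlock();   // always release, even on failure
        }
    }

    // Readers share the lock: many threads may read at once,
    // but all of them wait while a writer holds the write lock.
    public long get() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The try/finally shape mirrors the article's put and get methods: the lock is released on every path out of the guarded section.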
Now, let's test our cache in a concurrent environment:

@Test
public void runMultiThreadTask_WhenPutDataInConcurrentToCache_ThenNoDataLost() throws Exception {
    final int size = 50;
    final ExecutorService executorService = Executors.newFixedThreadPool(5);
    Cache<Integer, String> cache = new LRUCache<>(size);
    CountDownLatch countDownLatch = new CountDownLatch(size);
    try {
        IntStream.range(0, size).<Runnable>mapToObj(key -> () -> {
            cache.put(key, "value" + key);
            countDownLatch.countDown();
        }).forEach(executorService::submit);
        countDownLatch.await();
    } finally {
        executorService.shutdown();
    }
    assertEquals(cache.size(), size);
    IntStream.range(0, size).forEach(i -> assertEquals("value" + i, cache.get(i).get()));
}

5. Conclusion

In this tutorial, we learned what exactly an LRU cache is, including some of its most common features. Then, we saw one way to implement an LRU cache in Java and explored some of the most common operations. Finally, we covered concurrency in action using the lock mechanism.

As usual, all the examples used in this article are available over on GitHub.
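A footnote the article leaves out: for single-threaded use, the JDK's own LinkedHashMap can express the same eviction policy in a few lines, by enabling access order and overriding removeEldestEntry. A minimal sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LinkedHashMapLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashMapLruCache(int capacity) {
        // accessOrder = true reorders entries on get(): most recently used last.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the eldest (LRU) entry.
        return size() > capacity;
    }
}
```

This version is not thread-safe and offers none of the hand-rolled cache's locking, but it is a handy baseline when concurrency is not a requirement.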
https://www.baeldung.com/java-lru-cache
Custom styling Reactstrap components

Nicholas Kolodziej ・2 min read

Sometimes, the default colors provided by Reactstrap just don't fit the color scheme you are going for. On my most recent project I learned a lot about styling custom components - here are a few ways our team accomplished this.

Inside a separate file named Style.jsx,

const nav = {
  background: '#2F4A5E',
};

const pageHeader = {
  color: 'black',
  'border-radius': '5px',
  'text-align': 'center',
};

We created basic objects for each of our components. One thing to note however is the syntax 'border-radius' and 'text-align' are written in. Resembles CSS, right?

Export it out,

module.exports = { nav, pageHeader }

Import it in (for the desired component styling),

import { pageHeader, nav } from './Style.jsx';

And apply it to the desired components.

<div style={pageHeader}><h2>Settings</h2></div>

<Navbar style={nav} dark>

You can also do it inline..

<div style={{color: 'black', borderRadius: '5px', textAlign: 'center'}}>

Notice the syntax for borderRadius and textAlign has changed. While the hyphenated form will work, you will get warnings asking if you REALLY meant borderRadius.

<Navbar style={{background: '#2F4A5E'}} dark>

I always prefer the former - perhaps it feels more like CSS, with which I'm familiar, and while inline is also very CSS-like - it can take up a lot of space and obstruct the flow of your code.

<div
  id={id}
  onClick={() => { changeView(post) }}
  onMouseEnter={() => { this.onHover(post) }}
  onMouseLeave={() => { this.onHoverLeave(post.tags) }}
  style={{
    boxShadow: '10px 10px 10px #CCCCCC',
    background: `url(${post.img})`,
    height: '250px',
    width: '200px',
    margin: '15px',
    padding: '10px',
    backgroundSize: 'cover',
    backgroundRepeat: 'no-repeat',
    border: '1px solid #000000'
  }}>

If you want to control and adjust several components at once, as compared to one at a time - import it. However I have found that for very specific components I continued to write inline.

Try it out!
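If you'd rather keep writing CSS-style property names in your style files, a small helper (our own sketch, not part of Reactstrap or React) can convert kebab-case keys into the camelCase keys React expects:

```javascript
// Converts { 'border-radius': '5px' } into { borderRadius: '5px' },
// which React inline styles accept without warnings.
function toCamelCaseStyle(style) {
  const out = {};
  for (const [key, value] of Object.entries(style)) {
    // Replace each "-x" with "X"; keys without hyphens pass through unchanged.
    const camelKey = key.replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
    out[camelKey] = value;
  }
  return out;
}
```

Then `<div style={toCamelCaseStyle(pageHeader)}>` lets you keep the CSS-looking objects from Style.jsx while still handing React the camelCase form.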
https://dev.to/kolonick/custom-styling-reactstrap-components-1phk
Hi everybody,

I just started learning Python, and because of that I don't really know all the useful packages and methods that this language has. But I would like to know how you would write this program, or how you would change it (or maybe some part of it) to make it look more elegant and be more efficient.

Here is the code:

import random

class FrequenceCounter:
    count = 0

    def update_count(self):
        self.count += 1

    def get_count(self):
        return self.count

nums = xrange(10)
freq_of_num = {}
runs = int(raw_input("Enter number of runs "))

for x in xrange(10):
    freq_of_num[x] = FrequenceCounter()

for x in xrange(runs):
    freq = random.choice(nums)
    freq_of_num[freq].update_count()

for x in xrange(10):
    print "Number %d occurred %d" % (x, freq_of_num[x].get_count())

And this is what my output looks like:

Enter number of runs 88
Number 0 occurred 9
Number 1 occurred 12
Number 2 occurred 10
Number 3 occurred 2
Number 4 occurred 5
Number 5 occurred 13
Number 6 occurred 9
Number 7 occurred 10
Number 8 occurred 11
Number 9 occurred 7
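For what it's worth, the standard library already covers most of this pattern: collections.Counter does the tallying, so the FrequenceCounter class isn't needed at all. A sketch of the same logic (shown in Python 3 syntax):

```python
import random
from collections import Counter

def frequency_table(runs, faces=10):
    """Tally how often each value in range(faces) comes up over `runs` draws."""
    counts = Counter(random.randrange(faces) for _ in range(runs))
    # Counter skips values that never occurred, so fill in the zeros explicitly.
    return {n: counts.get(n, 0) for n in range(faces)}
```

Wire it to your prompt with `runs = int(raw_input("Enter number of runs "))` (or `input()` on Python 3) and print each pair from `sorted(frequency_table(runs).items())`.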
https://www.daniweb.com/programming/software-development/threads/423799/is-there-more-efficient-elegant-way-to-write-this-little-program
Okay, I know my issue is in my add function [adds to the array] so I'll only post that portion of the code:

public void add(Comparable<Wrapper> c) {
    for (int x = 0; x < CompFiles.length; x++) {
        if (CompFiles[x] == null) {
            CompFiles[x] = c;
            return;
        }
    }
    for (int x = 0; x < CompFiles.length; x++) {
        int stat = c.compareTo((Wrapper) CompFiles[x]);
        if (stat > 0) {
            CompFiles[x] = c;
            return;
        }
    }
}

Oh, and the compareTo code, which is in the Wrapper class:

@Override
public int compareTo(Wrapper o) {
    return (int) (this.size - o.getSize());
}

Now my best explanation of the bug: "It works great up to a list of 3, or if the list can fit all the files. Other than that, I was trying it with a directory with 16 files, the first 3 were correct, but the 4th on the list was actually the 7th biggest in the directory -- [using a list of 4]. I can't figure out the bug for the life of me." [sent to my teacher along with my code, YES IT IS A COLLEGE PROJECT, but I ALREADY HANDED IT IN... even though I handed it in already -- I wanna figure out what's wrong with it]
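Not a definitive answer, but the usual shape of the fix: once an insertion position is found in the sorted array, the tail has to be shifted down one slot before writing. Overwriting CompFiles[x] directly drops an entry that should have been kept, which matches the "4th on the list was actually the 7th biggest" symptom. A self-contained sketch using plain longs in place of the Wrapper objects (names here are ours, not from the original code):

```java
import java.util.Arrays;

class TopN {
    private final long[] biggest;   // kept in descending order
    private int used = 0;

    TopN(int capacity) {
        biggest = new long[capacity];
    }

    // Insert `size`, keeping only the `capacity` largest values seen so far.
    void add(long size) {
        // Find where the new value belongs in the descending list.
        int pos = 0;
        while (pos < used && biggest[pos] >= size) pos++;
        if (pos == biggest.length) return;          // smaller than everything kept
        // Shift the tail right one slot. This is the step the original
        // add() is missing: writing biggest[pos] without shifting loses an entry.
        int last = Math.min(used, biggest.length - 1);
        for (int i = last; i > pos; i--) biggest[i] = biggest[i - 1];
        biggest[pos] = size;
        if (used < biggest.length) used++;
    }

    long[] current() {
        return Arrays.copyOf(biggest, used);
    }
}
```

With a capacity of 4 and the sizes 5, 9, 7, 3, 6, 1, 10 fed in, the kept list ends up as 10, 9, 7, 6, which is the behavior the original code was after.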
http://www.javaprogrammingforums.com/whats-wrong-my-code/15208-finding-biggest-files-making-list.html
Compiler Error CS0103

Visual Studio 2013

The name 'identifier' does not exist in the current context

An attempt was made to use a name that does not exist in the class, namespace, or scope. Check the spelling of the name and check your using directives and assembly references to make sure that the name that you are trying to use is available.

This error frequently occurs if you declare a variable in a loop or a try or if block and then attempt to access it from an enclosing code block or a separate code block, as shown in the following example.

using System;

class MyClass1
{
    public static void Main()
    {
        try
        {
            // The following declaration is only available inside the try block.
            MyClass1 conn = new MyClass1();
        }
        catch (Exception e)
        {
            // The following expression causes error CS0103, because variable
            // conn only exists in the try block.
            if (conn != null)
                Console.WriteLine("{0}", e);
        }
    }
}

The following example resolves the error.

using System;

class MyClass2
{
    public static void Main()
    {
        // To resolve the error in the example, the first step is to
        // move the declaration of conn out of the try block. The following
        // declaration is available throughout the Main method.
        MyClass2 conn = null;
        try
        {
            // Inside the try block, use the conn variable that you declared
            // previously.
            conn = new MyClass2();
        }
        catch (Exception e)
        {
            // The following expression no longer causes an error, because
            // the declaration of conn is in scope.
            if (conn != null)
                Console.WriteLine("{0}", e);
        }
    }
}
http://msdn.microsoft.com/en-us/library/t8zbaa6f.aspx
Hei Eric,

thank you for your reply, I will answer your questions below:

eric chacon wrote:
> Wolle,
>
> I think the problem may lie in your JVM: Setting a reference to the object
> to null does not automatically free the memory--it just allows the garbage
> collector to clear it whenever it runs.
>
> In many (most?) JVMs, garbage collection is single-threaded. It may run at
> a fairly low priority.

This was tested with JDK 1.3.0_02 (HotSpot server/client) and IBMJava2_1.3.

> Therefore, under stress testing, it is possible for you to eat up a lot of
> memory before the garbage collector has a chance to pool it.

Yes, of course. I gave it half a day and nothing became free. I have also seen that all processes were closed (tool: top under Linux).

> One way to test this would be to let your process run under high load, and
> then stop your load and watch over time to see if the memory is released.
>
> If, in fact, it is a garbage collection problem, you have a couple of
> choices: 1) upgrade your JVM and see if that helps. 2) pool your Heavy
> Objects: create a store of them, and re-use them.

I have done something else: I have synchronized the object and create only one object:

class Servlet extends HttpServlet {
    HeavyObject ho = null;

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (ho == null) {
            ho = new HeavyObject();
        }
    }
}

That works, so the object will only be created once, but I have to synchronize the object.

> If you're doing load balancing/workload management, this might cause some
> problems (you'll need to persist your objects to a common database), but if
> everything's running on one machine, pooling might be an easy, convenient
> solution.
> Cheers,
> Eric

thanks and Greetings,
Michael

> >From: Wolle <wolle@dorf.RWTH-Aachen.DE>
> >Reply-To: mwollenhaupt@web.de
> >To: tomcatUser <tomcat-user@jakarta.apache.org>
> >Subject: My growing memory problem resolved
> >Date: Mon, 07 May 2001 03:45:55 +0200
> >
> >Hello,
> >I have posted for a few days a problem: the memory usage will grow
> >extremely when I refresh a page very often.
> >The problem was that I create a "heavy memory" Object on each refresh.
> >Normally I have 81 processes running at all; when I execute the
> >"stress-test" the process count will rise till ~230. And each new
> >process will create a new Object.
> >When I stop the stress-test, the process count becomes normal (~100),
> >but the memory usage won't fall.
> >But the newly created Objects now have no references that point to them.
> >But why don't they become free???? My code is something like this:
> >
> >public class servlet1 extends HttpServlet{
> >  public void doPost(HttpServletRequest request, HttpServletResponse response)
> >      throws IOException{
> >    HeavyObject ho = null;
> >    try{
> >      ho = new HeavyObject();
> >    }
> >    finally{
> >      ho = null; // implemented for testing, but this also does not free the memory
> >    }
> >  }
> >}
> >
> >Is this a bug in Tomcat? Or is it a bug from me? Or some Java
> >misunderstanding?
> >Why will the Object not become free, when some of the Threads will be
> >closed, and the only reference is in the closed Thread?
> >
> >Greetings,
> >Michael
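A side note on the "create it only once" workaround discussed in this thread: the initialization-on-demand holder idiom gets the same effect without synchronizing every request, because the JVM's class-initialization lock guarantees a single construction. This is a generic sketch; HeavyObject here is a stand-in, not the code from the thread:

```java
// Stand-in for the expensive object from the thread; the counter just
// lets us observe how many times the constructor actually runs.
class HeavyObject {
    static int constructions = 0;
    HeavyObject() {
        constructions++;
    }
}

class HeavyHolder {
    private static class Holder {
        // Initialized exactly once, lazily, on first call to get();
        // the JVM serializes class initialization, so no explicit lock is needed.
        static final HeavyObject INSTANCE = new HeavyObject();
    }

    static HeavyObject get() {
        return Holder.INSTANCE;
    }
}
```

Every caller of HeavyHolder.get() sees the same instance, and the read path carries no synchronization cost after the first call.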
http://mail-archives.apache.org/mod_mbox/tomcat-users/200105.mbox/%3C3AF68024.DA86F7AC@dorf.rwth-aachen.de%3E
- I've tried also with 5 and 10 minutes; neither works for me. The Scan Multibranch Pipeline Log is not being updated; it shows only information from the last scan. Can you tell me more about the rate limit? Isn't this just fetching refs? If this scanning is so aggressive, is there any other setting to just fetch known branches? All I really want to achieve is an automated build of the master branch after a commit. It should be possible to do without webhooks, I guess.

So I just tried on my own server, and as you can see the periodic scan runs: the initial scan was 9:32, and when I checked again it ran at 9:34... and then again at 9:36. Now there is a bug, if I recall correctly, where the 1 minute interval gets checked against 60 seconds and hence will almost always be 60 seconds since the last check, and so it will only run every other minute... but if I understand correctly that should only apply to the 1 minute interval and not affect the larger intervals (it's due to the cron tasks running once per minute). But the tasks are running for me. I'd check your master to see if something is blocking either the Jenkins cron thread or one of the jenkins.util.Timer threads. Additionally, there is a limit of 5 concurrent indexing/scanning tasks: so if you have other jobs that are, say, indexing GitHub and they have hit the GitHub rate limit, your indexing may just be blocked waiting for those to complete.

I did manage to get the schedule somehow working on Jenkins 2.60.3 on a docker image. However it's running every ~12 minutes, despite being set to a 5 minute interval. I also have 2.79 on my mac and as a docker image, and the schedule isn't working on either of them.

UPDATE: With a 2 minute interval, indexing is done every 3 minutes. Maybe related to?
Only one of our repositories at the moment contains Jenkinsfiles, and it used to correctly scan all branches within that repository and pick them up. After updating last week, the branches are no longer scanned. Going into the repository and looking at its scan log, the scan does not happen at all. I don't recall in this setup whether it is the organization or repository level scan that normally would have kicked off our build. I tested making a commit to the one branch that the org scan does detect, and that did not initiate a repository level scan. Jesse Glick, if you have any suggestions of things to test/try/log or whatnot, I'm all ears. This is having a big impact on our team's workflow. I'll keep digging, but I suspect this is the same issue as described above, as the repository level of the GitHub Org folder appears to be the same thing as a multibranch pipeline project. Also worth noting: when viewing the config at the repository level, it has a scan interval set at 1 day. Since this config is created by the GitHub Org folder, we can't configure it. But despite that, it's not actually running once a day. I'm looking at it right now and the last scan was when we manually kicked it off 2 days ago.

Actually, while monitoring the log I saw this when the scan ran:

WARNING: JENKINS-45892: reference to jenkins.branch.OrganizationFolder@34cac973[SC] being saved from unexpected /var/lib/jenkins/jobs/SC/computation/computation.xml
java.lang.IllegalStateException
    at hudson.XmlFile.replaceIfNotAtTopLevel(XmlFile.java:223)
    ...
    at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.save(FolderComputation.java:208)
    at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:192)
    at jenkins.branch.OrganizationFolder$OrganizationScan.run(OrganizationFolder.java:911)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:429)

Not sure yet where that puts my issue in relation to the originally reported one, but still commenting in the hope it helps...

So I found this in one of my computation.xml files:

<?xml version='1.0' encoding='UTF-8'?>
<jenkins.branch.OrganizationFolder_-OrganizationScan
  <actions>
    <hudson.model.CauseAction>
      <causeBag class="linked-hash-map">
        <entry>
          <hudson.triggers.TimerTrigger_-TimerTriggerCause/>
          <int>1</int>
        </entry>
      </causeBag>
    </hudson.model.CauseAction>
    <io.jenkins.blueocean.service.embedded.BlueOceanUrlAction
      <blueOceanUrlObject class="io.jenkins.blueocean.service.embedded.BlueOceanUrlObjectImpl">
        <mappedUrl>blue/organizations/jenkins/pipelines/</mappedUrl>
        <modelObject class="hudson.model.Queue$WaitingItem">
          <actions>
            <hudson.model.CauseAction
            <io.jenkins.blueocean.service.embedded.BlueOceanUrlAction
          </actions>
          <id>15319</id>
          <task class="jenkins.branch.OrganizationFolder" resolves-
            <fullName>cloudbeers</fullName>
          </task>
          <inQueueSince>1515907380276</inQueueSince>
          <timestamp>
            <time>1515907380276</time>
            <timezone>Etc/UTC</timezone>
          </timestamp>
        </modelObject>
      </blueOceanUrlObject>
    </io.jenkins.blueocean.service.embedded.BlueOceanUrlAction>
    <io.jenkins.blueocean.service.embedded.BlueOceanUrlAction
  </actions>
  <result>
    <name>SUCCESS</name>
    <ordinal>0</ordinal>
    <color>BLUE</color>
    <completeBuild>true</completeBuild>
  </result>
  <durations>
    <long>21123</long>
  </durations>
  <timestamp>1515907380282</timestamp>
  <duration>21123</duration>
</jenkins.branch.OrganizationFolder_-OrganizationScan>

That smells very fishy...
I went digging further and found: so James Hardwick, that looks to be the issue you had, and upgrading to BlueOcean 1.4 should fix the strange log messages for you.

So the Organization Folder just needs to know that there is at least one branch in the repository in order to create the child multibranch project. Once the child multibranch project has been created, the responsibility for building belongs to the child. The organization folder is designed not to burn excessive rate limit checking unnecessary branches (it could be even smarter, e.g. by peeking inside an existing multibranch project and checking the branches in order of most recent build rather than alpha order, but that's a separate issue).

Now recently we fixed a bug in Organization Folder: there was a side-effect where every scan of an organization folder would always trigger a scan of all the child multibranch projects (the fix was in JENKINS-48536). The fixed behaviour only triggers the child multibranch project if the multibranch project was disabled (because there are no more branches with marker files) and has been re-enabled (e.g. at least one branch with a marker file has been introduced). The multibranch project being disabled will ignore all push events and not run periodic scans, so the organization folder needs to take over that responsibility. The bug was that previously it was being over-zealous.

Now the question is whether the per-repository scans are taking place or not... but that is really a question for a new issue. In short, James Hardwick, could you create a new JIRA issue for the issue you are seeing with the multibranch projects in an org folder not being periodically scanned?
FYI I just checked my own personal Jenkins instance; here's an org folder and its most recent scan: And here's a child multibranch project and its most recent scan: Notice that the times are completely unrelated (which is correct and proper, because "Periodically unless otherwise run explicitly" does not let you pick the time and instead randomizes the time to level the load). Thus, James Hardwick, the only real question is whether it was 1 day or 2 days since either of those ran... if it is 1 day then I think we can close this issue as fixed (after confirming that 1 minute does scan every minute... need to use a git repo for that though, otherwise the rate limit will get in the way).

The original one I was referring to last had a commit on a Friday late afternoon. I was reporting it here on Sunday early afternoon, so almost 2 days. New ticket for my issue created at JENKINS-48960.

We're seeing the same issue on our Jenkins installation, version 2.89.4. After updating Jenkins and all plugins to the latest versions this week (it was actually an old installation), the periodic organization scan stopped triggering at all. I can still manually kickstart this scan, and it works as expected then, but for some reason it doesn't run on its own, and I have no idea how to debug it. If instructed, I will be happy to provide any debugging information.

Also seeing this issue on Jenkins ver. 2.89.4 from a clean Docker install. All plugins are up to date. The timer never seems to trigger the scan, but manual scans work fine.

So I think what has happened is that we have uncovered a bug in fixing another bug. Up until "recently" the branch-api plugin's OrganizationFolder functionality would end up triggering a rescan of all child multibranch projects whenever it actually did a rescan.
This was a bug because the intent would be: if you set up the org scan as daily and the multibranch as daily (ignore that you cannot configure the multibranch's period at present), then you don't want all the scans at the same time in the day; rather you want them leveled throughout the day so that, if you are using an SCM with an API rate limit, the rate limit usage is leveled across the whole day. By the organization folder triggering the scans of all child multibranch projects, effectively the rate limit usage would be spiked at one time in the day, which is actually a bad thing.

So what has made this bug surface is this change, which fixed the bug whereby the Organization folder would always trigger a rescan of the child multibranch rather than only triggering a rescan if the scan configuration has changed. Because the Organization folder was no longer driving periodic scans, we now see the bug, which I suspect was always present, of the Multibranch's periodic setting not actually being applied. On my own Jenkins the periodic setting is applied on at least some projects, so this is somewhat inconsistent, but it is incredibly annoying.

Hmmm, so ... is responsible for calling Trigger.checkTriggers periodically, and ... runs the triggers, but only on ParameterizedJobMixIn.ParameterizedJob (at least since this change), so how is the org folder getting triggered? Ahhh, folders takes care of it itself.

Ok... so can people affected by this issue try running the following script in the system Groovy console:

for (def p in Jenkins.getInstance().allItems(com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.class)) {
    for (def t in p.getTriggers().values()) {
        if (t.tabs == null) {
            println("${p.fullName} trigger ${t.class.name} not started");
        }
    }
}

If you get a report of any of the triggers not being started, then that would point to an NPE in FolderCron being the root cause. NOTE: I do not expect the above to be the cause.
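The "leveled throughout the day" behaviour described above comes from hashing: an H in the cron spec maps each job to a stable pseudo-random slot instead of a user-chosen time, so scans spread across jobs but stay fixed for any one job. The sketch below illustrates the idea only; the helper names (h_token, daily_slot) and the SHA-256 scheme are my own assumptions for the example, not Jenkins' actual hashing algorithm (Jenkins does derive H values from a hash of the job name, but with its own implementation).

```python
import hashlib

def h_token(seed: str, modulus: int) -> int:
    # Stable pseudo-random value in [0, modulus): the same seed always maps
    # to the same slot, while different seeds spread across the range.
    digest = hashlib.sha256(seed.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % modulus

def daily_slot(job_name: str):
    # "H H * * *" -> one fixed (hour, minute) per job, leveled across jobs.
    # Different salts for hour and minute keep the two values independent.
    return (h_token(job_name + "/hour", 24), h_token(job_name + "/minute", 60))

# The slot depends only on the name, so it is stable across restarts.
print(daily_slot("acme-org"))
```

This is why two folders with the same "once a day" setting legitimately scan at completely different times of day, as noted earlier in the thread.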
My Jenkins is on 2.89.4, and a newly created org folder is fine, as is an existing org folder; e.g. see the following three screenshots that illustrate the org folder scan being daily and the multibranch scans being daily, but at different points in time (and I had not logged into my Jenkins in over a week before taking these screenshots!). What I do find interesting is this one: there is more than 24h since that one ran. It should run once every 24h, but it seems to be running every 48h.

Hmmm, H H * * * is the crontab... so if that picks any time before 19:49, then yesterday it would have been less than 24h since the last run, and so it would only pick up the run today... presumably it will run at 19:49 today, but the "last triggered" field will be out by some number of milliseconds, and so effectively it runs every second interval. Fixing that might also fix the incidental issue of "every minute" not running every minute... but it would be really good to understand how others are not getting the child multibranch scan triggered at least once every two days. OR is it being triggered once every two days, but the org folder is set to a higher interval, and thus "longer than every 2 days" is being translated into "never" because people are manually triggering as a workaround to being unable to configure the child triggers?

We have to scan all our multibranch pipeline repositories separately and cannot configure them (yet) as stated above... Otherwise the branch and PR jobs never get triggered. Also the new Bitbucket webhooks are not supported yet (JENKINS-47891), so we are kind of stuck, as we cannot install extra plugins on Bitbucket Server. This is really annoying.
As a workaround, we created a pipeline job as follows:

import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject;

node {
    Jenkins.instance.items
        .findAll { it instanceof WorkflowMultiBranchProject }
        .each {
            echo it.name
            it.indexing.run()
        }
}

I set it to trigger a build periodically, every two minutes: H/2 * * * *. But I had to do script approvals. This is temporary for us until this bug gets fixed, after which I'll remove this job and the script approvals.

Stephen Connolly, I noticed this was marked as resolved. There are no comments regarding the fix or release. Do you know when we can expect to pick up the new plugin version? And what that version will be? Thanks in advance! Derek.

Does it work for 5 minutes? (Aside: keep in mind that such aggressive scanning will get you rate limited by Bitbucket Cloud, so you probably want to set up webhook notification.) If it works for 5 minutes but not for 2, then that could well be a bug in the cloudbees-folder plugin.
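The "checked against 60 seconds" arithmetic discussed earlier in the thread can be reproduced with a toy model. This is an illustrative simulation, not Jenkins' actual code: it assumes a cron thread that wakes exactly at each minute boundary, a trigger that fires only when a full configured interval has elapsed since the recorded last run, and a last-run timestamp written a few milliseconds after the tick.

```python
def simulate(interval=60.0, delay=0.01, ticks=12):
    """Return the 1-based tick numbers (one tick per minute) at which the
    periodic task actually fires under an 'elapsed >= interval' check."""
    fired = []
    last_run = delay                  # the task ran just after t=0
    for i in range(1, ticks + 1):
        tick = i * 60.0               # cron thread wakes at each minute boundary
        if tick - last_run >= interval:
            fired.append(i)
            last_run = tick + delay   # timestamp recorded after a tiny processing delay
    return fired

# The few milliseconds of delay push each check just under the threshold,
# so a 1-minute interval fires only every other tick and a 2-minute
# interval fires only every third tick.
print(simulate(interval=60.0))
print(simulate(interval=120.0))
```

In this model a 2-minute interval fires every third minute, which lines up with the "2 minute interval, indexing is done every 3 minutes" observation above; within the model, comparing the elapsed time against the interval minus the cron granularity removes the skew.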
https://issues.jenkins-ci.org/browse/JENKINS-47077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
Free-Trade Agreements
Steppingstones to a More Open World
by Daniel T. Griswold

Executive Summary

Since securing trade promotion authority in 2002, the Bush administration has launched an aggressive campaign to negotiate bilateral and regional free-trade agreements (FTAs). FTAs have been reached with Singapore and Chile and are under negotiation with Australia, Morocco, Bahrain, and nations of the Central American Common Market and the Southern African Customs Union. None of those countries is among our top 10 trading partners, but considered together, the proposed FTAs would cover a major segment of U.S. trade. As a group, the FTA countries would constitute the world's ninth largest economy and would be America's sixth largest trading partner.

Free-trade agreements deviate from the multilateral principle of nondiscrimination, and they can divert trade from more efficient to less efficient but favored import producers. But under the right conditions, FTAs can inject new competition into our domestic economy, lowering prices for consumers and shifting factors of production to more efficient uses, while leveling the playing field for U.S. exporters.

FTAs provide institutional competition to keep multilateral talks on track. If other members of the World Trade Organization become intransigent, the United States must have the option of pursuing agreements with a "coalition of the willing" in pursuit of trade liberalization. FTAs can spur regional integration and blaze a trail through difficult areas for broader negotiations in the future. As a foreign policy tool, FTAs can cement ties with allies and encourage countries to stay on the trail of political and economic reform.

To maximize the benefits of free-trade agreements, the administration should seek agreements with countries that can provide import competition in our domestic market and export opportunities abroad and that are reform leaders in regions of the world where models of successful reform are most needed. Judged by those criteria, the FTAs proposed by the Bush administration deserve to be pursued.

Daniel T. Griswold is associate director of the Center for Trade Policy Studies at the Cato Institute.

Introduction

Since final passage of trade promotion authority in 2002, the Bush administration has launched an aggressive campaign to negotiate bilateral and regional free-trade agreements (FTAs). In May 2003 the United States signed an FTA with Singapore, and in June 2003 it signed one with Chile. Congress could vote on those agreements by late summer or early fall.

Meanwhile, the administration has begun negotiating FTAs with Australia; Morocco; Bahrain; the five Central American Common Market nations of Guatemala, Honduras, El Salvador, Nicaragua, and Costa Rica; and the five Southern African Customs Union nations of South Africa, Botswana, Namibia, Lesotho, and Swaziland. Negotiations initiated in 1994 continue with 33 other Western Hemisphere nations on a Free Trade Area of the Americas. On May 9 President Bush proposed "the establishment of a U.S.–Middle East free-trade area within a decade, to bring the Middle East into an expanding circle of opportunity, to provide hope for the people who live in that region."1 Egypt could join Morocco and Bahrain on the list of potential FTA partners. Other potential FTA partners proposed by members of Congress include Taiwan, New Zealand, and the United Kingdom.

Those agreements already negotiated or in the pipeline are sure to spark the usual debate about free trade versus fair trade, environmental standards and working conditions in poor countries, jobs and wages in the United States, and the other issues that inevitably swirl around any trade agreement before Congress.2 But bilateral and regional trade agreements also raise a peculiar set of policy issues, economic and noneconomic alike, that are generally neglected when deals are debated and voted on.

Even for supporters of trade expansion, not every bilateral and regional free-trade agreement proposed is necessarily good economic policy. Despite the name, free-trade agreements do not always promote more trade, nor do they necessarily leave parties to the agreement or the rest of the world better off. Beyond the economic ambiguities of FTAs are a number of important strategic and foreign policy considerations that cannot be ignored.

None of the countries we are negotiating with is among our top 10 trading partners, but together the FTAs would cover a major segment of U.S. trade. Combined, Singapore, Chile, Australia, Morocco, Bahrain, the Central American Common Market, and the Southern African Customs Union have a total population of 157 million and annual economic output equivalent in purchasing-power parity to $1.4 trillion. As a group, the FTA countries would constitute the world's ninth largest economy. They would be America's seventh largest source of imported goods and fourth largest export market—they sold $42 billion to Americans in 2002 and bought $45 billion in American-made goods. In terms of two-way trade, the group would be America's sixth largest trading partner, behind only Canada, Mexico, Japan, China, and Germany. Eliminating barriers to trade with so many people would be a positive step for U.S. trade policy.

This paper examines the merits of negotiating free-trade agreements. It analyzes both the economic and noneconomic implications of FTAs, weighs the costs and benefits of the specific agreements put forward by the Bush administration in light of those implications, and proposes guidelines for future negotiations to maximize the benefits and minimize the costs to both the U.S. economy and our broader national interests.

On balance, the bilateral and regional agreements proposed by the Bush administration would further our national interests. If crafted properly, those agreements would strengthen the U.S. economy by injecting new import competition into domestic markets and opening markets abroad more widely to U.S. exports. More important, they would encourage economic reform abroad and cement economic and foreign policy ties between the United States and key allies.

The Peculiarities of FTAs

For anyone who supports free trade, support for free-trade agreements would at first glance seem to be automatic. Such agreements by definition lower barriers to trade between participants, and lowering or eliminating barriers altogether has been the aim of the whole trade liberalization movement. Yet regional and bilateral trade agreements raise legal and economic questions that should be addressed.

Departing from Multilateral Trade

FTAs are an exception to the basic legal principle of nondiscrimination in international trade. Article I of the basic charter of the World Trade Organization (the General Agreement on Tariffs and Trade 1947 as amended by the 1994 Uruguay Round Agreement) declares as a fundamental principle that market access should be extended to all members on a most-favored-nation, or nondiscriminatory, basis. Specifically, "any advantage, favor, privilege or immunity granted by any contracting party to any product originating in or destined for any other country shall be accorded immediately and unconditionally to the like product originating in or destined for the territories of all other contracting parties."3

Of course, FTAs explicitly deviate from that principle. They grant an advantage (lower or zero tariffs) to parties to an agreement that is not granted to other members of the WTO that are not parties to the agreement. But free-trade agreements and customs unions, when properly crafted, are consistent with GATT rules.

When the GATT was originally signed in 1947, its founding members carved out an exception for free-trade areas. Article XXIV of the GATT allows customs unions or free-trade agreements between members,4 recognizing "the desirability of increasing freedom of trade by the development, through voluntary agreements, of closer integration between the economies of the countries parties to such agreements."5 Such agreements are allowed provided they (1) do not result in higher trade barriers overall for WTO members outside the agreement,6 (2) eliminate "duties and other restrictive regulations of commerce" on "substantially all the trade between the constituent territories . . . in products originating in such territories,"7 and (3) do so "within a reasonable length of time."8 Article XXIV can be waived entirely by a two-thirds vote of WTO members.9

The most obvious exception under Article XXIV has been the European Union, which began in the 1950s as the six-member European Economic Community. Other well-known FTAs or customs unions among WTO members are the European Free Trade Association, the North American Free Trade Agreement, the Southern Common Market, the Association of Southeast Asian Nations Free Trade Area, and the Common Market of Eastern and Southern Africa.

In fact, free-trade agreements have been proliferating among WTO members. Today more than 150 such agreements are in effect, and the trend has been accelerating in the last decade. In the first 46 years of the GATT, between 1948 and 1994, 124 such agreements were signed (many of which have since expired), an average of 2.7 per year. Since 1995 the WTO has been notified of 130 such agreements, an average of more than 15 per year.10 Today an estimated 43 percent of international trade occurs under free-trade agreements, and that share would reach 55 percent if agreements currently being negotiated worldwide were to be implemented.11

Despite Article I, free-trade agreements are a legal fact of life in international trade. More and more WTO members are choosing to negotiate FTAs. The question for U.S. trade policy is whether we should join or resist the trend.

The Messy Economics of FTAs

The economics of FTAs is more ambiguous than the legalities. Even though FTAs by definition result in lower trade barriers between member countries, they do not necessarily result in economic gains for all members or the world as a whole.

Economists have been investigating this phenomenon since 1950, when Jacob Viner published his seminal study, The Customs Union Issue.12 Viner noted that customs unions can promote new trade among members, but they can also divert trade from more efficient producers outside the agreement.

If signed with a low-cost foreign producer, an agreement can result in trade creation by allowing the low-cost producer to enter the domestic market tariff free, reducing domestic prices, and displacing higher-cost domestic producers. But if signed with a relatively high-cost foreign producer, an agreement can result merely in trade diversion by allowing the higher-cost importer to displace lower-cost foreign importers simply because producers in the new FTA partner can import tariff free. As Viner concluded, customs unions are likely to yield more economic benefit than harm when "they are between sizeable countries which practice substantial protection of substantially similar [that is, competing] industries."13

To maximize trade creation, FTAs should unleash real competition in previously protected markets. From an economic perspective, the essential purpose and principal payoff of international trade is expanded competition within the domestic economy and expanded markets abroad for domestic producers. Increased import competition results in lower prices for consuming households and businesses, more product choice, higher quality, and increased innovation. By stimulating more efficient production, import competition increases the productivity of workers, real wages, living standards, and the long-run growth of the economy.

If an FTA does not result in lower prices for the importing country but merely reshuffles imports from the rest of the world to FTA partners, the importing country can suffer a welfare loss. Its government loses tariff revenue, but its consumers do not reap any gain from lower prices. In effect, the importing country's treasury subsidizes less efficient production in the partner country. If global prices outside the FTA fall because of the diverted demand, then the rest of the world loses from lost producer surplus.

To minimize trade diversion, the best FTAs allow a large and competitive foreign producer to displace domestic producers in a large and protected domestic market, thus delivering lower prices and higher real incomes to workers and families. The worst allow less competitive foreign producers to replace more competitive foreign producers in a large and protected domestic market, costing the treasury tariff revenue without delivering lower domestic prices or more efficient domestic production.

Free-trade economists argue among themselves about whether trade creation or trade diversion usually predominates under free-trade agreements. Settling that dispute definitively is beyond the scope of this paper.14 But we do know that the evidence is mixed and that the short-term, static economic impact of a free-trade agreement is only one factor in deciding whether a particular FTA meets the test of good public policy. The possibility of trade diversion is not sufficient reason to reject the Bush administration's policy of pursuing FTAs.

How FTAs Advance Trade Liberalization

Even if trade diversion occurs, free-trade agreements can advance the goals of expanding free markets, individual liberty, and more peaceful cooperation among nations. In addition to their short-term economic effects, free-trade agreements can advance American interests in several ways.

A Safety Valve for the Multilateral System

One, FTAs provide an important safety valve if multilateral negotiations become stuck—an all-too-real possibility. Multilateral negotiations through the GATT and now the WTO can be long, tortuous, and uncertain. Since the Kennedy Round concluded in 1967, only two other comprehensive multilateral agreements have been reached—the Tokyo Round Agreement in 1979 and the Uruguay Round Agreement in 1994. And because of the need for consensus, it takes only one of the 146 nations in the WTO to scuttle a new agreement.

To cite one plausible scenario, the French government could prevent completion of a Doha Round Agreement because of its long-standing objections to liberalization of agricultural trade. Negotiators have already missed a March 31, 2003, deadline for preliminary agreements on agriculture, and doubt is widespread that the round will be concluded by 2005 as agreed in the 2001 agreement that launched it. The Uruguay Round, it should be remembered, almost foundered on the subject of agriculture. Given the history of multilateral negotiations, it would be unwise to put all of our tradable eggs in the Doha Round basket.

FTAs provide institutional competition to keep multilateral talks on track. If other WTO members become intransigent, the United States should have the option of pursuing agreements with a "coalition of the willing" in pursuit of trade liberalization. Negotiating FTAs, or at least retaining the option to do so, can send a signal to other WTO members that, if they are unwilling to negotiate seriously to reduce trade barriers, we retain the right to find bilateral and regional partners who will. Knowing that WTO members, including the United States, can pursue FTAs outside the multilateral process can focus the minds and wills of negotiators to reach an agreement.

Fears that FTAs could divert attention from the multilateral track are unfounded. Most WTO members that have pursued regional and bilateral FTAs have not abandoned their commitment to multilateral negotiations. The U.S. government signed agreements with Israel, Canada, and Mexico during the Uruguay Round negotiations from 1986 to 1994 without reducing its commitment to a final multilateral agreement. And there is no evidence that pursuit of FTAs today has distracted the Bush administration from the ongoing Doha Round of WTO negotiations. Indeed, U.S. Trade Representative Robert Zoellick has been leading the charge in the Doha Round with aggressive proposals to liberalize global trade in manufactured goods, agricultural products, and services.

A Level Playing Field for U.S. Exporters

Two, FTAs can level the playing field for U.S. exporters who have been put at a disadvantage by free-trade agreements that do not include the United States. The United States is party to only 3 of the 150 or so FTAs currently in force around the world—NAFTA and bilateral agreements with Israel and Jordan. Even though American producers may be the most efficient in the world in a certain sector, our exporters may not be able to overcome the advantage of rival foreign producers who can export tariff free to countries with which their governments have signed an FTA.

In Chile, for example, U.S. exporters encounter a uniform 6 percent tariff. Competing exporters in the European Union, Canada, and Brazil, in contrast, sell duty-free in the same market because their governments have signed free-trade agreements with Chile. According to the National Association of Manufacturers, U.S. exporters have lost market share in Chile since its government began to aggressively pursue free-trade agreements with its non-U.S. trading partners in 1997. Especially hard-hit by the tariff differential have been U.S. exports to Chile of wheat, soybeans, corn, paper products, plastics, fertilizers, paints and dyes, and heating and construction equipment.15 All those sectors have seen their market share drop significantly in the absence of a U.S.-Chile free-trade agreement.

Institutionalizing Reforms Abroad

Three, FTAs can help less-developed countries lock in and institutionalize ongoing economic reforms. A signed agreement prevents nations from backsliding in times of economic or political duress. Agreements assure foreign investors that reforms mark a permanent commitment to liberalization. For example, when Mexico suffered its peso crisis in 1994–95, its NAFTA commitments kept its market open to U.S. exports. The assurance of an FTA also works the other way, guaranteeing that exporters in the partner country will enjoy duty-free access to the large American market. By signing an FTA with the United States, less-developed countries signal to the rest of the world that they are serious about embracing global competition. That signal, combined with access to the U.S. market, can help to attract foreign investment and spur faster development.

Blazing a Trail for Broader Negotiations

Four, FTAs can provide useful templates for broader negotiations. As the members of the WTO grow in number and diversity, reaching consensus among all 146 members becomes more difficult. Negotiators can be forced to consider only the lowest common denominator acceptable to all members. Negotiating with only one country or a small group of like-minded countries can allow more meaningful liberalization in areas such as sanitary and phytosanitary (i.e., animal and plant) regulations, technical barriers to trade, service trade and investment, electronic commerce, customs facilitation, labor and environmental standards, dispute settlement, and market access for politically sensitive sectors.

Those agreements, in turn, can blaze a trail for wider regional and multilateral negotiations. The U.S.-Chile FTA provides an example of how to incorporate labor and environmental standards into the text of an agreement without threatening to hold trade hostage to rich-country demands for higher standards in less-developed countries. FTAs can provide creative solutions to sticky political problems that can then be adapted in other agreements.

Internal Competition and Integration

Five, FTAs can spur internal reform and consolidation within member states, enhancing economic growth and support for more liberalization. By encouraging regional integration, FTAs hasten the consolidation of production within the FTA, increase economies of scale, and create a more integrated production process. Consolidation may be most pronounced in more heavily protected service sectors such as telecommunications, financial services, and transportation. More efficient industries and infrastructure can yield dynamic gains year after year, boosting growth, investment, and demand for imports from FTA partners as well as the rest of the world.

For all those reasons, the Bush administration's agenda of negotiating free-trade agreements is worth pursuing. Under the right conditions, FTAs can inject new competition into our domestic economy, lowering prices for consumers and shifting factors of production to more efficient uses, while leveling the playing field for U.S. exporters. Beyond those immediate benefits, FTAs can provide institutional competition for multilateral talks, spurring integration among FTA countries and liberalization abroad and blazing a trail through difficult areas for broader negotiations in the future. As a foreign policy tool, FTAs can cement ties with allies and encourage countries to stay on the trail of political and economic reform.

The agreements are now awaiting action by Congress. Both are comprehensive, covering not only market access for goods, including farm products, but also services, investment, intellectual property, and nontariff barriers. Neither agreement excludes any goods from free trade, although they provide phase-in periods of as long as 12 years for certain politically sensitive products.

What follows is a brief survey of the major strengths and weaknesses of the two completed agreements now awaiting congressional approval.16

The U.S.-Chile FTA

The United States and Chile have been discussing a free-trade agreement for almost a decade. Of all the countries of Latin America, Chile has traveled the furthest along the road of economic reform. Since the 1970s its government has liberalized trade and foreign investment, cut taxes and regulations, and privatized its pension system. According to the Economic Freedom of the World survey, Chile's economy is one of the freest in the world, ranking 15th out of 123 economies rated.17

Chileans have reaped the rewards. Until the recent global slowdown, economic growth in Chile had averaged more than 8 percent a year since the late 1980s.18 The share of people living in poverty has been cut in half.19 Chile is the only South American country that has earned an investment grade on its government bonds. As other Latin American countries have suffered through political and economic turmoil, Chile has been a model of stability. A free-trade agreement with Chile would, among other benefits, recognize Chile's commitment to free trade and free markets.

The strength of the U.S.-Chile FTA is its comprehensiveness. No sector is excluded from liberalization. The Office of the U.S. Trade
Representative notes that 87 percent of two- way trade in goods will be tariff free upon enactment of the agreement, with most of the On the Docket: Singapore remaining tariffs and quotas eliminated after and Chile four years. 20 The qualifier is that liberalization for the sectors most likely to provide the most On May 6, 2003, the Bush administration vigorous import competition—and hence pro- signed a free-trade agreement with the govern- voke the most political reaction—has been ment of Singapore, and on June 6 it signed an postponed for up to 12 years. FTA with the government of Chile. Both agree- The full economic benefits of the U.S.-Chile 6 FTA will be delayed by the relatively slow phaseout of our most damaging trade barriers. Chile For example, the amount of sugar that can be Population (2002): imported from Chile is set at a small 2,000 met- 15.5 million ric tons a year and is allowed to grow by only 5 Economy (2001, purchasing power parity): percent per year during the extended phaseout $153 billion GDP period. Quotas on imports of highly protected $10,000 GDP per capita commodities such as butter, milk powder, and Economic Freedom (1999): cheese are maintained for 7 to 12 years. Beef 8.0 (out of 10.0), ranks 15th quotas are maintained for 3 years.21 U.S. Services Imports from Chile (2001): Quotas on avocados, one for imports from $840 million U.S. Services Exports to Chile (2001): January through September, another for imports $1.31 billion from October through December, expand 5 per- U.S. Goods Imports from Chile (2002): cent per year before expiring after 12 years. $3.78 billion Poultry, tires, copper, and “hotel and restaurant Top Imports from Chile (2002): china” will all be subject to tariffs for up to 10 Fruits, frozen juices ($828 million) years after enactment of the agreement. 
No cuts Copper ($723 million) will be made in tariffs on imported wine from Fish/shellfish ($508 million) Chile for the first 7 years of the agreement and Shingles/wallboard ($341 million) will be eliminated entirely only after 12 years. So Lumber ($172 million) any trade advocates who want to toast enactment Chemicals–organic ($139 million) of the agreement with imported Chilean wine Wine ($138 million) will still need to pay a tariff for the experience. U.S. Goods Exports to Chile (2002): $2.61 billion From the mercantilist point of view, accord- Top Exports to Chile (2002): ing to which imports are the price a nation pays Computer accessories ($222 million) for the privilege of exporting, postponing the Excavating machinery ($171 million) liberalization of more competitive imports Industrial engines ($97 million) such as wine will be touted as a selling point of Computers ($95 million) the agreement. But in terms of America’s Other household goods ($88 million) national welfare, postponement of liberaliza- Materials handling equipment ($81 million) tion only delays the economic payoff of the Chemicals–other ($74 million) agreement. Those delays increase the odds at U.S.-Owned FDI in Chile (end of 2001): least in the short run that trade diversion will $11.7 billion predominate over trade creation. Sales by U.S.-Owned Affiliates in Chile (2000): On services, the U.S.-Chile FTA is far-reach- $3.1 billion ing and breaks new ground. In general, the FTA guarantees Americans the right to sell services across the border and to establish, acquire, and ing, adult education, and professional services. operate investments in Chile on an equal footing The Coalition of Service Industries, the main with domestic and other foreign investors, while American lobbying group for trade liberaliza- extending reciprocal rights to Chilean service tion in services, pronounced the agreement a providers. 
The agreement wisely incorporates a “milestone” that will set a high standard for “negative list” approach: all sectors are liberalized future trade agreements that seek to liberalize unless specifically excluded. what remains a highly protected and regulated Specifically, the agreement will fully open the international services sector. According to CSI, Chilean market to such competitive U.S. service the U.S.-Chile FTA is “the first trade agreement sectors as tourism, advertising, computers and that has ever committed another country to telecommunications, construction and engi- apply the same high standards of regulatory neering, express delivery, distribution and retail- transparency that we enjoy in the U.S.”22 7 On the related matter of capital controls, the specific level of environmental and labor regulations agreement affirms and protects the right of foreign that would be inappropriate for a country at Chile’s investors to enjoy the same rights as domestic com- stage of development. It does require that existing panies. Investors are guaranteed the right, among regulations be enforced and not “weakened” to gain others, to repatriate profits and capital and to do so an alleged advantage in promoting exports or “in a freely usable currency at the market rate of attracting investment. Second, the rules prohibit a exchange prevailing on the date of transfer.”23 sustained pattern of violations, not isolated inci- In a problematic clause, the agreement does dents. Third, the dispute settlement provisions grant the Chilean government the right to emphasize consultation over litigation and mone- impose capital controls on short-term flows tary fines over trade sanctions. The advantage of under certain conditions. Investors who lose fines is that they do not undermine the central pur- money if their funds are “substantially impeded” pose of the agreement, which is wealth creation would be able to use the dispute settlement through expanding trade. 
mechanism to recover damages after a cooling- off period of 6 to 12 months.24 Ideally, capital The U.S.-Singapore FTA should be as free to flow across international Tiny Singapore, population 4.5 million, borders as goods or services,25 but even with this boxes far above its weight in the global economy. exception, the agreement grants stronger trans- The island entrepôt ranks 16th in the world as a fer rights to investors than are currently granted source of imports to the United States and 11th under the International Monetary Fund as a destination for U.S. exports. It is also one of Articles of Agreement, the General Agreement the top destinations for U.S. direct foreign on Trade in Services, and the General investment abroad. The Singapore economy is Agreement on Tariffs and Trade. one of the freest and most open in the world, Another problematic section allows the impo- which largely explains why its citizens enjoy a sition of fines and other punishments if either first-world standard of living. party fails to adequately enforce labor and envi- Singapore is the leading free-trade nation in Singapore is the ronmental standards. The agreement asserts, “A its region. An FTA with Singapore would Party shall not fail to effectively enforce its labor strengthen America’s economic ties to Southeast leading free-trade laws, through a sustained or recurring course of Asia and create opportunities for U.S. investors nation in its region. action or inaction, in a manner affecting trade and exporters in a populous and economically between the Parties, after the date of entry into promising region of the world. Southeast Asia is An FTA would force of this Agreement.” The agreement contin- especially strategic because it is home to 20 per- strengthen ues: “The Parties recognize that it is inappropri- cent of the world’s Muslims. 
The government of America’s economic ate to encourage trade or investment by weaken- Singapore has been a steadfast ally of the United ing or reducing the protections afforded in States in the war against international terrorism. ties to Southeast domestic labor laws.”26 (The chapter on the envi- The U.S.-Singapore Free Trade Agreement Asia and create ronment contains almost identical language.)27 commits both nations to comprehensive liber- Failure to comply can result in monetary fines alization of bilateral trade in goods and services opportunities for and, eventually, trade sanctions. and guarantees protection of foreign invest- U.S. investors and Supporters of free trade rightly worry that ment. Even though tariffs on most trade exporters in a enforcing labor and environmental standards between the United States and Singapore are through trade agreements could create a lofty- already low or have been eliminated, the agree- populous and sounding pretext for protectionism. And by raising ment will guarantee duty-free access. economically trade barriers as a method of enforcement, such With a few of the usual exceptions (beef, promising region of arrangements can actually undermine the growth dairy products, and sugar among them), the and development that make higher standards pos- United States commits itself upon enactment the world. sible.28 But the U.S.-Chile FTA contains several of the agreement to reduce tariffs on goods layers of protection to guard against abuse of those imported from Singapore to zero. Tariffs on provisions. 
First, the agreement does not require a products exempted from immediate liberaliza- 8 tion will be phased out within 10 years.29 Singapore’s commitment on goods liberaliza- Singapore tion is a model of elegance and simplicity: The Population (2002): few categories of goods subject to duties “will 4.5 million be duty free on the date this Agreement enters Economy (2001, purchasing power parity): into force.” There are no exceptions.30 $106 billion GDP The agreement guarantees market access for a $24,700 GDP per capita broad range of service sectors, whether the service Economic Freedom (1999): is delivered across the border or by investing in a 9.3 (out of 10.0), ranks 2nd local presence. Like the U.S.-Chile FTA, the U.S. Services Imports from Singapore (2001): agreement follows the “negative list” approach of $2.01 billion U.S. Services Exports to Singapore (2001): liberalizing all sectors except an explicit few. $4.08 billion Among the more competitive U.S. service sectors U.S. Goods Imports from Singapore (2002): that will enjoy nondiscriminatory treatment in $14.8 billion Singapore under the agreement are tourism, Top Imports from Singapore (2002): express delivery, telecommunications, advertising, Computer accessories ($7.30 billion) construction and engineering, and financial ser- Semiconductors ($1.27 billion) vices, including banking and insurance. Pharmaceutical preparations ($1.21 billion) Specifically, Americans will be allowed to own U.S. goods returned ($881 million) and operate full-service banks in Singapore with- Telecommunications equipment ($429 million) in 18 months of enactment, and unlimited Medicinal equipment ($408 million) branches and automatic teller machines within Chemicals–organic ($357 million) two years. U.S. insurance firms will be able to U.S. Goods Exports to Singapore (2002): $16.2 billion offer full services, including the supply of insur- Top Exports to Singapore (2002): ance across the border from the United States. 
31 Semiconductors ($2.18 billion) The agreement guarantees the right of cross- Civilian aircraft ($2.15 billion) border investors to enjoy secure property rights Computer accessories ($1.32 billion) and nondiscriminatory treatment. The agreement Industrial machines, other ($765 million) removes certain performance-related restrictions Electric apparatus ($670 million) on foreign investment and guarantees the right to Engines–civilian aircraft ($590 million) repatriate capital and profits. Like the agreement Fuel oil ($570 million) with Chili, it acknowledges that capital controls Measuring, testing, and control instruments can be imposed under certain circumstances but ($534 million) also requires that compensation be paid to injured U.S.-Owned FDI in Singapore (end of 2001): parties if the controls “substantially impede trans- $27.3 billion Sales by U.S.-Owned Affiliates in Singapore (2000): fers” of funds or remain in place for more than $5.4 billion 364 days. 32 As are those of the FTA with Chile, such provisions are less than ideal, but they do not undermine what is in fact an almost total liberal- the agreement.33 This is not a loophole but recog- ization of capital flows. nition that producers in Singapore have already One innovative feature of the agreement is a integrated suppliers in Indonesia into their pro- provision allowing producers in Singapore to duction process. The goods covered by the ISI source more labor-intensive work to special zones already enter the United States duty-free even in neighboring Indonesia. That provision, called without the U.S.-Singapore FTA, so no new trade the Integrated Sourcing Initiative, allows certain privileges are being granted to Indonesia. 
finished products and intermediate inputs pro- Allowing Indonesia to contribute to the produc- duced in the Indonesian islands of Bintan and tion process will help to raise worker productivity Batam to be exported to the United States as if and overall living standards in that country as well they were of Singaporean origin for benefits under as in Singapore and the United States. 9 The U.S. govern- competition into the U.S. domestic market, creat- ment should resist Next in Line: Australia, ing rather than merely diverting trade and deliv- domestic political Morocco, Central America, ering the competition and lower prices that are the principal payoff of trade. pressure to exclude and Southern Africa Investment and services will be an important or delay liberal- part of an FTA with Australia. Like Singapore, In line behind the two completed agreements Australia is not only a major trading partner but ization of Aus- are negotiations with a dozen other potential FTA also a major investment partner of the United tralian imports that partners. The Bush administration has begun States. At the end of 2001 U.S. companies owned negotiations with Australia, Morocco, Bahrain, $34 billion in direct investment in Australia35 and are most competi- the five members of the Central American U.S.-owned affiliates sold $15 billion of services in tive in the U.S. Common Market, and the five members of the Australia in 2000.36 An FTA would more secure- market. Southern African Customs Union. What follows ly protect the rights of U.S. investors to establish a is a brief analysis of the major opportunities and commercial presence in Australia. challenges that are likely to arise during the nego- In negotiating an FTA with Australia, the tiation of the individual agreements.34 U.S. government should resist domestic politi- cal pressure to exclude or delay liberalization of Australia Australian imports that are most competitive Of all the free-trade agreements in the in the U.S. 
market and hence most politically pipeline, the one being negotiated with Australia sensitive. The Australian government, for its is arguably the most economically meaningful. part, should accommodate legitimate concerns Australia is a major producer of products that are about its sanitary and phytosanitary regime on protected in the U.S. market, such as beef, sugar, agricultural imports and its government-run dairy products, and wine. Enactment of a com- Australian Wheat Board, which acts as a cen- prehensive FTA with Australia would inject real tral market for Australian wheat. Australia Population (2002): Passenger cars, new, used ($311 million) 19.5 million Steel-making materials ($217 million) Economy (2001, purchasing power parity): U.S. Goods Exports to Australia (2002): $466 billion GDP $13.08 billion $24,000 GDP per capita Top Exports to Australia (2002): Economic Freedom (1999): Civilian aircraft ($2.66 billion) 8.5 (out of 10.0), ranks 6th Computer accessories ($464 million) U.S. Services Imports from Australia Pharmaceutical preparations (2001): ($421 million) $3.50 billion Telecommunications equipment U.S. Services Exports to Australia (2001): ($401 million) $4.70 billion Automotive parts and accessories U.S. Goods Imports from Australia (2002): ($400 million) $6.48 billion Excavating machinery ($310 million) Top Imports from Australia (2002): Industrial machines, other ($303 Meat products ($1.08 billion) million) Wine and related products ($461 U.S.-Owned FDI in Australia (end of million) 2001): U.S. 
goods returned ($445 million) $34.0 billion Crude oil ($433 million) Sales by U.S.-Owned Affiliates in Australia Chemicals–inorganic ($322 (2000): million) $14.9 billion 10 Central American Common Market A free-trade agreement with the five mem- Central American Common Market bers of the Central American Common Members: Market would be a logical geographical exten- Costa Rica, El Salvador, Guatemala, Hondu- sion of the already established North American ras, Nicaragua Free Trade Agreement. It would open protect- Population (2002): ed markets to more vigorous competition and 35.1 million encourage economic reform and stability in Economy (2001, purchasing power parity): what has been an especially troubled region of $138 billion GDP the Western Hemisphere. $4,000 GDP per capita The combined economic output of CACM Economic Freedom (1999): members is small, less than one-third the size of Costa Rica: 7.8 (out of 10.0), ranks 24th El Salvador: 7.9 (out of 10.0), ranks 20th Australia’s GDP. But their two-way trade with Guatemala: 6.7 (out of 10.0), ranks 56th the United States is relatively large because of Honduras: 6.6 (out of 10.0), ranks 60th proximity and their comparative advantage in Nicaragua: 7.5 (out of 10.0), ranks 34th products popular in the U.S. market, such as U.S. Services Imports from CACM (2001): apparel, semiconductors, bananas, and coffee. N/A An FTA would guarantee access for U.S. Services Exports to CACM (2001): Central American producers to the otherwise N/A heavily protected U.S. market for imported U.S. Goods Imports from CACM (2002): apparel and textiles. 
Producers in the five $11.9 billion member countries already send more than $7 Top Imports from CACM (2002): billion in cotton clothing and textile imports to Apparel, household goods–cotton ($4.95 billion) the United States.37 Granting those imports Apparel, textiles, nonwool or cotton ($1.93 billion) Fruits, frozen juices ($1.05 billion) permanent duty-free access will be politically Semiconductors ($449 million) sensitive in the United States, most predictably Green coffee ($373 million) with the protectionist textile lobby. Medicinal equipment ($340 million) Rejecting the agreement because of textile Fish and shellfish ($324 million) and apparel imports would not serve our U.S. Goods Exports to CACM (2002): national interest and would be shortsighted for $9.84 billion the textile industry itself. Textiles are one of the Top Exports to CACM (2002): major exports from the United States to Apparel, household goods–textile ($1.61 billion) Central America. In 2002 the top U.S. export Semiconductors ($848 million) category to the CACM countries was textiles Cotton fiber cloth ($820 million) for household apparel, and four of the top Finished textile supplies ($352 million) seven export categories were textiles or cloth.38 Plastic materials ($343 million) Newsprint ($334 million) By opening our market to finished apparel Manmade cloth ($331 million) from Central America, we would likely be U.S.-Owned FDI in CACM (end of 2001): encouraging the export of textiles and other N/A semifinished inputs from the United States. Sales by U.S.-Owned Affiliates in CACM (2000): Because of the region’s widespread poverty, N/A opponents of the FTA will argue that trade will encourage “sweatshops” and will not adequately protect environmental and labor standards. Such with good benefits and working conditions. By arguments ignore ample evidence that trade and refusing to grant market access because of lower development make higher standards possible. 
environmental and labor standards, the United What is considered a “sweatshop” by American States would be depriving poor workers abroad standards can, to workers in less-developed of one of the most powerful engines for raising countries, represent relatively good-paying jobs their overall living standards.39 11 An FTA with An FTA with Central American countries agreement should rapidly eliminate tariffs and Morocco could would advance important U.S. foreign policy quotas on each country’s goods that are most goals. During the 1980s the region was torn by competitive in the other’s market. Moroccan help to build a core war and civil strife stoked in part by communist exports in that category would be fruits such as of Arab nations insurgents. Today all five members are fledgling clementines and textile and apparel goods. democracies that have expanded the economic A trade agreement with Morocco could that would support freedom of their citizens as they have expand- duplicate the success of the U.S.-Jordan Free trade liberalization ed their political and civil freedoms. A compre- Trade Agreement enacted in 2001. That FTA and economic hensive free-trade agreement with the United has had a measurable impact on the ability of States would recognize and reward that Jordanian producers to sell in the U.S. market: integration with the progress. It would help those nations to reduce Jordan’s exports to the United States jumped global economy— poverty and strengthen the foundations of free- more than 10-fold from $31 million in 1999 to and eventually dom and representative government. $412 million in 2002.40 By far the largest cate- gory of exports has been textile and apparel create more favor- Morocco goods, a labor-intensive and thus competitive able conditions for Morocco is not a major U.S. 
trading partner, industry for many developing countries.41 but it is considered a moderate and pro-reform The FTA has also helped to transform Jordan political and civil regime within the Arab world. Like the U.S.- into a proponent of free trade and globalization freedom in the Jordan FTA enacted in 2001, an FTA with among Arab countries. An FTA with Morocco Muslim world. Morocco would be more of a foreign policy than could help to build a core of Arab nations that an economic initiative for the United States. would support trade liberalization and economic Because Morocco’s economy is relatively integration with the global economy—and even- small, the economic impact of the agreement tually create more favorable conditions for politi- will be much greater on the North African cal and civil freedom in the Muslim world. nation than on the United States. To extend After Morocco, the two most likely prospects the maximum benefit to both countries, any in the Middle East for free-trade agreements are Morocco Population (2002): ($35million) 31.1 million Apparel, textiles, nonwool or cotton Economy (2001, purchasing power parity): ($34 million) $112 billion GDP Vegetables ($30 million) $3,700 GDP per capita Fish and shellfish ($18 million) Economic Freedom (1999): Fruits, frozen juices ($13 million) 6.2 (out of 10.0), ranks 72nd U.S. Goods Exports to Morocco (2002): U.S. Services Imports from Morocco $565 million (2001): Top Exports to Morocco (2002): N/A Civilian aircraft ($281 million) U.S. Services Exports to Morocco Corn ($39 million) (2001): Soybeans ($35 million) N/A Oilseeds, food oils ($16 million) U.S. 
Goods Imports from Morocco Wheat ($11 million) (2002): Animal feeds ($10 million) $392 million Engines–civilian aircraft ($10 million) Top Imports from Morocco (2002): U.S.-Owned FDI in Morocco (end of 2001): Semiconductors ($96 million) N/A Sulfur and nonmetallic minerals Sales by U.S.-Owned Affiliates in Morocco ($85 million) (2000): Apparel, household goods–cotton N/A 12 Bahrain and Egypt. Bahrain, although small in size and population, has become a key financial Southern African Customs Union and petroleum-processing center in the Persian Members: Gulf. Economic reforms have created the most Botswana, Lesotho, Namibia, South Africa, Swazi- liberalized economy in the Arab world (ranking land 28th in economic freedom out of 123 countries Population (2002): worldwide). Compared to Bahrain, Egypt is 50.3 million much larger in population (71 million vs. Economy (2001, purchasing power parity): 656,000) and in economic output ($258 billion $442 billion GDP vs. $8.4 billion), but its economy is less liberalized $8,800 GDP per capita (ranking 52nd in economic freedom). Economic Freedom (1999): Botswana: 6.9 (out of 10.0), ranks 50th Namibia: 6.9 (out of 10.0), ranks 50th Southern African Customs Union South Africa: 7.0 (out of 10.0), ranks 46th The countries of the Southern African U.S. Services Imports from South Africa (2001): Customs Union are a rare success story on a $891 million continent where economic stagnation and U.S. Services Exports to South Africa (2001): political upheaval are the norm. Per capita $1.22 billion GDP within the union is far higher than in the U.S. Goods Imports from SACU (2002): rest of sub-Saharan Africa, and democracy has $4.55 billion gained a more secure foothold. Top Imports from SACU (2002): The powerhouse within the SACU is South Precious metals ($1.18 billion) Africa, which accounts for more than 90 percent of Gem diamonds ($504 million) the union’s GDP and trade with the United States. 
Apparel, household goods–cotton ($482 million)
Passenger cars, new, used ($267 million)
Steel-making materials ($260 million)
Nonferrous metals, other ($218 million)
Automotive parts, accessories ($148 million)

U.S. Goods Exports to SACU (2002): $2.63 billion

Top Exports to SACU (2002):
Civilian aircraft ($250 million)
Passenger cars, new and used ($135 million)
Chemicals–organic ($111 million)
Petroleum products, other ($105 million)
Excavating machinery ($88 million)
Chemicals–other ($72 million)
Plastic materials ($70 million)

U.S.-Owned FDI in South Africa (end of 2001): $3.0 billion
Sales by U.S.-Owned Affiliates in South Africa (2000): N/A

An FTA with SACU members would fully open this important African market to U.S. exports of goods and services. It would safeguard U.S. direct investments, which are already a considerable $3 billion in South Africa.42 Total two-way trade in goods and services between the United States and SACU members in 2002 was about $9 billion, comparable to U.S. trade with Chile.43

Just as important, an FTA would guarantee SACU producers duty-free access to the U.S. market. One of the many hurdles that have confronted Africans is the generally high level of trade barriers in rich countries to those products—namely textiles, apparel, and agricultural products—that Africans are comparatively efficient at producing.

The African Growth and Opportunity Act passed by Congress in 2000 took a big step toward reducing those barriers by allowing a number of products from more than 30 African nations to enter the United States duty-free. The early results have been promising. For example, textile and apparel exports to the United States from SACU-member Lesotho tripled from $100 million in 1998 to $321 million in 2002.44 A U.S.-SACU FTA would build on the success of AGOA by making that access permanent, by extending free trade to almost all products and services, and by requiring a reciprocal commitment to liberalization from our African trading partners.

Like the proposed FTA with Morocco, an FTA with the SACU countries would have a modest positive impact on the U.S. economy. The real strength of the agreement would be as a foreign policy initiative that would encourage economic reform and institutional development in a region of the world where both have been rare and fragile.

Conclusion

As a tool for expanding freedom and prosperity, regional and bilateral free-trade agreements are useful if less than ideal. They complicate the international trading system by deviating from the most-favored-nation principle of nondiscrimination, and they can blunt the benefits of international trade by diverting it from the most efficient foreign producers to those that are favored but less efficient. But FTAs can produce compensating benefits by opening domestic markets to fresh competition, encouraging economic liberalization abroad, cementing important foreign policy and security ties, integrating regional economies, opening markets to U.S. exports, and providing healthy institutional competition for multilateral negotiations.

To maximize the economic benefits of free-trade agreements, the U.S. government should focus its efforts on negotiations with countries that provide new opportunities for U.S. exporters and whose producers would be most likely to enhance competition in our own market. That approach requires that U.S. negotiators not duck politically sensitive sectors through long phase-in periods for or exemptions from liberalization. Instead, they should tout the immediate liberalization of those sectors as offering the best opportunities to reap the benefits of trade.

As a broader foreign policy tool, free-trade agreements should reward and solidify market and political reform abroad. If FTA partners are not major export markets or significant producers of goods that compete in our domestic market, they should be moving decisively toward free markets and representative government. They should be reform leaders in regions of the world where models of successful reform are most needed. In this way, free-trade agreements can serve as carrots to encourage the spread of political and economic freedom abroad.

Judged by those criteria, the FTAs proposed by the Bush administration deserve to be pursued. Australia and Singapore both meet the first criterion. Australia and Singapore are, respectively, major suppliers of agricultural and manufactured goods that, under free trade, would provide real competition in our domestic economy and large markets for U.S. exporters. Chile, Morocco, Bahrain, and the five SACU nations, while small in market size, all serve to one degree or another as examples of economic and political reform in their regions. A Central American FTA would be a "twofer," permanently opening the overly protected U.S. market to imported apparel and other labor-intensive manufactured goods, while further institutionalizing the historic market and political reforms taking root in Central America.

Despite their peculiarities and incremental nature, free-trade agreements can serve the cause of freedom and development by breaking down barriers to trade between nations. If crafted according to sound principles, free-trade agreements can serve America's economic and foreign policy interests.

Notes

1. White House, Office of the Press Secretary, "President Bush Presses for Peace in the Middle East," Remarks by the president in commencement address at the University of South Carolina, May 9, 2003, 2003/05/iraq/20030509-11.html.
2. For articles and studies on those more general trade issues, see previous materials published by the Cato Institute's Center for Trade Policy Studies.
3. General Agreement on Tariffs and Trade 1947, Part I, Article I, Section 1, glish/docs_e/legal_e/gatt47_01_e.htm.
4. Members of a customs union adopt a common external trade policy with uniform tariffs applying to imports of all members. Members of a free-trade agreement retain independent external trade policies while eliminating barriers among themselves.
5. General Agreement on Tariffs and Trade 1947, Part III, Article XXIV, Section 4.
6. Ibid., Part III, Article XXIV, Section 5 (a) and (b).
7. Ibid., Part III, Article XXIV, Section 8(a)i.
8. Ibid., Part III, Article XXIV, Section 5(c).
9. Ibid., Part III, Article XXIV, Section 10.
10. World Trade Organization, "Regional Trade Agreements: Facts and Figures," lish/tratop_e/region_e/regfac_e.htm.
11. Organization for Economic Cooperation and Development, "Regional Trade Agreements and the Multilateral Trading System," November 20, 2002, p. 12.
12. Jacob Viner, The Customs Union Issue (New York: Carnegie Endowment for International Peace, 1950).
13. Ibid., p. 135.
14. For a favorable assessment of free-trade agreements, see Robert Z. Lawrence, "Emerging Regional Arrangements: Building Blocks or Stumbling Blocks?" in International Political Economy: Perspectives on Global Power and Wealth, ed. Jeffry A. Frieden and David A. Lake, 3d ed. (New York: Routledge, 1995), pp. 407–15; and Lawrence H. Summers, "Regionalism and the World Trading System," in Trading Blocs: Alternative Approaches to Analyzing Preferential Trade Agreements, ed. Jagdish Bhagwati, Pravin Krishna, and Arvind Panagariya (Cambridge, Mass.: MIT Press, 1999), pp. 561–66. For a negative assessment, see Jagdish Bhagwati and Arvind Panagariya, The Economics of Preferential Trade Agreements (Washington: AEI Press, 1996), especially pp. 1–78.
15. National Association of Manufacturers, "Absence of Chilean Trade Agreement Costing U.S. over $1 Billion per Year," Washington, February 4, 2003, EOFCHILEANTRADEAGREEMENT.pdf.
16. Material in boxes is from U.S. Central Intelligence Agency, 2003 Factbook; James Gwartney and Robert Lawson, Economic Freedom of the World: Annual Report 2001 (Vancouver, B.C.: Fraser Institute, 2001); and U.S. Department of Commerce.
17. Gwartney and Lawson, p. 144.
18. International Monetary Fund, International Financial Statistics Yearbook 2001 (Washington: IMF, 2001), p. 169.
19. Andres Oppenheimer, "Chile Looks to Become the 'Ireland of the Americas,'" Miami Herald, December 22, 2002.
20. Office of U.S. Trade Representative (USTR), 2003 Trade Policy Agenda and 2002 Annual Report of the President of the United States on the Trade Agreements Program, March 1, 2003, p. 137.
21. USTR, "Chile Free Trade Agreement: Final Texts," Released April 3, 2003, Section 3, U.S. Headnotes.
22. Coalition of Service Industries, "Service Industries Coalition Calls Chile US Trade Agreement a 'Milestone,' Very Good Deal for Both Countries," Press release, December 11, 2002.
23. USTR, "Chile Free Trade Agreement: Final Texts," Article 10.8 (3).
24. Ibid., Chapter 10, Annex 10-C 1.(e).
25. Robert Krol, "The Case for Open Capital Markets," Cato Institute Trade Briefing Paper no. 11, March 15, 2001.
26. USTR, "Chile Free Trade Agreement," Article 18.2.
27. Ibid., Article 19.2.
28. For an analysis of the link between trade and labor and environmental standards, see Daniel T. Griswold, "Trade, Labor, and the Environment: How Blue and Green Sanctions Undermine Higher Standards," Cato Institute Trade Policy Analysis no. 18, August 2, 2001.
29. USTR, "Singapore Free Trade Agreement: Final Text of Free Trade Agreement," Annex 2B: General Notes.
30. Ibid., Annex 2C 3.
31. Ibid., Annex 3B.
32. U.S. Department of the Treasury, "Fact Sheet: Agreement on US-Singapore Free Transfers," January 15, 2003, reports/kd37661.doc.
33. USTR, "Singapore Free Trade Agreement," Annex 3B.
34. Material in boxes is from U.S. Central Intelligence Agency; Gwartney and Lawson; and U.S. Department of Commerce.
35. Elena L. Nguyen, "The International Investment Position of the United States at Yearend 2001," Survey of Current Business, July 2002, Table 2.2, p. 33.
36. Maria Borga and Michael Mann, "U.S. International Services: Cross-Border Trade in 2001 and Sales through Affiliates in 2000," Survey of Current Business, October 2002, Table 8, p. 120.
37. U.S. Bureau of the Census, "End-Use: 5-Years of Imports and Exports by 5-Digit End-Use Code," product/enduse/index.html, accessed May 20, 2003.
38. Ibid.
39. See Griswold, pp. 7–8.
40. U.S. Bureau of the Census, "Trade Balance by Country," country/index.html, accessed May 20, 2003.
41. U.S. Bureau of the Census, "End-Use."
42. Nguyen, Table 2.2, p. 33.
43. Borga and Mann; and U.S. Bureau of the Census, "FT900—U.S. International Trade in Goods and Services," December 2002, Supplemental Exhibit 6, PressRelease/2002pr/12/exh6s.pdf.
44. U.S. Bureau of the Census, "End Use."

Board of Advisers, Center for Trade Policy Studies: James K. Glassman, American Enterprise Institute; George P. Shultz, Hoover Institution; Walter B. Wriston, Former Chairman and CEO, Citicorp/Citibank; Clayton Yeutter, Former U.S. Trade Representative.

Other Trade Studies from the Cato Institute:
"Ending the 'Chicken War': The Case for Abolishing the 25 Percent Truck Tariff" by Dan Ikenson, Trade Briefing Paper no. 17 (June 18, 2003)
"Grounds for Complaint? Understanding the 'Coffee Crisis'" by Brink Lindsey, Trade Briefing Paper no. 16 (May 6, 2003)
"Whither the WTO? A Progress Report on the Doha Round" by Razeen Sally, Trade Policy Analysis no. 23 (March 3, 2003)
"Free Trade, Free Markets: Rating the 107th Congress" by Daniel T. Griswold, Trade Policy Analysis no. 22 (January 30, 2003)
"Reforming the Antidumping Agreement: A Road Map for WTO Negotiations" by Brink Lindsey and Dan Ikenson, Trade Policy Analysis no. 21 (December 11, 2002)
"Antidumping 101: The Devilish Details of 'Unfair Trade' Law" by Brink Lindsey and Dan Ikenson, Trade Policy Analysis no. 20 (November 21, 2002)
"Willing Workers: Fixing the Problem of Illegal Mexican Migration to the United States" by Daniel T. Griswold, Trade Policy Analysis no. 19 (October 15, 2002)
https://www.scribd.com/document/21069139/Free-Trade-Agreements-Steppingstones-to-a-More-Open-World-Cato-Trade-Briefing-Paper-No-18
Don Hite has a VBS script to change a remote SMS client's cache size, but I wanted to do it with PowerShell:

$Cache = Get-WmiObject -Namespace root\ccm\SoftMgmtAgent -Class CacheConfig
$Cache.Size = 10240
$Cache.InUse = "True"
$Cache.Put()
Restart-Service ccmexec

I added line 3 after trying the size change and finding that the bottom of the Advanced tab of the ConfigMgr control panel was greyed out (on a Windows Vista SP1 client).

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples is subject to the terms specified in the Terms of Use.

How do I check the cache size for a specific machine collection?
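In reply to the comment above: one way to check (rather than set) the cache size across several machines is to run the same WMI query remotely. This is an untested sketch; the computer names are placeholders, and it assumes WMI remoting is enabled and you have admin rights on the targets. To drive it from an actual ConfigMgr collection, you would first export the collection's members to a list of names.

```powershell
# Hypothetical machine names; replace with the members of your collection.
$computers = @("PC001", "PC002")

foreach ($computer in $computers) {
    # Same class the post uses locally, queried on the remote machine.
    $cache = Get-WmiObject -ComputerName $computer `
        -Namespace root\ccm\SoftMgmtAgent -Class CacheConfig
    "{0}: {1} MB" -f $computer, $cache.Size
}
```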
https://blogs.technet.microsoft.com/aaronczechowski/2009/02/12/powershell-to-change-configmgr-client-cache-size/
Hi all. I was just curious: is there any resource online that contains a complete PyGimp API spec? That is, a resource that shows the syntax, parameters, and return values for each of the functions available for interfacing with GIMP through Python? For example, look at the register function. Without knowing exactly what parameters are defined within the function declaration, how is a newcomer to PyGimp supposed to know what arguments to pass to it? How are we supposed to debug any errors? If anyone can point me towards a good PyGimp developer's resource, that'd be great. Cheers

PyGimp documentation

You are not the only one; look here to start with.

Use the introspection

I've been told the best and most up-to-date documentation is in the code itself, which you can get via Python's introspection feature. Open the console and type:

help(gimp)

-Rob A>

Gimp API, using Python help()

Clarifying: in GIMP, choose Filters/Python-Fu/Console. Then enter:

help(gimp)

Or enter:

import gimpfu
help(gimpfu)

Or, not so interesting, the other GIMP Python modules (gimpenums, gimpshelf, and gimpplugin):

gimpenums: useful GIMP constants
gimpshelf: helps a plugin keep persistent data (between GIMP sessions)
gimpplugin: an alternative to gimpfu, if you need low-level, more detailed access for plugin creation, for plugins of type EXTENSION or TEMPORARY

gimpfu is what most plugin writers will use. It imports gimp and defines an alias "pdb" to gimp.pdb.

The help you get from help(gimp) or similar could be better. The help is generated (by a program) from the Python code itself, mostly not edited by a human technical writer. Some of it comes from the docstrings (comments in certain positions in the Python code) that are supposed to document the code. The docstrings in Python GIMP (gimp, gimpfu, etc.) are sparse or sloppy (for example, they include the license, which they should not).

Note that you can't usefully import a plugin and call help() on it.
For example, import foggify crashes the GIMP Python console. (I proposed a patch to Python GIMP that would prevent this crash, on the gimp-dev mailing list, but received a lukewarm response.)

Also, in the GIMP Python console, the module search path is not set up so that your personal plugins (the ones you have downloaded or are writing yourself) are found. Try:

import sys
print sys.path

It won't include, for example, ~/.gimp-2.6/plug-ins (the hidden directory in your home directory on some Linux distributions where your personal GIMP plugins are installed). You can set the path yourself.

Gimp API, using Python help()

You are right that the help you get from help(gimp) or similar could be much better, and that makes it very difficult for newbies (like me) to write a plugin. OK, I've partly rewritten the contactsheet plugin with gimpfu. Now I'm rewriting the GUI with Glade and gimpplugin, because it's growing too big. Look at help(gimpplugin): I can do nothing with that. I've spent days trying to find something useful, and even on this forum I got no response to my topic. In the end I found something useful on a German forum (schumaml). Luckily I can read German ;) At the moment the new design is partly working, but there are still a lot of questions left. A few have been answered now, but where do I find something about a variable containing the user dir or home dir? The problem you described with sys was indeed one of my problems. So I'm looking for examples, especially ones that use Glade. I have already read a lot of Python plugins in this registry and have found useful commands I could use.
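The "set the path yourself" advice in the thread above can be scripted from the GIMP Python console. A minimal sketch; the directory shown is the usual per-user GIMP 2.6 plug-in location on Linux, which is an assumption you should adjust for your platform and GIMP version:

```python
import os
import sys

# Usual per-user GIMP 2.6 plug-in directory on Linux (an assumption;
# adjust the path for your platform and GIMP version).
plugin_dir = os.path.expanduser("~/.gimp-2.6/plug-ins")

# The console does not put this directory on the module search path,
# so append it before trying to import one of your own plug-ins.
if plugin_dir not in sys.path:
    sys.path.append(plugin_dir)
```

After this, importing a file from that directory can work, although, as noted above, many gimpfu-based plug-ins do not import cleanly outside their normal registration flow.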
http://registry.gimp.org/comment/7610
The Internet of Things Meets the Internet of Web Apps: Injecting Smarts into the Semantic Web and Web 2.0/3.0

In our continuing series on advanced web technology, we've looked at the difference between the Semantic Web and Web 2.0/3.0. We've also looked closely at the Semantic Web, and in particular, we've discussed what we mean by that word "semantic". And with respect to Web 2.0/3.0, we've considered just what constitutes an advanced web app, and we've looked at some specific advanced apps. But one thing has stood out above all else: the new world of web applications depends on our ability to make web apps smarter. At the core of this are a handful of key technological advances: namespaces, XML languages, full text searching, and web services. Still, as we have seen, we can only crudely mimic intelligence, which we do largely by using a complex mixture of standards, heuristics, and pre-made components.

Importantly, this issue of being smart is very old, and has been a far-off goal of the folks who build software development tools since the very early days of computing. In truth, some of the things that seem new and exciting to us have actually been around for a long time, and have existed under multiple names. Could this base of intelligence-injecting technology be used to give the Semantic Web and Web 2.0/3.0 a shot in the arm? Can we leverage the greater world of smart technology to make the new web even more powerful? Let's focus on just one technology that has been around a while, but is still vibrant and rapidly growing.

The Internet of Things.

This idea is centered on the notion that the objects in our world would serve us a lot better if computers could coordinate their use. Objects can be tagged (with RFID tags, for example) so that when they are pinged, they respond. As things move around, as things are used in concert to perform tasks, they can be carefully tracked and managed.
Other technologies for tracking objects can be employed, too; RFID is just one example of something that is fairly cheap and very dependable. It's also true that objects can respond with more than a "Yo, I'm here." In particular, they are likely to tell us exactly where they are, and whether they are in use. But for the most part, these things tend to be fairly inert when it comes to intelligence. They might be warehouse items or objects in retail stores. Volume is a key factor: RFID tags are cheap enough that an organization can tag tens of thousands or hundreds of thousands of items.

Immobile Things, but Mobile Users.

We can use the Internet of Things concept in another mode. The objects might be immobile, but the users might be highly mobile, and they might be carrying the tags. The objects might have computing capabilities in them, as well. If I work in a secure facility, and if I use a variety of computing devices in the course of the workday, I can be carefully tracked. And every machine could be engineered to allow me to perform only those functions for which I have been authorized. The computers could also track suspicious trends that involve multiple machines and multiple users over a period of time.

The Internet of Things and the Internet of Web Apps.

What does this all have to do with the Internet we are concerned with in this blog, the one that hosts next-generation web apps? The two worlds could be blended together. Consider this. When we buy things on the web, we normally use one of two retail models. If the object is software or data or in any downloadable electronic form, the website can ensure that by the end of the shopping session, our credit card has been charged and we have received the goods. This makes both the seller and the buyer happy.
Or, if the object is physical, like a printed book, the website will ensure that by the end of the session, our credit card has been charged, and we have been given a shipping number, a shipping date, or some other piece of information that gives us some assurance that we will get what we paid for. In this mode, the seller is likely to be quite happy, and the buyer might not be quite so happy.

But there's another way. At the end of a retail session, the buyer of a physical product could be given the ID of the particular object being purchased and then, via the retail website, track that object nonstop from the moment the session ends until the moment it arrives. The buyer could even track the construction of a purchased object out of many subcomponents.

The Bigger Picture.

Here's something to think about, something else that can be used in concert with the advanced web technology and the Internet of Things concept. It's called "ubiquitous computing", and it is a concept that has been around for many years. It refers to the expansion of computing technology into every aspect of our lives. Putting all of this technology together means that the new web is working its way into law enforcement, supply chains, manufacturing processes, retail shopping, education, etc., etc., etc. This will have a huge impact over the next decade.
http://itknowledgeexchange.techtarget.com/semantic-web/the-internet-of-things-meets-the-internet-of-web-apps/
Testing an application is indispensable if you plan to deliver a robust and reliable product to your customers. This article will take a closer look at beta testing iOS applications with TestFlight, a free web service that makes it easy to distribute Ad Hoc builds and monitor beta testing usage.

Introduction

Testing software isn't limited to beta testing. There are several techniques to test software, such as unit testing, integration testing, stress testing, etc. Each of these methodologies has its benefit and place in the development cycle. Beta testing is a process in which a pre-release or beta version is distributed to a limited audience that is not part of the development team. In the early days of iOS development, it wasn't trivial to distribute test builds, gather feedback, and collect crash reports. However, in recent years a handful of services have emerged that make beta testing not only easier, but trivial. In this article, I will discuss one such service: TestFlight.

TestFlight allows developers to distribute test builds over-the-air. Fiddling with .ipa files and provisioning profiles is a thing of the past if you decide to team up with TestFlight. The ease of Ad Hoc distribution is not the only advantage of TestFlight. TestFlight also offers an SDK that gives you a number of great features with surprisingly little overhead. With the SDK installed and configured, crash reports are sent to TestFlight and automatically symbolicated. Another great feature of the TestFlight SDK is checkpoints. You can set checkpoints at specific locations in your application to see if a particular feature is actually used. Checkpoints tie in neatly with sessions. Whenever a user opens the application, a session is automatically started. The TestFlight dashboard shows how long a session lasts and which checkpoints the tester passed during a session. The list of features doesn't stop there.
A few other useful features include in-app feedback, in-app updates, and remote logging.

Step 1: Getting Started

To follow the steps in this tutorial, you will need a TestFlight account. Head over to the TestFlight website and sign up for a free account. After signing in to your TestFlight account for the first time, you are asked to create a team. What are teams? A team is just a categorization for grouping builds and testers. Chances are that you work on several applications for different clients or projects. A team allows you to group the builds and testers for each application or client easily. In other words, it is a convenient way to keep the builds and testers of different projects separated and organized.

The next step is to upload a test build to TestFlight. However, before we do that, we need to create an application that is properly set up and configured for TestFlight. This includes integrating the TestFlight SDK to take advantage of the features I described earlier. Of course, TestFlight really shines if you have a group of committed testers (preferably people who are not part of the development team). Adding testers to TestFlight is as simple as sending an invitation. With TestFlight, it is no longer cumbersome to obtain a device's UDID, as you will see a bit later. I explain how to invite beta testers later in this tutorial.

Step 2: Create a New Project

The application that we are about to build will be simple. The primary goal of this tutorial is to show you how to get up to speed with TestFlight, not so much to build a feature-rich application. The features of the application are simple: (1) implement TestFlight checkpoints, (2) ask the user for feedback while using the application, and (3) crash the application and let TestFlight collect the crash report. Create a new project in Xcode by selecting the Single View Application template from the list of templates (figure 3).
Name your application Take-Off, enter a company identifier, set iPhone for the device family, and check Use Automatic Reference Counting. Make sure to uncheck the remaining checkboxes for this project. Tell Xcode where you want to save your project and hit the Create button (figure 4).

Step 3: Add the TestFlight SDK

Start by downloading the latest stable release of the TestFlight SDK (1.1 at the time of writing). Extract the archive and add libTestFlight.a and TestFlight.h, located in the TestFlight x.x folder, to your project. Make sure to copy both files to your project by checking the checkbox labeled Copy items into destination group's folder (if needed) and don't forget to add both files to the Take-Off target (figure 5). To keep everything organized, place libTestFlight.a and TestFlight.h in a separate group named TestFlight.

A few more steps are necessary to finalize the integration with TestFlight. Select your project in the Project Navigator and click the Take-Off target in the list of targets. Select the Build Phases tab at the top and open the Link Binary With Libraries drawer. If all went well, libTestFlight.a should be present in the list of libraries. Drag libTestFlight.a into the list of linked libraries if it isn't present in the list (figure 6). TestFlight also makes use of the libz library to do some of its work, so we need to link the project against this library as well. Click the plus button at the bottom of the list of libraries, search for libz.dylib, and add it to the list of linked libraries.

The next step is optional, but recommended if you plan to use TestFlight throughout your application. Instead of importing TestFlight.h in every file that makes use of the TestFlight SDK, it is more convenient to add it to the project's Prefix.pch file. Take a look at the complete Prefix.pch file below for clarification.
//
// Prefix header for all source files of the 'Take-Off' target in the 'Take-Off' project
//

#import <Availability.h>

#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
    #import "TestFlight.h"
#endif

TestFlight installs an uncaught exception handler to report crashes and collect crash reports. If you want to make use of this feature, it is recommended to slightly modify the build settings of your project. Select your project from the Project Navigator and choose the Take-Off target from the list of targets. Select the Build Settings tab and scroll to the Deployment settings (figure 8). Three deployment settings need to be set to NO:

- Deployment Postprocessing
- Strip Debug Symbols During Copy
- Strip Linked Product

Settings in bold indicate that the default value is overridden. You can revert any changes you made by selecting a bold setting and hitting backspace on your keyboard. Make sure that the effective value of the build setting is set to Combined (figure 9).

Step 4: Setup TestFlight

TestFlight doesn't do anything in your application just yet. To make use of its features, we need to initialize TestFlight when the application launches. The ideal place for setting up TestFlight is in the application delegate's application:didFinishLaunchingWithOptions: method. Setting up TestFlight is surprisingly easy. All we need to do is call takeOff: on the TestFlight class and pass the team token of the team we set up earlier in this tutorial. To find your team token, head over to TestFlight's Dashboard, select the correct team from the dropdown menu at the top right, and choose Edit Info from that same menu. Copy the team token and pass it as the parameter in the takeOff: method.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Initialize View Controller
    self.viewController = [[MTViewController alloc] initWithNibName:@"MTViewController" bundle:nil];

    // Initialize Window
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    [self.window setRootViewController:self.viewController];
    [self.window makeKeyAndVisible];

    // Initialize TestFlight
    [TestFlight takeOff:@"TEAM_TOKEN_GOES_HERE"];

    return YES;
}

Step 5: Upload a Build

TestFlight is now set up, but we still need to upload a build to TestFlight. The following steps are no different than the steps you would normally take to prepare a test or ad hoc build for an application. I have listed the necessary steps below.

- Create Distribution Certificate in iOS Provisioning Portal
- Create App ID in iOS Provisioning Portal
- Add Devices in iOS Provisioning Portal
- Create Provisioning Profile in iOS Provisioning Portal
- Create Ad Hoc Configuration in Xcode
- Create Ad Hoc Scheme in Xcode
- Archive Build for Ad Hoc Distribution in Xcode
- Upload Ad Hoc Build to TestFlight
- Distribute Ad Hoc Build to Testers in TestFlight

This might seem like a lot of work, but most of these steps only need to be done once for a new application. In addition, much of this can be automated. Besides its SDK, TestFlight also has an upload API that allows developers to automatically upload builds to TestFlight. I won't cover the upload API in this tutorial, as it is a more advanced topic. Since you are reading this tutorial, I will assume that you are already familiar with beta testing and the steps involved in preparing a test build for ad hoc distribution. In this article, I will limit myself to the steps that involve TestFlight. When distributing builds using TestFlight, it is important to properly version your builds. By doing so, keeping track of different test builds will become much easier.
Before uploading your application, make sure to set the version number of your application to 0.1.0 to indicate that this is a pre-release version. Take a look at this question on Stack Overflow for more information about version numbers.

To manually upload a build to TestFlight, click the second button from the right at the top of the TestFlight Dashboard. Adding a new build is as easy as dragging the .ipa file into the appropriate field, adding a short description, also known as release notes, and clicking the Upload button. Release notes are much more useful than most people think. Release notes should contain information about the changes made to the test build, but they should also contain known bugs (if necessary) and potential workarounds.

After uploading a build of your application, you are taken to the Permissions view of your new test build. The permissions of a build determine who has access to the new test build, that is, who can install the test build on their device(s). For example, if you want to test a critical build only internally and prevent external testers from accessing the build, then you can restrict the permissions of that build to only include members of your development team.

To make the distribution of test builds easier, TestFlight has a feature appropriately named distribution lists. A distribution list is a list or group of people within a TestFlight team. Instead of manually selecting members of a TestFlight team every time you upload a new build, you tell TestFlight which distribution lists have access to the new build.

Step 6: Crash Reports, Checkpoints, and Feedback

One of the best features of TestFlight is the ability to collect and automatically symbolicate crash reports. Implementing checkpoints and asking for user feedback is easy as well. Let's modify the project to see how all this works. Open your Xcode project and head over to the view controller's implementation file (MTViewController.m).
In the viewDidLoad method, create three buttons as shown below. The code shouldn't be difficult to grasp.

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create Crash Button
    UIButton *crashButton = [[UIButton alloc] initWithFrame:CGRectMake(20.0, 20.0, 280.0, 44.0)];
    [crashButton setTitle:@"Crash" forState:UIControlStateNormal];
    [crashButton setBackgroundColor:[UIColor blueColor]];
    [crashButton addTarget:self action:@selector(crash:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:crashButton];

    // Create Checkpoint Button
    UIButton *checkpointButton = [[UIButton alloc] initWithFrame:CGRectMake(20.0, 72.0, 280.0, 44.0)];
    [checkpointButton setTitle:@"Checkpoint" forState:UIControlStateNormal];
    [checkpointButton setBackgroundColor:[UIColor blueColor]];
    [checkpointButton addTarget:self action:@selector(checkpoint:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:checkpointButton];

    // Create Feedback Button
    UIButton *feedbackButton = [[UIButton alloc] initWithFrame:CGRectMake(20.0, 124.0, 280.0, 44.0)];
    [feedbackButton setTitle:@"Feedback" forState:UIControlStateNormal];
    [feedbackButton setBackgroundColor:[UIColor blueColor]];
    [feedbackButton addTarget:self action:@selector(feedback:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:feedbackButton];
}

The idea is simple. The application should crash when the user taps the first button. Crashing an application is easy. Right? Take a look at the implementation of the crash: method to see how it is implemented. We create an array with one element and then ask for the second object in the array. This throws an NSRangeException since there is only one element in the array.

- (void)crash:(id)sender {
    NSArray *array = @[@"one"];
    NSLog(@"%@", [array objectAtIndex:1]);
}

The implementation of the checkpoint: method is surprisingly easy thanks to the TestFlight SDK.
As I mentioned earlier, checkpoints are a means to track if certain features of your application are used by your testers. In other words, checkpoints tell you when a user has done something that is of interest to you. As I said, checkpoints tell you (among other things) which features are used and, even more important, which features are not. Some features are difficult to find even though this might not be obvious to the developer.

- (void)checkpoint:(id)sender {
    [TestFlight passCheckpoint:@"User did click checkpoint button."];
}

There are various ways to collect feedback from your testers. The easiest way to collect feedback, however, is to use TestFlight's feedback integration. Take a look at the implementation of the feedback: method to see how this works. When the user taps the feedback button, a modal view shows up and lets the user enter feedback (figure 13).

- (void)feedback:(id)sender {
    [TestFlight openFeedbackView];
}

After adding these changes to your application, update the version number of your application to 0.2.0 and archive the project. It is good practice to always clean your project before you prepare a build for distribution, for the App Store as well as for ad hoc distribution. Upload the new .ipa file to TestFlight, set the appropriate permissions, and update the installation on your device with the new build by visiting the TestFlight dashboard on your device. If you followed the steps, you should see the three buttons, and tapping each button will trigger the functionality in the application. TestFlight sends information to the TestFlight servers whenever it can, that is, if a network connection is available and the operating system doesn't kill the application before it is finished sending the data to the TestFlight servers. This means that TestFlight is a great tool to collect live data from your testers.
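Remote logging, mentioned in the feature list at the beginning of this article, follows the same pattern as the SDK calls above. The sketch below assumes the SDK's TFLog logging macro; check TestFlight.h in your copy of the SDK for the exact name and signature before relying on it.

```objc
- (void)feedback:(id)sender {
    // TFLog is assumed here: it logs to the console like NSLog and
    // also sends the message to TestFlight (see TestFlight.h).
    TFLog(@"User did open the feedback view.");
    [TestFlight openFeedbackView];
}
```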
You can try this yourself by tapping the three buttons in your application and taking a look at the TestFlight dashboard a few minutes later. TestFlight shows you which testers installed the update and on what device. It shows you the number of sessions, which checkpoints they passed, and how many crashes occurred. As I mentioned earlier, the crash reports are automatically symbolicated, which is one of the features I love most. It is also possible to explore individual sessions by clicking the sessions tab on the left (figure 14), selecting a user from the list, and clicking one of the sessions. This gives you a detailed outline of that user's session (figure 15).

Step 7: Adding Additional Testers

Beta testing is only useful if you can rely on a group of committed testers who truly want to put your application through its paces. Adding testers to TestFlight can be done in two ways. (1) Open the TestFlight dashboard of the team to which you'd like to add a new tester, click the button with the tiny plus sign in the top right corner, and fill out the form. It is recommended that the user click the accept link on the test device. Even though this isn't strictly necessary, it makes the process much easier, because the device the user is using will automatically be added to their account as a test device. (2) The second option for adding testers is to use a recruitment URL. This is a form that allows anyone to sign up as a tester, which makes things easier if you have a fairly large group of testers you'd like to add to TestFlight.

More?

A few months ago, TestFlight was acquired by Burstly, and this has resulted in the creation of TestFlight Live. TestFlight Live is another addition to the TestFlight platform; it gives developers the means to use TestFlight not only during development and testing, but also when the application is live in the App Store. You can read more about it on TestFlight's blog.
Conclusion

Even though the idea behind TestFlight is simple (over-the-air distribution of beta versions), TestFlight has a lot more to offer. After having used TestFlight for a few weeks, you will notice that the team behind TestFlight has done a great job in terms of which features to include and how all the different pieces fit together. There are many more features that I did not discuss in this article, so I encourage you to visit TestFlight's website and browse the outstanding documentation. TestFlight is still growing rapidly, and I am curious to see how it continues to evolve and improve.
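The checkpoint mechanism described above is, at its core, structured event logging grouped by session. As a language-neutral illustration of the idea (none of these names come from the TestFlight SDK; they are made up for this sketch), here is a minimal stdlib-only Python version:

```python
import time

class CheckpointTracker:
    """Records named checkpoints with timestamps, grouped by session."""

    def __init__(self):
        self.sessions = {}

    def pass_checkpoint(self, session_id, name):
        # Append (checkpoint name, wall-clock time) to the session's log.
        self.sessions.setdefault(session_id, []).append((name, time.time()))

    def checkpoint_counts(self):
        # Aggregate how often each checkpoint was hit across all sessions,
        # which is the kind of number a dashboard would display.
        counts = {}
        for events in self.sessions.values():
            for name, _ in events:
                counts[name] = counts.get(name, 0) + 1
        return counts

tracker = CheckpointTracker()
tracker.pass_checkpoint("session-1", "User did click checkpoint button.")
tracker.pass_checkpoint("session-2", "User did click checkpoint button.")
print(tracker.checkpoint_counts())
```

A real service also batches these events and uploads them when a network connection is available, which is the behaviour the article describes.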
http://code.tutsplus.com/tutorials/testing-with-testflight--mobile-13787
Using TypeScript with React Native · React Native

JavaScript! We all love it. But some of us also love types. Luckily, options exist to add stronger types to JavaScript. We'll be using Microsoft's TypeScript-React-Native-Starter repo as a guide.

Prerequisites

Because you might be developing on one of several different platforms, targeting several different types of devices, basic setup can be involved. You should first ensure that you can run a plain React Native app without TypeScript. Follow the instructions on the React Native website to get started. When you've managed to deploy to a device or emulator, you'll be ready to start a TypeScript React Native app. You will also need Node.js, npm, and Yarn.

Initializing

Once you've tried scaffolding out an ordinary React Native project, you'll be ready to start adding TypeScript. Let's go ahead and do that.

react-native init MyAwesomeProject
cd MyAwesomeProject
yarn add --dev typescript
yarn add --dev react-native-typescript-transformer
yarn tsc --init --pretty --jsx react
touch rn-cli.config.js
yarn add --dev @types/react @types/react-native

The tsconfig.json file contains all the settings for the TypeScript compiler. The defaults created by the command above are mostly fine, but open the file and uncomment the following line:

{
  /* Search the config file for the following line and uncomment it. */
  // "allowSyntheticDefaultImports": true,  /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
}

The rn-cli.config.js file contains the settings for the React Native TypeScript Transformer. Open it and add the following:

module.exports = {
  getTransformModulePath() {
    return require.resolve('react-native-typescript-transformer');
  },
  getSourceExts() {
    return ['ts', 'tsx'];
  },
};

Migrating to TypeScript

Rename the generated App.js and __tests__/App.js files to App.tsx. The entry point, index.js, needs to keep the .js extension. All new files should use the .tsx extension (or .ts if the file doesn't contain any JSX).
If you tried to run the app now, you'd get an error like object prototype may only be an object or null. This is caused by a failure to import the default export from React as well as a named export on the same line. Open App.tsx and modify the import at the top of the file:

-import React, { Component } from 'react';
+import React from 'react';
+import { Component } from 'react';

Some of this has to do with differences in how Babel and TypeScript interoperate with CommonJS modules. In the future, the two will stabilize on the same behaviour. At this point, you should be able to run the React Native app.

Adding TypeScript Testing Infrastructure

React Native ships with Jest, so for testing a React Native app with TypeScript, we'll want to add ts-jest to our devDependencies.

yarn add --dev ts-jest

Then, we'll open up our package.json and replace the jest field with the following:

{
  "jest": {
    "preset": "react-native",
    "moduleFileExtensions": [
      "ts",
      "tsx",
      "js"
    ],
    "transform": {
      "^.+\\.(js)$": "<rootDir>/node_modules/babel-jest",
      "\\.(ts|tsx)$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
    },
    "testRegex": "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx|js)$",
    "testPathIgnorePatterns": [
      "\\.snap$",
      "<rootDir>/node_modules/"
    ],
    "cacheDirectory": ".jest/cache"
  }
}

This will configure Jest to run .ts and .tsx files with ts-jest.

Installing Dependency Type Declarations

To get the best experience in TypeScript, we want the type-checker to understand the shape and API of our dependencies. Some libraries will publish their packages with .d.ts files (type declaration/type definition files), which can describe the shape of the underlying JavaScript. For other libraries, we'll need to explicitly install the appropriate package in the @types/ npm scope. For example, here we'll need types for Jest, React, React Native, and React Test Renderer.
yarn add --dev @types/jest @types/react @types/react-native @types/react-test-renderer

We saved these declaration file packages to our dev dependencies because this is a React Native app that only uses these dependencies during development and not during runtime. If we were publishing a library to npm, we might have to add some of these type dependencies as regular dependencies. You can read more here about getting .d.ts files.

Ignoring More Files

Adding a Component

Let's add a component to our app. Let's go ahead and create a Hello.tsx component. It's a pedagogical component, not something that you'd actually write in an app, but something nontrivial that shows off how to use TypeScript in React Native. Create a components directory and add the following example.

// components/Hello.tsx
import React from 'react';
import {Button, StyleSheet, Text, View} from 'react-native';

export interface Props {
  name: string;
  enthusiasmLevel?: number;
}

interface State {
  enthusiasmLevel: number;
}

export class Hello extends React.Component<Props, State> {
  constructor(props: Props) {
    super(props);

    if ((props.enthusiasmLevel || 0) <= 0) {
      throw new Error('You could be a little more enthusiastic. :D');
    }

    this.state = {
      enthusiasmLevel: props.enthusiasmLevel || 1,
    };
  }

  onIncrement = () =>
    this.setState({enthusiasmLevel: this.state.enthusiasmLevel + 1});
  onDecrement = () =>
    this.setState({enthusiasmLevel: this.state.enthusiasmLevel - 1});
  getExclamationMarks = (numChars: number) => Array(numChars + 1).join('!');

  render() {
    return (
      <View style={styles.root}>
        <Text style={styles.greeting}>
          Hello{' '}
          {this.props.name + this.getExclamationMarks(this.state.enthusiasmLevel)}
        </Text>

        <View style={styles.buttons}>
          <View style={styles.button}>
            <Button title="-" onPress={this.onDecrement} />
          </View>

          <View style={styles.button}>
            <Button title="+" onPress={this.onIncrement} />
          </View>
        </View>
      </View>
    );
  }
}

// styles
const styles = StyleSheet.create({
  root: {
    alignItems: 'center',
    alignSelf: 'center',
  },
  buttons: {
    flexDirection: 'row',
    minHeight: 70,
    alignItems: 'stretch',
    alignSelf: 'center',
    borderWidth: 5,
  },
  button: {
    flex: 1,
    paddingVertical: 0,
  },
  greeting: {
    color: '#999',
    fontWeight: 'bold',
  },
});

Whoa! That's a lot, but let's break it down.

Testing

Now that we've got a component, let's try testing it. We already have Jest installed as a test runner. We're going to write snapshot tests for our components; let's add the required add-on for snapshot tests:

yarn add --dev react-addons-test-utils

Now let's create a __tests__ folder in the components directory and add a test for Hello.tsx:

// components/__tests__/Hello.tsx
import React from 'react';
import renderer from 'react-test-renderer';

import {Hello} from '../Hello';

it('renders correctly with defaults', () => {
  const button = renderer
    .create(<Hello name="World" enthusiasmLevel={1} />)
    .toJSON();
  expect(button).toMatchSnapshot();
});

The first time the test is run, it will create a snapshot of the rendered component and store it in the components/__tests__/__snapshots__/Hello.tsx.snap file. When you modify your component, you'll need to update the snapshots and review the update for inadvertent changes.
You can read more about testing React Native components here. Next Steps Check out the official React tutorial and state-management library Redux. These resources can be helpful when writing React Native apps. Additionally, you may want to look at ReactXP, a component library written entirely in TypeScript that supports both React on the web as well as React Native. Have fun in a more type-safe React Native development environment!
http://brianyang.com/using-typescript-with-react-native-react-native/
gnutls_server_name_set - set a name indicator to be sent as an extension

#include <gnutls/gnutls.h>

int gnutls_server_name_set(gnutls_session_t session, gnutls_server_name_type_t type, const void *name, size_t name_length);

session is a gnutls_session_t structure.
type specifies the indicator type.
name is a string that contains the server name.
name_length holds the length of name.

This function is to be used by clients that want to inform (via a TLS extension mechanism) the server of the name they connected to. This should be used by clients that connect to servers that do virtual hosting. The value of name depends on the indicator type. In the case of GNUTLS_NAME_DNS, an ASCII or UTF-8 null-terminated string, without the trailing dot, is expected. IPv4 or IPv6 addresses are not permitted.
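The extension this function sends is Server Name Indication (SNI), and other TLS stacks expose the same mechanism. For comparison, here is how a client sets the server name with Python's standard ssl module (this is Python's API, not GnuTLS's; the restriction on IP-address literals applies here too):

```python
import socket
import ssl

# Python's ssl module sends the SNI extension when server_hostname is set;
# this plays the same role as gnutls_server_name_set for a GnuTLS client.
context = ssl.create_default_context()

def wrap_with_sni(sock, hostname):
    # hostname should be the DNS name without a trailing dot, and
    # (as with GNUTLS_NAME_DNS) not an IPv4/IPv6 address literal.
    return context.wrap_socket(sock, server_hostname=hostname)

print(ssl.HAS_SNI)  # True on any modern OpenSSL build
```

The wrapped socket would then be connected and used as usual; the server name travels in the ClientHello, letting a virtual-hosting server pick the right certificate.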
http://huge-man-linux.net/man3/gnutls_server_name_set.html
Hello readers. Welcome to our tutorial on debugging and visualisation in PyTorch. This is, at least for now, the last part of our PyTorch series, which started from a basic understanding of computation graphs and has led all the way to this tutorial. In this tutorial we will cover PyTorch hooks and how to use them to debug our backward pass, visualise activations and modify gradients.

Understanding PyTorch Hooks

Hooks in PyTorch are severely under-documented for the functionality they bring to the table. Consider them like the Doctor Fate of the superheroes. Haven't heard of him? Exactly. That's the point. One of the reasons I like hooks so much is that they let you do things during backpropagation. A hook is like one of those devices that many heroes leave behind in the villain's den to gather all the information.

You can register a hook on a Tensor or a nn.Module. A hook is basically a function that is executed when either forward or backward is called. When I say forward, I don't mean the forward of a nn.Module. forward here means the forward function of the torch.autograd.Function object that is the grad_fn of a Tensor. Does that last line seem like gibberish to you? I recommend you check out our article on the computation graph in PyTorch. If you are just being lazy, then understand that every tensor has a grad_fn, which is the torch.autograd.Function object that created the tensor. For example, if a tensor is created by tens = tens1 + tens2, its grad_fn is AddBackward. Still doesn't make sense? You should definitely go back and read the earlier article.

Notice that a nn.Module like nn.Linear has multiple forward invocations. Its output is created by two operations (Y = W * X + B), addition and multiplication, and thus there will be two forward calls. This can mess things up, and can lead to multiple outputs. We will touch on this in more detail later in this article. PyTorch provides two types of hooks.
- The Forward Hook
- The Backward Hook

A forward hook is executed during the forward pass, while the backward hook is, well, you guessed it, executed when the backward function is called. Time to remind you again: these are the forward and backward functions of a torch.autograd.Function object.

Hooks for Tensors

A hook is basically a function with a very specific signature. When we say a hook is executed, in reality we are talking about this function being executed. For tensors, the signature for a backward hook is:

hook(grad) -> Tensor or None

There is no forward hook for a tensor. grad is basically the value contained in the grad attribute of the tensor after backward is called. The function is not supposed to modify its argument. It must either return None or a Tensor which will be used in place of grad for further gradient computation. We provide an example below.

import torch

a = torch.ones(5)
a.requires_grad = True

b = 2*a
b.retain_grad()   # Since b is non-leaf and its grad will be destroyed otherwise.

c = b.mean()
c.backward()

print(a.grad, b.grad)

# Redo the experiment but with a hook that multiplies b's grad by 2.
a = torch.ones(5)
a.requires_grad = True

b = 2*a
b.retain_grad()
b.register_hook(lambda x: x * 2)

b.mean().backward()

print(a.grad, b.grad)

There are several uses of functionality as above.

- You can print the value of a gradient for debugging. You can also log gradients. This is especially useful with non-leaf variables whose gradients are freed up unless you call retain_grad upon them. Doing the latter can lead to increased memory retention. Hooks provide a much cleaner way to aggregate these values.
- You can modify gradients during the backward pass. This is very important. While you can still access the grad variable of a tensor in a network, you can only access it after the entire backward pass has been done. For example, let us consider what we did above.
We multiplied b's gradient by 2, and now the subsequent gradient calculations, like those of a (or of any tensor that depends upon b for gradient), use 2 * grad(b) instead of grad(b). In contrast, had we individually updated the parameters after the backward pass, we'd have to multiply b.grad as well as a.grad (or, in fact, the grad of all tensors that depend on b) by 2.

a = torch.ones(5)
a.requires_grad = True

b = 2*a
b.retain_grad()

b.mean().backward()

print(a.grad, b.grad)

b.grad *= 2
print(a.grad, b.grad)  # a's gradient needs to be updated manually

Hooks for nn.Module objects

For nn.Module objects, the signature of the hook function is

hook(module, grad_input, grad_output) -> Tensor or None

for the backward hook, and

hook(module, input, output) -> None

for the forward hook.

Before we begin, let me make it clear that I'm not a fan of using hooks on nn.Module objects. First, because they force us to break abstraction. A nn.Module is supposed to be a modularised object representing a layer. However, a hook is subjected to a forward and a backward, of which there can be an arbitrary number in a nn.Module object. This requires me to know the internal structure of the modularised object.

For example, a nn.Linear involves two forward calls during its execution: multiplication and addition (y = w * x + b). This is why the input to the hook function can be a tuple containing the inputs to two different forward calls, and output is the output of the forward call. grad_input is the gradient of the inputs of the nn.Module object w.r.t. the loss (dL / dx, dL / dw, dL / db). grad_output is the gradient of the output of the nn.Module object w.r.t. the loss. These can be pretty ambiguous because of the multiple calls inside a nn.Module object.
Consider the following code.

import torch
import torch.nn as nn

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 10, 2, stride=2)
        self.relu = nn.ReLU()
        self.flatten = lambda x: x.view(-1)
        self.fc1 = nn.Linear(160, 5)

    def forward(self, x):
        x = self.relu(self.conv(x))
        return self.fc1(self.flatten(x))

net = myNet()

def hook_fn(m, i, o):
    print(m)
    print("------------Input Grad------------")
    for grad in i:
        try:
            print(grad.shape)
        except AttributeError:
            print("None found for Gradient")
    print("------------Output Grad------------")
    for grad in o:
        try:
            print(grad.shape)
        except AttributeError:
            print("None found for Gradient")
    print("\n")

net.conv.register_backward_hook(hook_fn)
net.fc1.register_backward_hook(hook_fn)

inp = torch.randn(1,3,8,8)
out = net(inp)

(1 - out.mean()).backward()

The output produced is:

Linear(in_features=160, out_features=5, bias=True)
------------Input Grad------------
torch.Size([5])
torch.Size([5])
------------Output Grad------------
torch.Size([5])

Conv2d(3, 10, kernel_size=(2, 2), stride=(2, 2))
------------Input Grad------------
None found for Gradient
torch.Size([10, 3, 2, 2])
torch.Size([10])
------------Output Grad------------
torch.Size([1, 10, 4, 4])

In the code above, I use a hook to print the shapes of grad_input and grad_output. Now, my knowledge about this may be limited, and please do comment if you have an alternative, but for the love of Pink Floyd, I cannot figure out what each grad_input is supposed to represent. In the conv2d case you can guess by shape: the grad_input of size [10, 3, 2, 2] is the gradient of the weights, and that of [10] is maybe the bias. But what about the gradient of the input feature maps? None? Add to that, Conv2d uses im2col or a cousin of it to flatten an image so that convolution over the whole image can be done through matrix computation rather than looping. Were there any backward calls there? So in order to get the gradient of x, do I have to read the grad_output of the layer just behind it?

The linear layer is baffling. Both grad_inputs are of size [5], but shouldn't the weight matrix of the linear layer be 160 x 5? Because of such confusion, I'm not a fan of using hooks with nn.Modules. You could do it for simple things like ReLU, but for complicated things? Not my cup of tea.
Proper Way of Using Hooks: An Opinion

So, I'm all for using hooks on Tensors. Using the named_parameters function, I've been able to accomplish all my gradient-modifying and clipping needs with PyTorch. named_parameters allows us much more control over which gradients to tinker with. Let's say I want to do two things.

- Turn gradients of linear biases into zero while backpropagating.
- Make sure that no gradient going to the conv layer is less than 0.

import torch
import torch.nn as nn

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 10, 2, stride=2)
        self.relu = nn.ReLU()
        self.flatten = lambda x: x.view(-1)
        self.fc1 = nn.Linear(160, 5)

    def forward(self, x):
        x = self.relu(self.conv(x))
        # No gradient less than 0 shall be backpropagated through the conv output
        x.register_hook(lambda grad: torch.clamp(grad, min=0))

        # print whether there is any negative grad
        x.register_hook(lambda grad: print("Gradients less than zero:", bool((grad < 0).any())))
        return self.fc1(self.flatten(x))

net = myNet()

for name, param in net.named_parameters():
    # if the param is from a linear and is a bias
    if "fc" in name and "bias" in name:
        param.register_hook(lambda grad: torch.zeros(grad.shape))

out = net(torch.randn(1,3,8,8))

(1 - out).mean().backward()
print("The biases are", net.fc1.bias.grad)  # bias grads are zero

The output produced is:

Gradients less than zero: False
The biases are tensor([0., 0., 0., 0., 0.])

The Forward Hook for Visualising Activations

If you noticed, a Tensor doesn't have a forward hook, while a nn.Module has one, which is executed when forward is called. Notwithstanding the issues I already highlighted with attaching hooks to PyTorch modules, I've seen many people use forward hooks to save intermediate feature maps, by saving the feature maps to a Python variable external to the hook function. Something like this:

visualisation = {}

inp = torch.randn(1,3,8,8)

def hook_fn(m, i, o):
    visualisation[m] = o

net = myNet()

for name, layer in net._modules.items():
    layer.register_forward_hook(hook_fn)

out = net(inp)

Generally, the output for a nn.Module is the output of its last forward. However, the above functionality can be safely replicated without the use of hooks.
Just simply append the intermediate outputs in the forward function of the nn.Module object to a list. However, it might be a bit problematic to print the intermediate activations of modules inside a nn.Sequential. To get past this, we need to register a hook on the children modules of the Sequential, but not on the Sequential itself.

import torch
import torch.nn as nn

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 10, 2, stride=2)
        self.relu = nn.ReLU()
        self.flatten = lambda x: x.view(-1)
        self.fc1 = nn.Linear(160, 5)
        self.seq = nn.Sequential(nn.Linear(5,3), nn.Linear(3,2))

    def forward(self, x):
        x = self.relu(self.conv(x))
        x = self.fc1(self.flatten(x))
        x = self.seq(x)
        return x

net = myNet()
visualisation = {}

def hook_fn(m, i, o):
    visualisation[m] = o

def get_all_layers(net):
    for name, layer in net._modules.items():
        # If it is a sequential, don't register a hook on it
        # but recursively register hook on all it's module children
        if isinstance(layer, nn.Sequential):
            get_all_layers(layer)
        else:
            # it's a non sequential. Register a hook
            layer.register_forward_hook(hook_fn)

get_all_layers(net)

out = net(torch.randn(1,3,8,8))

# Just to check whether we got all layers
visualisation.keys()  # output includes the Sequential's child layers

Finally, you can turn these tensors into NumPy arrays and plot the activations.

Conclusion

That wraps up our discussion on PyTorch hooks, an unreasonably effective tool for visualising and debugging the backward pass. Hope this article helps you solve your bugs much quicker.
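If the hook machinery still feels opaque, it helps to see that it is just the observer pattern: a list of callbacks that a value is threaded through, where each callback may inspect the value or return a replacement. This dependency-free sketch mirrors the semantics of Tensor.register_hook described above, but is not PyTorch code:

```python
class HookableGrad:
    """Minimal imitation of how a tensor runs its backward hooks."""

    def __init__(self):
        self._hooks = []

    def register_hook(self, fn):
        self._hooks.append(fn)

    def run_backward(self, grad):
        # Each hook may return a replacement gradient, or None to keep
        # the current one -- the same contract as hook(grad) -> Tensor or None.
        for fn in self._hooks:
            result = fn(grad)
            if result is not None:
                grad = result
        return grad

g = HookableGrad()
g.register_hook(lambda grad: [x * 2 for x in grad])   # modifies the gradient
g.register_hook(lambda grad: print("grad is", grad))  # inspect-only, returns None
final = g.run_backward([1.0, 1.0])
print(final)  # [2.0, 2.0]
```

The same two behaviours show up in the article's examples: printing hooks observe without changing anything, while hooks like the clamp-to-zero one replace the gradient that flows further back.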
https://blog.paperspace.com/pytorch-hooks-gradient-clipping-debugging/
This directory contains source code and Rust bindings for Google's BoringSSL library. BoringSSL is vendored here, so each version of Mundane will depend on a particular version of BoringSSL. Each new release will usually vendor the latest version of BoringSSL in order to pick up bug fixes and improvements. Rust bindings live in boringssl.rs. This file is included from the main Mundane source code using a #[path] attribute. These bindings are auto-generated using the bindgen.sh script, although some manual intervention is required. In particular, after running bindgen.sh, each public function must be annotated with a #[link_name] attribute (the reason for these attributes is explained in the following section). For example, given the following bindgen output: extern "C" { pub fn CBS_init(cbs: *mut CBS, data: *const u8, len: usize); } We add a #[link_name] attribute as follows, where X.Y.Z is the current crate version. extern "C" { #[link_name = "__RUST_MUNDANE_X_Y_Z_CBS_init"] pub fn CBS_init(cbs: *mut CBS, data: *const u8, len: usize); } Normally, the C build system does not allow multiple copies of the same codebase to be linked together since the namespace for C symbols is global at link time. In order to avoid this problem, we compile BoringSSL with a custom symbol prefix specific to the crate version. This document describes the details of how this works. Each BoringSSL symbol is given a prefix of __RUST_MUNDANE_X_Y_Z_, where the current crate version number is X.Y.Z. This way, if two different versions of the crate are present during a build, no C symbol will be defined under the same name in both builds of BoringSSL. BoringSSL's build system has built-in support for symbol prefixing. However, it requires that the caller provide a list of symbols which need to be prefixed. Since the set of symbols present is highly platform-dependent, a static list would be very brittle and error-prone. 
Instead, we discover the symbols dynamically at build time by doing a two-phase build. In the first phase, we build BoringSSL as normal, with no symbol prefixing. Then, using a Go program provided by BoringSSL, we scrape the list of symbols from the build artifacts. Using this list, we run the build again - the second phase - this time using BoringSSL's symbol prefixing feature. We use the artifacts from the second build when performing the final Rust build. We instruct Rust to use the appropriate build artifacts using the linker path. The linker path is used in a manner similar to the binary $PATH in Unix systems. When a library is requested, the linker searches for a build artifact of the appropriate name, stopping its search once it has found the appropriate library. For example, given the argument -l foo, the linker would search for a file called libfoo.a. In order to ensure that the linker is able to find all copies of the BoringSSL build artifacts, we give them unique names. If we didn't, only the first artifact found in the filesystem would be used. Currently, we only link against the crypto library, which, in the normal build system, is stored in libcrypto.a. In order to make sure that all versions of this library are found by the linker - one per version of the crate - we rename them just as we rename symbols. For crate version x.y.z, we rename libcrypto.a to libcrypto_x_y_z.a, and instruct the linker to look for the crypto_x_y_z library. In order to test that symbol prefixing is working properly, use the test_symbol_conflicts.sh script in this directory.
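The symbol-discovery step can be pictured as a small script over `nm`-style output: keep the names of defined global symbols, skip undefined references, and emit the prefixed names. This is a hypothetical Python sketch for illustration only; the real build uses a Go program shipped with BoringSSL:

```python
def prefixed_symbols(nm_output, version):
    """Parse nm-style output and return {original: prefixed} for defined
    global symbols (types T/D/B/R); undefined references ('U') are skipped."""
    prefix = "__RUST_MUNDANE_%s_" % version.replace(".", "_")
    mapping = {}
    for line in nm_output.splitlines():
        parts = line.split()
        # Defined symbols have three fields: address, type letter, name.
        if len(parts) == 3 and parts[1] in ("T", "D", "B", "R"):
            name = parts[2]
            mapping[name] = prefix + name
    return mapping

sample = """0000000000001000 T CBS_init
0000000000002000 T CBS_len
                 U memcpy"""
print(prefixed_symbols(sample, "1.2.3"))
```

Note how the undefined `memcpy` reference is excluded: prefixing libc symbols would break linking, which is exactly why the real tooling only renames symbols the library itself defines.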
https://fuchsia.googlesource.com/mundane/+/a3cedbe49c2875d2d2b060454e61f42ce8b24283/boringssl/
Building Games in Flutter with Flame: Getting Started

Learn how to build a beautiful game in Flutter with Flame. In this tutorial, you'll build a virtual world with a movable and animated character.

Version
- Dart 2.13, Flutter 2.2, VS Code

Flutter is changing the world by bringing fast, natively compiled software to the masses. This allows indie developers to release applications for every platform in the same time it would usually have taken a software company. It's only natural that game developers want to take advantage of that, too. Traditionally, a mobile game developer would have to choose between native performance with slow development times, or building with a multi-platform tool like Unity at the risk of slow loading times and large app sizes. If only there were a way to develop beautiful native games without all the bloat. That's where Flame comes in.

Today, you'll build a virtual world using Flutter and the Flame engine. You'll learn how to:

- Use Flame version 1.0 to make a game for the web, Android and iOS.
- Use a game loop.
- Create a movable player character.
- Animate your character with sprite sheets.
- Add box collision by reading from a tile map.

Getting Started

You'll develop a game called RayWorld, a 2-D orthographic game in the style of old-school Pokemon. Using an older game engine written in something like C++, a tutorial like this would span three or four installments. But with the power of Flutter and the Flame engine combined, you'll create all this in just one. You'll need the starter project to complete this tutorial. Download it by clicking the Download Materials button at the top or bottom of the tutorial. Build and run your project in your preferred IDE. This tutorial will use Visual Studio Code. You'll see a blank screen with a joypad in the bottom right corner. What you see here is rendered purely with Flutter; you'll need Flame to build the rest of your components.
The Flame Game Engine

Flame — a lightweight game engine built on top of Flutter — gives game developers a set of tools such as a game loop, collision detection and sprite animations to create 2-D games. The Flame team has worked on releasing v1.0.0 for over a year and is on the brink of an official release. v1.0.0 has changed a lot of the core fundamentals of Flame because the team took this opportunity to apply what they've learned over the years and rewrite the engine. This tutorial will use Flame 1.0.0 release candidate 15. The Flame engine is modular, allowing users to pick and choose which APIs they would like to use, such as:

- Flame – The core package, which offers the game loop, basic collision detection, Sprites and components.
- Forge2D – A physics engine with advanced collision detection, ported from Box2D to work with Flame.
- Tiled – A module for easily working with tile maps in Flame.
- Audio – A module that adds audio capabilities into your Flame game.

Flame harnesses the power of Flutter and provides a lightweight approach to developing 2-D games for all platforms.

Setting up Your Flame Game Loop

The first component you'll set up in RayWorld is your Flame game loop. This will be the heart of your game. You'll create and manage all your other components from here.
Open your lib folder and create a new file called ray_world_game.dart, then add a new class called RayWorldGame, which extends from the Flame widget FlameGame:

import 'package:flame/game.dart';

class RayWorldGame extends FlameGame {
  @override
  Future<void> onLoad() async {
    // empty
  }
}

Open main_game_page.dart and create an instance of your new class at the top of MainGameState:

RayWorldGame game = RayWorldGame();

Now, add a GameWidget to MainGameState as the first widget in the Stack, replacing // TODO 1 with:

GameWidget(game: game),

Add these two imports to the top of main_game_page.dart so you can use your new logic:

import 'package:flame/game.dart';

import 'ray_world_game.dart';

Right now, your game will do nothing. It needs some components to render. Time to add a playable character!

Creating Your Player

Add a folder in lib called components. This folder will store all your Flame components, starting with your player. Create a file in components called player.dart. In this file, set up your Player class:

import 'package:flame/components.dart';

class Player extends SpriteComponent with HasGameRef {
  Player()
      : super(
          size: Vector2.all(50.0),
        );

  @override
  Future<void> onLoad() async {
    super.onLoad();
    // TODO 1
  }
}

Your Player extends a Flame component called SpriteComponent. You'll use this to render a static image in your game. You're setting the size of the player to be 50. All components in the Flame engine have some core functionality, such as loading and rendering within the game loop they're attached to. For now, you'll use only onLoad.

Replace // TODO 1 in Player with logic to load your player image and set the player's initial position.
Here, you use that game reference to load a sprite into your game with the image of player.png that’s located in your Flutter assets folder. You also set the players position to be in the middle of the game. Go back to your ray_world_game.dart file and add your new Player component as an import at the top of the file: import 'components/player.dart'; In the top of RayWorldGame, create your Player: final Player _player = Player(); In the game onLoad method, replace // empty with code to add your player into the game: add(_player); add is a super important method when building games with the Flame engine. It allows you to register any component with the core game loop and ultimately render them on screen. You can use it to add players, enemies, and lots of other things as well. Build and run, and you’ll see a little dude standing in the center of your game. Pretty exciting! Now, it’s time to get your player moving. Adding Movement to Your Player To move your player, you first need to know what direction the joypad is dragged. The joypad direction is retrieved from the Joypad Flutter widget that lives outside the game loop. The direction then gets passed to the GameWidget in main_game_page.dart. In turn, this can pass it to Player, which can react to the direction change with movement. Start with the Player. Open your player.dart file and add the import for direction: import '../helpers/direction.dart'; Then, declare a Direction variable in the top of Player and instantiate it to Direction.none: Direction direction = Direction.none; The joypad will change to either up, down, left, right, or none. With each new position, you want to update the direction variable. 
Open ray_world_game.dart and add a function to RayWorldGame that updates the direction of your player:

void onJoypadDirectionChanged(Direction direction) {
  _player.direction = direction;
}

Also add the direction import to the top of ray_world_game.dart (this file sits directly in lib, so there's no ../ prefix here):

import 'helpers/direction.dart';

Now, head back to main_game_page.dart and replace // TODO 2 with a call to your game direction function:

game.onJoypadDirectionChanged(direction);

And voilà, you've passed user input from a Flutter widget into your game and player components. Now that your player component knows what direction it should be moving in, it's time to act on that information and actually move your player!

Executing on Player Movement

To start acting on the information passed through to the player component, head back to player.dart and add these two functions:

@override
void update(double delta) {
  super.update(delta);
  movePlayer(delta);
}

void movePlayer(double delta) {
  // TODO
}

update is a function unique to Flame components. It's called each time a frame must be rendered, and Flame ensures all your game components update at the same time. The delta represents how much time has passed since the last update cycle and can be used to move the player predictably. Replace // TODO in the movePlayer function with logic that reads the direction:

switch (direction) {
  case Direction.up:
    moveUp(delta);
    break;
  case Direction.down:
    moveDown(delta);
    break;
  case Direction.left:
    moveLeft(delta);
    break;
  case Direction.right:
    moveRight(delta);
    break;
  case Direction.none:
    break;
}

movePlayer now delegates to other, more specific methods to move the player. Next, add the logic for moving the player in each direction.
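The per-direction movement you're about to write is just arithmetic on a 2-D position. Here is a language-agnostic sketch of that math in Python (an illustration, not the project's Dart code; the 300-pixels-per-second speed mirrors the value the tutorial uses):

```python
# Illustrative sketch of frame-rate-independent movement: each update adds
# speed * delta to the position, so the player covers the same distance per
# second regardless of how many frames are rendered.
PLAYER_SPEED = 300.0  # pixels per second

def move(position, direction, delta):
    """Return a new (x, y) after moving for `delta` seconds."""
    x, y = position
    if direction == "up":
        y -= PLAYER_SPEED * delta
    elif direction == "down":
        y += PLAYER_SPEED * delta
    elif direction == "left":
        x -= PLAYER_SPEED * delta
    elif direction == "right":
        x += PLAYER_SPEED * delta
    return (x, y)

# Sixty updates of ~16.7 ms each simulate one second of holding "down":
pos = (1250.0, 1250.0)
for _ in range(60):
    pos = move(pos, "down", 1 / 60)
print(pos)  # y has grown by ~300 pixels
```

Because each step scales by delta, sixty 1/60-second updates move the player the same distance as a single one-second update, which is exactly what makes the movement frame-rate independent.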
Start by adding a speed variable to the top of your Player class:

final double _playerSpeed = 300.0;

Now, add a moveDown function to the bottom of your Player class:

void moveDown(double delta) {
  position.add(Vector2(0, delta * _playerSpeed));
}

Here, you update the Player position value, represented as an X and a Y inside a Vector2, by your player speed multiplied by the delta. You can picture your game view drawn on a 2-D plane: if the game view is 2500×2500 pixels, your player starts in the middle at the coordinates x:1250, y:1250. Calling moveDown adds about 300 pixels to the player's Y position for each second the user holds the joypad in the down direction, causing the sprite to move down the game viewport. You must add a similar calculation for the other three missing methods: moveUp, moveLeft and moveRight. See if you can add these methods yourself, thinking about how your sprite moves on a 2-D plane. Need help? Here's the solution:

void moveUp(double delta) {
  position.add(Vector2(0, delta * -_playerSpeed));
}

void moveLeft(double delta) {
  position.add(Vector2(delta * -_playerSpeed, 0));
}

void moveRight(double delta) {
  position.add(Vector2(delta * _playerSpeed, 0));
}

Run your application once more, and your little dude will move around the screen in all directions based on your joypad input.

Animating Your Player

Your player is moving around the screen like a boss, but it looks a bit off because the player is always facing the same direction! You'll fix that next using sprite sheets.

What Is a Sprite Sheet?

A sprite sheet is a collection of sprites in a single image. Game developers have used them for a long time to save memory and ensure quick loading times: it's much quicker to load one image instead of many. Game engines like Flame can then load the sprite sheet and render only a section of the image.
You can also use sprite sheets for animations by lining sprites up next to each other in animation frames so they can easily be iterated over in the game loop. This is the sprite sheet you’ll use for your playable character in RayWorld: Each row is a different animation set and simulates moving left, right, up and down. Adding Sprite Sheet Animations to Your Player In player.dart, change your Player class extension from SpriteComponent to SpriteAnimationComponent. With this new type of component, you’ll be able to set an active animation, which will run on your player Sprite. Import the package sprite.dart. You’ll need this for setting up a SpriteSheet: import 'package:flame/sprite.dart'; Add these six new variables to your Player class: final double _animationSpeed = 0.15; late final SpriteAnimation _runDownAnimation; late final SpriteAnimation _runLeftAnimation; late final SpriteAnimation _runUpAnimation; late final SpriteAnimation _runRightAnimation; late final SpriteAnimation _standingAnimation; Replace the onLoad method with new logic to load your animations: @override Future<void> onLoad() async { _loadAnimations().then((_) => {animation = _standingAnimation}); } _loadAnimations will be an async call. This method waits for the animations to load and then sets the sprite’s first active animation to _standingAnimation. Create the _loadAnimations method and instantiate your player SpriteSheet: Future<void> _loadAnimations() async { final spriteSheet = SpriteSheet( image: await gameRef.images.load('player_spritesheet.png'), srcSize: Vector2(29.0, 32.0), ); // TODO down animation // TODO left animation // TODO up animation // TODO right animation // TODO standing animation } This code loads a sprite sheet image from your Flutter assets folder that you saw previously. The image is 116×128 pixels, and each frame is 29×32 pixels. The latter is what you’re setting the srcSize SpriteSheet parameter to. 
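Two bits of arithmetic drive a sprite-sheet animation: locating a frame inside the sheet image, and choosing which frame to show as time passes. This Python sketch illustrates the idea (it is not Flame's internals; the numbers are assumed from the tutorial: 4 columns by 4 rows of 29×32-pixel frames in a 116×128 image, stepped at the 0.15-second _animationSpeed you just declared):

```python
# Illustrative sketch of sprite-sheet indexing and looping animation.
FRAME_W, FRAME_H = 29, 32  # assumed frame size from the tutorial

def frame_rect(row, column):
    """Top-left corner plus size of one frame inside the sheet image."""
    return (column * FRAME_W, row * FRAME_H, FRAME_W, FRAME_H)

def frame_index(elapsed, step_time=0.15, frame_count=4):
    """Which frame of a looping animation to show after `elapsed` seconds."""
    return int(elapsed / step_time) % frame_count

print(frame_rect(1, 1))  # (29, 32, 29, 32): second frame of the "run left" row
print(frame_index(0.16), frame_index(0.61))  # 1 0 (a 4-frame cycle repeats every 0.6 s)
```

Note how four 29-pixel columns and four 32-pixel rows exactly tile the 116×128 image, which is why the srcSize must match the frame size precisely.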
Flame will use these values to create sprites from the different frames of your sprite sheet image. Replace // TODO down animation with logic to initialize _runDownAnimation:

_runDownAnimation = spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 4);

This sets up an animation that loops across the first row of the player sprite sheet, from the first frame up to (but not including) frame 4. It's effectively a loop from 0 while less than 4, with the sprite's source rectangle stepping in 29-pixel increments across the row's four frames. Using this logic, initialize the rest of your animation variables. Need help? Here's the solution:

_runLeftAnimation = spriteSheet.createAnimation(row: 1, stepTime: _animationSpeed, to: 4);
_runUpAnimation = spriteSheet.createAnimation(row: 2, stepTime: _animationSpeed, to: 4);
_runRightAnimation = spriteSheet.createAnimation(row: 3, stepTime: _animationSpeed, to: 4);
_standingAnimation = spriteSheet.createAnimation(row: 0, stepTime: _animationSpeed, to: 1);

Update your movePlayer function to assign the correct animation based on the player's direction:

void movePlayer(double delta) {
  switch (direction) {
    case Direction.up:
      animation = _runUpAnimation;
      moveUp(delta);
      break;
    case Direction.down:
      animation = _runDownAnimation;
      moveDown(delta);
      break;
    case Direction.left:
      animation = _runLeftAnimation;
      moveLeft(delta);
      break;
    case Direction.right:
      animation = _runRightAnimation;
      moveRight(delta);
      break;
    case Direction.none:
      animation = _standingAnimation;
      break;
  }
}

Build and run, and you'll see your playable character come to life, running in each direction. At this point, you have the fundamentals of a game in place: a playable character with user input and movement. The next step is to add a world for your player to move around in.

Adding a World

Create a file called world.dart in your components folder.
In world.dart, create a SpriteComponent called World and load rayworld_background.png as the world sprite:

import 'package:flame/components.dart';

class World extends SpriteComponent with HasGameRef {
  @override
  Future<void>? onLoad() async {
    sprite = await gameRef.loadSprite('rayworld_background.png');
    size = sprite!.originalSize;
    return super.onLoad();
  }
}

Head back to RayWorldGame and add World as a variable under Player:

final World _world = World();

Make sure to add the World import, too:

import 'components/world.dart';

Now, add _world to your game at the beginning of onLoad:

await add(_world);

You must load the world completely before loading your player. If you add the world afterward, it will render on top of your Player sprite, obscuring it. Build and run, and you'll see a beautiful pixel landscape for your player to run around in. For your player to traverse the world properly, you'll want the game viewport to follow the main character whenever they move. Traditionally, when programming video games, this requires a plethora of complicated algorithms to accomplish. But with Flame, it's easy! At the bottom of your game's onLoad method, set the player's initial position to the center of the world and tell the game camera to follow _player:

_player.position = _world.size / 2;
camera.followComponent(_player, worldBounds: Rect.fromLTRB(0, 0, _world.size.x, _world.size.y));

Add the import for using a Rect variable at the top of the file:

import 'dart:ui';

Build and run, and you'll see your world sprite pan as your player moves. Because you've set the worldBounds variable, the camera even stops panning as you reach the edge of the world sprite. Run to the edge of the map and see for yourself. Congratulations! You should be proud of yourself for getting this far. You've covered some of the core components needed in any game dev's repertoire. However, there's one final skill you must learn to be able to make a full game: collision detection.
Adding World Collision to Your Game

Creating Tile Maps

2-D game developers commonly employ tile maps. The technique involves creating artwork for your game as a collection of uniform tiles you can piece together however needed, like a jigsaw, then creating a map that tells your game engine which tiles go where. You can make tile maps as basic or as advanced as you like. In a past project, a game called Pixel Man used a text file as a tile map that looked something like this:

xxxxxxxxxx
xbooooooox
xoooobooox
xoooooooox
xoooooboox
xxxxxxxxxx

The game engine would read these files and replace x's with walls and b's with collectable objects, using the tile map for both logic and artwork purposes. These days, software makes the process of creating a tile map a lot more intuitive. RayWorld uses software called Tiled, a free tool that lets you create your levels from a tile set and add additional collision layers in a graphical editor. It then generates a tile map written in JSON that can be easily read by your game engine. A tile map called rayworld_collision_map.json already exists. You'll use this JSON file to add collision objects into your game in the next section. In the Tiled editor, the pink boxes are the collision rectangles. You'll use this data to create collision objects in Flame.

Creating World Collision in RayWorld

Add a file in your components folder called world_collidable.dart and create a class called WorldCollidable:

import 'package:flame/components.dart';
import 'package:flame/geometry.dart';

class WorldCollidable extends PositionComponent with HasGameRef, Hitbox, Collidable {
  WorldCollidable() {
    addHitbox(HitboxRectangle());
  }
}

This component adds the Hitbox and Collidable mixins, which tell Flame to make the component collidable based on the HitboxRectangle you've added. The hitbox will match the size of the position component with a collidable box.
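The collision rectangles themselves come out of the Tiled JSON file. The tutorial's MapLoader helper does this reading for you; as a rough illustration of what such a loader might do, here is a Python sketch (the JSON shape is an assumption based on Tiled's object-layer format, not the tutorial's actual file):

```python
import json

# Hypothetical sketch of a collision-map loader: pull the rectangles out of a
# Tiled-style JSON object layer. The field names (layers, objectgroup, objects,
# x/y/width/height) are assumed from Tiled's documented export format.
def read_collision_rects(raw_json):
    data = json.loads(raw_json)
    rects = []
    for layer in data["layers"]:
        if layer.get("type") == "objectgroup":
            for obj in layer["objects"]:
                rects.append((obj["x"], obj["y"], obj["width"], obj["height"]))
    return rects

sample = '{"layers": [{"type": "objectgroup", "objects": [{"x": 32, "y": 64, "width": 96, "height": 16}]}]}'
print(read_collision_rects(sample))  # [(32, 64, 96, 16)]
```

Each tuple produced here corresponds to one collidable component placed into the game, which is exactly how the addWorldCollision method in the next section consumes them.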
You’ll add many of these components to your game — one for each rectangle loaded by the collision map. Create a method in RayWorldGame called addWorldCollision: void addWorldCollision() async => (await MapLoader.readRayWorldCollisionMap()).forEach((rect) { add(WorldCollidable() ..position = Vector2(rect.left, rect.top) ..width = rect.width ..height = rect.height); }); Here, you use a helper function, MapLoader, to read rayworld_collision_map.json, located in your assets folder. For each rectangle, it creates a WorldCollidable and adds it to your game. Call your new function beneath add(_player) in onLoad: addWorldCollision(); Add the HasCollidables mixin to RayWorldGame. You’ll need to specify this if you want Flame to build a game that has collidable sprites: with HasCollidables Add the imports for map_loader, world_collidable and components at the top of your file: import 'components/world_collidable.dart'; import 'helpers/map_loader.dart'; import 'package:flame/components.dart'; You’ve now added all your collidable sprites into the game, but right now, you won’t be able to tell. You’ll need to incorporate additional logic to your player to stop them from moving when they’ve collided with one of these objects. Start by going to your Player class and adding the Hitbox and Collidable mixins after with HasGameRef next to your player class declaration: class Player extends SpriteAnimationComponent with HasGameRef, Hitbox, Collidable You now have access to onCollision and onCollisionEnd. Add them to your Player class: @override void onCollision(Set<Vector2> intersectionPoints, Collidable other) { // TODO 1 } @override void onCollisionEnd(Collidable other) { // TODO 2 } Create and add a HitboxRectangle to your Player in the constructor. 
Like your WorldCollidable components, your player needs a Hitbox to be able to register collisions:

Player() : super(size: Vector2.all(50.0)) {
  addHitbox(HitboxRectangle());
}

Add the geometry and WorldCollidable imports above your class:

import 'package:flame/geometry.dart';
import 'world_collidable.dart';

Now, add two variables to your Player class to help track your collisions:

Direction _collisionDirection = Direction.none;
bool _hasCollided = false;

You can populate these variables in the two collision methods. Go to onCollision and replace // TODO 1 with logic to collect collision information:

if (other is WorldCollidable) {
  if (!_hasCollided) {
    _hasCollided = true;
    _collisionDirection = direction;
  }
}

Set _hasCollided back to false in onCollisionEnd, replacing // TODO 2:

_hasCollided = false;

Player now has all the information it needs to know whether it has collided or not. You can use that information to prohibit movement. Add these four methods to your Player class:

bool canPlayerMoveUp() {
  if (_hasCollided && _collisionDirection == Direction.up) {
    return false;
  }
  return true;
}

bool canPlayerMoveDown() {
  if (_hasCollided && _collisionDirection == Direction.down) {
    return false;
  }
  return true;
}

bool canPlayerMoveLeft() {
  if (_hasCollided && _collisionDirection == Direction.left) {
    return false;
  }
  return true;
}

bool canPlayerMoveRight() {
  if (_hasCollided && _collisionDirection == Direction.right) {
    return false;
  }
  return true;
}

These methods check whether the player can move in a given direction by querying the collision variables you created.
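The four methods share one rule: a direction is blocked only while a collision is active and that direction is the one that caused it. A compact Python restatement of that rule (an illustration only, not project code):

```python
# Illustrative sketch of the collision gating: while a collision is active,
# only the direction that caused it is blocked, so the player can still back
# away from a wall.
def can_move(direction, has_collided, collision_direction):
    return not (has_collided and direction == collision_direction)

print(can_move("up", True, "up"))    # False: still pressing into the wall
print(can_move("down", True, "up"))  # True: backing away is allowed
```

Blocking only the colliding direction is what keeps the player from getting stuck: every other direction stays available, so walking away from the obstacle immediately works.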
Now, you can use these methods in movePlayer to see whether the player should move:

void movePlayer(double delta) {
  switch (direction) {
    case Direction.up:
      if (canPlayerMoveUp()) {
        animation = _runUpAnimation;
        moveUp(delta);
      }
      break;
    case Direction.down:
      if (canPlayerMoveDown()) {
        animation = _runDownAnimation;
        moveDown(delta);
      }
      break;
    case Direction.left:
      if (canPlayerMoveLeft()) {
        animation = _runLeftAnimation;
        moveLeft(delta);
      }
      break;
    case Direction.right:
      if (canPlayerMoveRight()) {
        animation = _runRightAnimation;
        moveRight(delta);
      }
      break;
    case Direction.none:
      animation = _standingAnimation;
      break;
  }
}

Rebuild your game and try to run to the water's edge or into a fence. You'll notice your player still animates, but you won't be able to move past the collision objects. Try running between the fences or barrels.

Bonus Section: Keyboard Input

Because RayWorld is built with Flutter, it can also run as a web app. Generally, for web games, people want to use keyboard input instead of a joypad. Flame has an interface called KeyboardEvents you can override in your game object to receive notification of keyboard input events. For this bonus section, you'll listen for keyboard events from the W, A, S and D keys and use them to set the player's direction. Start by adding the mixin KeyboardEvents to the end of your RayWorldGame class declaration, next to HasCollidables. Add the input import above RayWorldGame:

import 'package:flame/input.dart';

Now, override the onKeyEvent method:

@override
KeyEventResult onKeyEvent(RawKeyEvent event, Set<LogicalKeyboardKey> keysPressed) {
  final isKeyDown = event is RawKeyDownEvent;
  Direction? keyDirection = null;
  // TODO 1
  // TODO 2
  return super.onKeyEvent(event, keysPressed);
}

Replace // TODO 1 with logic to read the RawKeyEvent and set keyDirection:

if (event.logicalKey == LogicalKeyboardKey.keyA) {
  keyDirection = Direction.left;
} else if (event.logicalKey == LogicalKeyboardKey.keyD) {
  keyDirection = Direction.right;
} else if (event.logicalKey == LogicalKeyboardKey.keyW) {
  keyDirection = Direction.up;
} else if (event.logicalKey == LogicalKeyboardKey.keyS) {
  keyDirection = Direction.down;
}

Here, you listen for key changes on the W, A, S and D keys and set the corresponding movement direction. Now, replace // TODO 2 with logic to change the player's direction:

if (isKeyDown && keyDirection != null) {
  _player.direction = keyDirection;
} else if (_player.direction == keyDirection) {
  _player.direction = Direction.none;
}

While a key is pressed, the player's direction is updated; when a key is released, the player's direction is reset to Direction.none if that key's direction is still the active one. Launch your game on the web or an emulator, and you'll now be able to run around using the W, A, S and D keys on your keyboard.

Where to Go From Here?

You can download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial. You now have all the tools to make a complete 2-D game using the Flame Engine. But why stop there? You could try adding:

- More game UI: Incorporate UI elements such as a player health bar, an attack button and a jump button. You could build these using a Flame component or a Flutter widget.
- Enemies: Populate RayWorld with enemies such as goblins or aggressive animals that could attack your player.
- Different levels: Load new world sprites and tile maps into your game as the player leaves the area.

Check out the awesome-flame GitHub repository to see what games have already been developed using the Flame Engine and to read some other great Flame tutorials.
As Flame v1.0.0 edges closer to an official release, there's sure to be plenty of new and exciting game development APIs that take advantage of the Flutter ecosystem. Stay tuned to raywenderlich.com for more great game development tutorials as the release is rolled out!
https://www.raywenderlich.com/27407121-building-games-in-flutter-with-flame-getting-started
UpdateCursor returns an iterator of lists. The order of values in each list matches the order of fields specified by the field_names argument. UpdateCursor objects can be iterated using a for loop, and they support with statements. Using a with statement ensures that the database locks are removed. You can use both an insert and an update cursor at the same time if an edit session is open. Example:

from arcpy import da

fc = r"c:\temp\samples.gdb\nests"
fields = ('BirdPop', 'Rank')

with da.UpdateCursor(fc, fields) as cursor:
    for row in cursor:
        if row[0] >= 0 and row[0] <= 10:
            row[1] = 1
        elif row[0] > 10 and row[0] <= 20:
            row[1] = 2
        elif row[0] > 20 and row[0] <= 30:
            row[1] = 3
        elif row[0] > 30:
            row[1] = 4
        cursor.updateRow(row)

In this simple example, you can see how conditional statements inside the with block change values: each nest gets a Rank of 1 through 4 based on its BirdPop count. Enjoy
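The ranking rule inside the cursor loop can be pulled out into a plain function, which makes it testable without an ArcGIS installation (a hypothetical refactor; the last branch is written as > 30, the range the original elif chain effectively reaches):

```python
# The binning rule from the cursor loop, as a standalone function
# (hypothetical refactor for testing outside ArcGIS).
def rank(bird_pop):
    if 0 <= bird_pop <= 10:
        return 1
    elif 10 < bird_pop <= 20:
        return 2
    elif 20 < bird_pop <= 30:
        return 3
    elif bird_pop > 30:
        return 4
    return None  # negative populations fall through unranked

print([rank(p) for p in (0, 10, 15, 25, 40)])  # [1, 1, 2, 3, 4]
```

With the rule isolated like this, the cursor loop reduces to row[1] = rank(row[0]) followed by cursor.updateRow(row), which keeps the geodatabase plumbing separate from the logic.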
https://anothergisblog.blogspot.in/2012/06/
This is, in a sense, the second part of a series about .NET Core. While in the first part (.NET Core: compile once, run everywhere) we learned the basics of how .NET Core can be used on different platforms, in this part I want to extend the idea to hosting the same web application on different platforms... You will find several 'Getting started' samples for ASP.NET Core. I created my own version because I found those incomplete, or even contrary to the idea of cross-platform... Some focus only on the code and say nothing about hosting, others solve the hosting problem from within the project, which automatically makes the project non cross-platform, or explain the hosting only for a single platform... In this article, I will focus less on the code (the code sample will be very simple and short) and more on how hosting on different platforms works... I use the same setup as in the first part (link above). If you didn't go through the setup process, please open that article and follow it up to the 'The True Magic' section (not included). To handle all the code, I will use the VS Code editor and its integrated terminal, so it's best to follow along there... For the first step (creating the project), open the integrated terminal (View->Integrated Terminal or Ctrl+`) and run these commands to create a new .NET Core project:

mkdir aspcore
cd aspcore
dotnet new

Now use the File->Open Folder menu to open the project you just created. You will see two files in the left-side bar. Open project.json to add new dependencies for the ASP.NET Core HTTP server (Kestrel) and for IIS integration.
The final result should look like this (note the comma after every dependency entry):

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.AspNetCore.Server.IISIntegration": "*",
        "Microsoft.AspNetCore.Server.Kestrel": "*",
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1.*"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

The next step is to refresh the dependencies (download the DLLs) you just declared... For that, you have two options. If VS Code already popped up a notification offering to restore, you may choose 'Restore' and wait for the download to finish... If you prefer the command line, go back to the integrated terminal and run:

dotnet restore

Like in any normal console application, the main function is the entry point, but instead of actually running something, it now starts the HTTP host, which will listen for requests... Open Program.cs and modify it to look like this:

using Microsoft.AspNetCore.Hosting;

namespace WebApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var oHost = new WebHostBuilder()
                .UseKestrel()
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();
            oHost.Run();
        }
    }
}

The builder calls are: UseKestrel (use Kestrel as the HTTP server), UseIISIntegration (allow the application to run behind IIS as a reverse proxy) and UseStartup (use the Startup class for configuration). The Startup class provides the configuration information for your application and for the hosting environment, by defining the Configure (must have) and ConfigureServices (optional) methods. Let's see our sample, and then some explanations... Add a new file named Startup.cs to your project and copy the code from below into it...
using Microsoft.AspNetCore.Builder; using Microsoft.Extensions.DependencyInjection; namespace WebApi { public class Startup { public void ConfigureServices(IServiceCollection Services) { Services.AddMvcCore() .AddJsonFormatters(); } public void Configure(IApplicationBuilder App) { App.UseMvc( Routes => { Routes.MapRoute( name: "Default", template: "{controller=Home}/{action=Index}/{id?}" ); } ); } } } If you ever worked with Web API or MVC applications in Visual Studio, this code will be very familiar, but even without it, it is easy to understand... In the ConfigureServices method, I load the MVC Core layer (which enables us to use controllers and models in an easy way), and add to it the JSON formatter as I prefer it over XML... The Configure method initializes the MVC just loaded and adds to it a routing map with some defaults... This part will not compile until you add the dependencies for the classes we just used here. So add the two lines from below to project.json and run dotnet restore command again from the integrated terminal... "Microsoft.AspNetCore.Mvc.Core": "*", "Microsoft.AspNetCore.Mvc.Formatters.Json": "*" In this part, we will add a controller that will provide a list of users or a single user if ID is known... You need to add a folder named Models with a file named Users.cs in it, and another folder named Controllers with a file named UsersController.cs in it. 
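To get a feel for what the Default route template does, here is a Python stand-in for its resolution logic (an illustration of the idea only, not ASP.NET's actual matcher): missing segments fall back to the defaults Home and Index, and the trailing id is optional:

```python
# Illustrative sketch of resolving "{controller=Home}/{action=Index}/{id?}".
def resolve(path):
    parts = [p for p in path.strip("/").split("/") if p]
    controller = parts[0] if len(parts) > 0 else "Home"
    action = parts[1] if len(parts) > 1 else "Index"
    id_ = parts[2] if len(parts) > 2 else None  # '?' marks id as optional
    return (controller, action, id_)

print(resolve("/"))             # ('Home', 'Index', None)
print(resolve("/users/get/2"))  # ('users', 'get', '2')
```

This is why a bare request to the site root still dispatches somewhere sensible: every segment in the template carries either a default value or an optional marker.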
namespace WebApi { public class User { public int ID { get; set; } public string Name { get; set; } } } using Microsoft.AspNetCore.Mvc; using System.Collections.Generic; using System.Linq; namespace WebApi { [Route("/api/users")] public class UsersController { // some data private static List<User> _Users = new List<User>(new[] { new User() { ID = 1, Name = "Kornfled" }, new User() { ID = 2, Name = "Eliyahu" }, new User() { ID = 3, Name = "Peter" }, }); [HttpGet] public IEnumerable<User> Get() { return( _Users ); } [HttpGet("{id}")] public IActionResult Get(int ID) { User oUser = _Users.FirstOrDefault(User => User.ID == ID); if (oUser == null) { return(new NotFoundResult()); } return(new OkObjectResult(oUser)); } } } Now we have all the code and configuration in place, so time to check it... Open the integrated terminal and run dotnet run, when it is up and running open your browser and test, using these URLs... dotnet run localhost:5000/api/users localhost:5000/api/users/2 You should get answers like this... [{"id":1,"name":"Kornfled"},{"id":2,"name":"Eliyahu"},{"id":3,"name":"Peter"}] and this... {"id":2,"name":"Eliyahu"} The very last, code-related, step is to publish the code we just created... For that, use the integrated terminal again, and type in the dotnet publish -c Release command... The answer should look like this: dotnet publish -c Release publish: Published to /home/peter/Applications/aspcore/bin/Release/netcoreapp1.0/publish Published 1/1 projects successfully The result is a long list of files that you need to run your application. This folder can now be moved around between machines (with .NET Core installed) and run on each and every platform supported... To check it, all you need is run dotnet run from the published folder and navigate to localhost:5000/api/users with your browser. It will work the same on Linux and Windows too (I do not mention Mac because I have none to actually test, but based on Microsoft, it should go the same there). 
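The controller's behavior boils down to a lookup that returns 200 with the user, or 404 when the id is unknown. A Python stand-in for that logic (illustration only; the data mirrors the sample list above):

```python
# Illustrative sketch of the UsersController lookup: 200 + user when the id
# exists, a 404-style result otherwise. Data mirrors the article's sample.
USERS = [
    {"id": 1, "name": "Kornfled"},
    {"id": 2, "name": "Eliyahu"},
    {"id": 3, "name": "Peter"},
]

def get_user(user_id):
    for user in USERS:
        if user["id"] == user_id:
            return (200, user)
    return (404, None)

print(get_user(2))   # (200, {'id': 2, 'name': 'Eliyahu'})
print(get_user(99))  # (404, None)
```

The C# version expresses the same two outcomes with OkObjectResult and NotFoundResult; the JSON formatter registered in ConfigureServices is what turns the returned object into the response body you saw in the browser.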
Now that we have the cross-platform web application, we can talk about the main subject we came for... Hosting... You were probably thinking: Hold your horses! We already have all we need! We have a web server (Kestrel) already part of our application, up and running. Let's expose it to our users and start the feast! But all the good things about Kestrel (size, speed, cross-platform) come at a price! Kestrel is not built for prime time. It does not have the functionality one would expect from a true web server, so Kestrel is out of the question for facing the real world (but it is perfectly good for testing in the development cycle, and for doing the actual hosting behind some real web server). The answer is a reverse proxy: a setup that completely hides the fact that we have something other than the exposed web server. It gives us the possibility to use any security measure we have in the web server, keeps our application separated from the underlying OS and keeps it cross-platform... It also enables running numerous ASP.NET Core sites under the same address/port combination, as if they were part of the same site... Of course, this is the point where we say goodbye to cross-platform, as every web server will have its own way of acting as a reverse proxy, but that's all settings and configuration - we will not touch the code any more!

In Windows, we will use IIS as a reverse proxy for Kestrel. I do not see any reason to use Apache here, as both are free and good, but IIS feels much better on Windows... The first step is to install a special IIS module that was created exactly for our needs - the AspNetCoreModule. It ships with the ASP.NET Core Server Hosting Bundle; if you already installed the .NET Core SDK, you have it, so skip the installation. To see if all is in place, go to the IIS Manager and check the list of installed modules for AspNetCoreModule. This module will not only handle the request redirection but, more importantly, will take care of keeping our application up and running all the time.
[Image: iis_modules.png - the AspNetCoreModule in the IIS modules list]

This module is not only native but very low level, so it will intercept all requests, and anything that seems to fit will be redirected to the ASP.NET Core process, even requests that would originally go to other handlers, like ASPX... The solution is to set up a separate application pool. As this pool will serve as a simple proxy, we need no .NET runtime, so set the .NET CLR version to 'No Managed Code'. The next step is to create a new application that points to the folder where your published ASP.NET Core code sits...

[Image: iis_application.png - adding the new IIS application]

There will be issues with security. The default pool identity (ApplicationPoolIdentity) has no right to run an executable (dotnet.exe) from IIS, so you have to change it to a user that has those rights. For testing, I changed it to LocalSystem, but in a true production environment, you should contact your system administrator to create a more tailored user...
[Image: iis_poolidentity.png - setting the application pool identity]

The last step is to create a web.config file in the root of your application with this content (this configuration file can actually be part of your project and be published with it to every platform too - it will not harm anyone):

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="ASP.NET Core" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\aspcore.dll" />
  </system.webServer>
</configuration>

If you want to see logs from the .NET Core environment (a good idea if you have problems starting the dotnet executable), change the aspNetCore line to this:

<aspNetCore processPath="dotnet" arguments=".\aspcore.dll" stdoutLogEnabled="true" stdoutLogFile=".\stdout.log" />

Now open your browser and navigate to your new IIS application's address... You can also test it from another computer on the same network using the IP address or machine name, or from the outside world if your computer is truly online...
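Whichever server fronts Kestrel, the reverse-proxy step is the same idea: recognize requests meant for the application, forward them to the backend address, and let everything else fall through to the web server's normal handling. A hypothetical Python sketch of that mapping (the /kestrel prefix and the localhost:5000 backend mirror the addresses used elsewhere in this article):

```python
# Illustrative sketch of reverse-proxy path rewriting: a public /kestrel/...
# URL is mapped onto the backend's /api/... URL. The prefix and backend
# address are assumptions mirroring this article's setup.
BACKEND = "http://localhost:5000/api/"

def rewrite(public_path, prefix="/kestrel/"):
    if not public_path.startswith(prefix):
        return None  # not ours: the web server handles it normally
    return BACKEND + public_path[len(prefix):]

print(rewrite("/kestrel/users/2"))  # http://localhost:5000/api/users/2
print(rewrite("/index.html"))       # None
```

IIS, nginx and Apache all implement this same prefix-match-and-forward in their own configuration syntax; the application itself never notices which one is in front of it.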
The command for that is:

sudo gedit /etc/systemd/system/dotnet-aspcoreapp.service

The content of the file should look like this:

[Unit]
Description=Sample ASP.NET Core Web API Application

[Service]
ExecStart=/usr/local/bin/dotnet /home/peter/aspcoreapp/aspcore.dll
Restart=always
RestartSec=10
SyslogIdentifier=dotnet-aspcoreapp
User=root
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target

Now run these 3 commands to enable, start and verify the new service...

sudo systemctl enable dotnet-aspcoreapp.service
sudo systemctl start dotnet-aspcoreapp.service
sudo systemctl -l status dotnet-aspcoreapp.service

If all is well, you should get an output like below for the status:

● dotnet-aspcoreapp.service - Sample ASP.NET Core Web API Application
   Loaded: loaded (/etc/systemd/system/dotnet-aspcoreapp.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-11-16 10:08:06 IST; 8min ago
 Main PID: 749 (dotnet)
   CGroup: /system.slice/dotnet-aspcoreapp.service
           └─749 /usr/local/bin/dotnet /home/peter/aspcoreapp/aspcore.dll

This service entry will now start our application after a restart of the OS, too, or after a crash of any kind... The nginx web server is one of the most popular web servers for Linux. It has grown in popularity mainly because of its light weight and scaling... The first step is to install nginx using the terminal:

sudo dnf install nginx
sudo service nginx start

Now you can see if it indeed works by navigating to http://localhost in your browser...

[Image: nginx_welcome.png - the nginx welcome page]

The next step is to configure nginx to forward incoming requests to our application. For that, we have to edit nginx's configuration file...
sudo gedit /etc/nginx/nginx.conf

The original looks like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }
}

After the changes, it should look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /kestrel {
        proxy_pass;
    }
}

The second location definition will redirect every request under the kestrel folder to our ASP.NET Core application's api folder. So if previously we browsed for, now we browse for... and this address is also accessible from outside the server, can be secured, and so on...

The last bit is to enable the redirecting between the two servers. For that, we have to set a property of SELinux (a security layer in Linux) that enables that kind of connection... and reboot...

sudo setsebool httpd_can_network_connect on -P
reboot

Note: This part does NOT continue the previous one about nginx, but is parallel to it / a replacement for it! While nginx is a strong player, apache still holds the majority of web servers and it is entirely possible you will already have it installed on your target server (as you have it on Fedora), so let's see how to configure it for Kestrel...

To enable and run the apache server, use these commands:

sudo systemctl enable httpd.service
sudo systemctl start httpd.service

If you navigate now to, you should get a page like this:

(Image: apache welcome page - apache_welcome.png)

The next step is to edit the configuration file to define the proxy settings we need...
sudo gedit /etc/httpd/conf/httpd.conf

At the end of that file, add these lines:

<VirtualHost *:80>
    DocumentRoot /home/peter/aspcoreapp
    ProxyPreserveHost On
    ProxyPass /kestrel/
    ProxyPassReverse /kestrel/
</VirtualHost>

The /home/peter/aspcoreapp part is the path where your published application is sitting, and it must be the same as the one you used in the daemon definition file...

The last bit - just like before - is to enable the redirecting between the two servers. For that, we have to set a property of SELinux (a security layer in Linux) that enables that kind of connection... and reboot...

I'm a long time web developer, both front and back end. For a long time, while struggling with browser support of standards, I did not have the spare time nor the desire to do the same with the server side. As of today, I see a big opportunity to build applications that can target more and more customers without the need to force a specific OS on them first... Today, I can focus on the code - all in one place - and leave the configuration to some administrator. A great feeling! :-)

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Hello, this is my first thread and I need some super fast help, I'm on a deadline... I have applied for a Microsoft program but part of the application process is to complete this assignment...

You are to create a console application that accepts exactly one command-line argument. If it doesn’t receive the argument, the application must display an error message and exit. The application must parse the text input and output the number of times each letter of the alphabet occurs in the text. Case sensitivity is not required. For example, if the command-line argument is “baaad” the displayed result must be:

There are 3 A's
There are 1 B's
There are 0 C's
There are 1 D's
There are 0 E's
There are 0 F's
etc...

I've gotten this far...

<html>
<head>
<script type="text/javascript">
function count_char($str,1)
{
return ("count_char($str,1);
}
<?php
$
echo(count_chars($str,1))
</script>
<body>
<form>
<input type="text" value="Please enter string" />
<input type="button" value="Click me!" onclick="count_chars($str,1))" />
</form>
<p>By clicking the button above the letters in the string will be counted and displayed</p>
</body>
</html>

But I know for sure I messed something up/left something out, because whenever I try to run the code, clicking the button doesn't do anything. Please help me!!! I started learning from w3schools.com but they don't tell you exactly how to assemble the JavaScript code. I have until 6/25/10 to submit my code to my application overseer.
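For reference, the assignment itself can be sketched in a few lines. It asks for a console application, not a web page, so the argument comes from the command line rather than from a form. The assignment doesn't name a language; the following is one possible sketch in Node.js (the guard at the bottom keeps the functions loadable without side effects):

```javascript
// count_letters.js -- run as: node count_letters.js baaad
// Counts how often each letter A-Z occurs in one command-line argument.

function countLetters(text) {
  const counts = {};
  for (const letter of "ABCDEFGHIJKLMNOPQRSTUVWXYZ") counts[letter] = 0;
  for (const ch of text.toUpperCase()) {
    if (ch in counts) counts[ch] += 1; // ignore digits, spaces, punctuation
  }
  return counts;
}

function main(args) {
  if (args.length !== 1) {
    console.error("Error: exactly one command-line argument is required.");
    return 1;
  }
  for (const [letter, n] of Object.entries(countLetters(args[0]))) {
    console.log(`There are ${n} ${letter}'s`);
  }
  return 0;
}

// Act as a command-line program only when an argument was actually given.
if (process.argv.length > 2) {
  process.exit(main(process.argv.slice(2)));
}
```

For "baaad" this prints the counts in the format the assignment shows (3 A's, 1 B's, 0 C's, ...).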
And so it begins - the first closures puzzler Neal Gafter has posted the first closures puzzler. I guess the second edition of Java Puzzlers is in works, and closures will be a hefty addition to the book. If anything, this makes me really sad. I was very excited to lay my hands on the first edition, but after reading through a few chapters, i skimmed the table of contents, glanced at the visual illusions and never came back to it. If i had to sum this book in one sentence, it would be "great optical illusions and irrelevant content". I know, this is a harsh statement, and i have nothing but respect to both authors (in fact, i consider "Effective Java" to be the book that i would take with me to a deserted island on a condition that it has WiFi access). I've been using Java for the last 8 years, most of it exclusively (amounting to about 10-12 hours a day, including the work and the side projects). I've written my share of new code, and i most certainly have seen my share of old code that i had to maintain, fix and extend. But never once have i encountered anything even remotely connected to any Java puzzler presented in the book and at the conferences. Bit-level shifts, integer overflows, overloaded methods, reflection, generics - you name it. I've looked at the examples, i've tried to match them to the real code that i see during the day, and i have found nothing. Of course, i am looking at the tiniest sliver of Java code that exists in the enterprise sector, but somehow i get an impression that i'm not alone. I would even go as far as to say that a feature implemented in a way that results in at least one puzzler is not a feature worth having in the language. Which brings me to the quote of the day. It comes from an announcement on the end-of-life for Ruby.NET: As a researcher, my prime interest is not in developing products, but in developing innovative new ideas and having an impact by having those ideas used in the real world. 
The generics were added to the language in a very incomplete manner, mainly due to the restrictions on binary compatibility. If we let the researchers in us (even if they are the best researchers) to do the same with closures, i'll pass. - Login or register to post comments - Printer-friendly version - kirillcool's blog - 2648 reads by 5er_levart - 2008-02-10 12:56@hlovatt, now add a "CHERRY" to the Flavor enum... by hlovatt - 2008-02-10 23:26 @ 5er_levart, In the difficult recursive cases the two enums will always be instep with an exact one to one correspondence. For example, the example doesn't work if you don't add CHERRY_RED s well as CHERRY - otherwise what does RED return. When there is not an exact one to one between two classes you are unlikely to have a problem. The whole example is fairly contrived, though not impossible. by hlovatt - 2008-02-10 01:45@Remi, Thanks for the advice - I was not aware of that but it is correct that it does so - I didn't think that one through. Best make a local, static, copy then: enum Flavor { CHOCOLATE, STRAWBERRY, VANILLA; private static final Color[] colors = Color.values(); Color color() { return colors[ ordinal() ]; } } by forax - 2008-02-08 02:13 @hlovatt , beware of values(), it makes a defensive copy. So the array is cloned each time flavor() or color() is called. Rémi by hlovatt - 2008-02-07 20:37 Personally I would be tempted to go with:enum Color { BROWN, RED, WHITE; Flavor flavor() { return Flavor.values()[ ordinal() ]; } } enum Flavor { CHOCOLATE, STRAWBERRY, VANILLA; Color color() { return Color.values()[ ordinal() ]; } } Rather than use closures, which seems an overly complicated solution to my eyes. by robertotyley - 2008-02-07 07:22 I really appreciate the puzzle. The first thing I thought when I saw Neal's post was that it was odd he was attempting to demonstrate how closures could introduce new puzzlers - and my second thought was 'I bet he's up to something'. 
Of course, the joke is that closures behave exactly as expected - Neal is making a case for closures, and getting people to download and play with his prototype implementation to boot! I especially liked that 5er_levart came up with a solution to the problem using closures themselves... good stuff! Roberto by mthornton - 2008-02-07 02:14@jwenting: According to a post on the Concurrency interest mailing list by Doug Lea, it was the parallel api use case that came first and prompted the closure proposal. by 5er_levart - 2008-02-07 01:01@jwenting - I didn't know C was a language "designed around closures/function pointers" :). Also, I think closures and function pointers are two entirely different things and we don't mix/compare them. I don't see the closures proposal as "retrofiting" anything to Java language. Closures are a natural fit for Java. Java is just missing them. Java will not loose anything with closures. People will still be writing procedural/OO code where they see it fit and more natural (the majority of code). And people will resort to closures/function types where they find anonymous inner classes that they write today awkward to use and difficult to read. Tons of code will still be written in Java and will have to be maintained, but that code will be more readable with closures in places where anonymous inner classes would make it obfuscated. I see you are skeptical about it. I challenge you to "fix" the problem with cross-referencing enums without using closures and post the solution here. Let's compare it with the "fix" I posted above and see the "obfuscation" level of the fixes. I'm not suggesting that this is an example of real-world code. But the problem of "circularity" of dependencies does pop-up quite often in real-world and is not being tied to static class initialization only. Closures can help here making code more readable. I sense some king of "fear of closures" in the posts. 
They are actually quite harmless and compared to generics quite easy to grasp. Just try them (the prototype) and you'll see it doesn't take you a minute to start using them without reading the spec. by jwenting - 2008-02-06 23:58" Swing wasn't retrofitted with generics in JDK 5.0, what makes you think that they'll retrofit it with closures in JDK 7.0? " Or to use a more vexing example, Swing wasn't retrofitted with ArrayLists when those were introduced in Java 1.1, it uses Vector to this day despite Vector being effectively deprecated. Closures/function pointers are nice in languages designed around them, languages like C and Ruby. Retrofitting them into Java (or any language that's not designed around them) however is a very bad idea. You're not going to be retrofitting your Phillips screwdriver to work as a torque wrench, are you? And worse, the entire idea was launched to please people who screamed that Java is dead unless closures are added, purely for marketing reasons. The "use cases" were made up after the fact and are so obscure that it's quite clear that hardly anyone will have real use for them, but they'll be massively abused, creating tons of work for people having to maintain the utterly unreadable and obfuscated code that will result. by cowwoc - 2008-02-06 21:47@krill: The generics were added to the language in a very incomplete manner, mainly due to the restrictions on binary compatibility. If we let the researchers in us (even if they are the best researchers) to do the same with closures, i'll pass. Amen to that! If we should be doing anything in Java7 it is to finally fix Generics. by jkuhnert - 2008-02-06 16:20@mikaelgrev He probably does - but it doesn't mean that his proposal is correct. All I can do is look at what closures are in other languages (and the subsequent programming possibilities) and look at CICE and say it doesn't match what everyone else calls a closure. 
by mikaelgrev - 2008-02-06 11:30@ jkuhnert You are spot on, my argument is not technical, it is from a human/statistics perspective. Technically I am out of my league compared to Neal... by jkuhnert - 2008-02-06 11:13@mikaelgrev: I have no idea how to respond to your last statements as they are more about people and less about the technical merits of the proposal....I'm much worse at being "human" and would probably tend to say "f-em if they don't like it" because I'm an arrogant jerk...it seems like lots of java developers are getting into dynamic languages just fine to me though. Going with the "opinion based on evidence" route, the current industry buzz surrounding NBL / Groovy / JRuby / Ruby / Scala all seems to indicate that java developers are capable of not being mediocre after all. Which - in theory - supports the idea that at least some segment might welcome it. by mikaelgrev - 2008-02-06 11:05@Ricky > If it was average programmers designing the language, we'd be in a lot worse trouble! 100% true. But the target programmers should be average Joe (more correctly, mathematically, "Median Joe"). Empathy (the ability to see what someone else is feeling/thinking) is not proportional to intelligence. What I mean is that it is not implied. That is why it is important to let great, smart and highly educated people like James, Neal and Josh construct proposals and let us lesser beings be the executive judges. (Rant mode on). Apple has understood as of late how things should be run. Let designers who are in touch with the audience decide what the engineers should construct. In language design I get the feeling that it is the engineers that run the show, because the guys in touch with the audience do not have a PhD. by mikaelgrev - 2008-02-06 10:54@ jkuhnert No, Java developers are pretty smart, even smarter than the president I'd say! ;) Seriously, no, developers are on average not less or more smart depending on language.
But, when you learn a new language you study it deeply and adjust your way of doing things to the tools provided by that language. Then, people are different. Some like few characters, and may choose perl, some like it wordy and go for xxx. Control types go for pure assembler. Java has a big developer base and many of these developers have gotten used to do things in a certain way. It always takes a bigger efforts to re-learn something than to learn something new from the beginning. This is why I think that we should just enhance the Java way and not add a completely new paradigm of doing things. The most obvious argument against that last sentence is "those who are not willing to re-learn should not be developers". True, maybe, but we live in a reality where developers will be upset that they must go back to school, so to say. Think of generics, it is a much lesser leap than BGGA closures, yet people bitch and moan about it. People that want to abandon Java because the lack of closures should do so. Honestly, it is better for all, and I say that without disrespect. Scala is a good alternative but Groovy and Ruby are as well. I do not understand why leaving Java is such a big deal. Those who want closures, continuations or lesser type stringency should choose a language that fits their needs better. There is no need to take Java to where they want to be. Java is all about the API anyway. Scala/Groovy with the JDK API is still 90% Java to me. I see nothing negative with using several languages. Java is not holy. Use whatever fits your needs. In conclusion, the geek within me say "go for BGGA" but the strategist within say "go for least friction unless you have good will to pay with". For things like this I tend to listen more on the strategist within (and satisfy the geek by doing yet another late night project on something cool ;) by ricky_clarkson - 2008-02-06 10:51mikaelgrev: There is usually a gap between language designers and average programmers.. 
if that were not the case languages would be a lot more cludgy, because it takes a good programmer to design a general-purpose language well. If it was average programmers designing the language, we'd be in a lot worse trouble! by jkuhnert - 2008-02-06 09:42@mikaelgrev So - are you saying that generally most ruby or programmers are generally smarter than your average java developer and that's why it's bad? Ie Java is intended for mediocre programmers only? Many of us lowly java programmers are getting very close to abandoning the language entirely because of a lack of support for eliminating repetitive code all over the place - something that this closures proposal would certainly help. I would hope that we'd try and let the community "live up to our expectations" vs. just assuming that they are all too dim witted to get it. by mikaelgrev - 2008-02-06 09:34> Do you think it's possible these guys are a little more experienced with language design than your average experienced java developer? I do, but I think that is part of the problem and not a defence. They are intellectually and educationally too far from the median of the 5.000.000 Java developers out there. by mikaelgrev - 2008-02-06 15:18@ jkuhnert - I have a feeling that Joshua Bloch knows how to define a closure better than both of us... Read the CICE proposal, note the headline: "Concise Instance Creation Expressions: Closures without Complexity" by prunge - 2008-02-06 14:57 would go for the collections API too. People dreaming of dropping iterators and for-loops when dealing with collections and using closures instead will be disappointed. All the collections are interfaces, and therefore methods to support closures can't be added without breaking backward compatibility. by 5er_levart - 2008-02-06 14:56C'mon guys. I liked the puzzler. I happen to enjoy testing my knowledge with puzzlers like that one. 
It's not that I would like to create cross-referencing enums in my programs on a daily basis, but it's useful to know that the program in this puzzler can be modified without a single drop of sweat so that it does not suffer from the unexpected behaviour. And guess what? This can be accomplished elegantly using closures. by jkuhnert - 2008-02-06 08:23@jwenting: Considering Neal's experience and James having helped to create Java, what makes you think your opinion is really "definitely" correct on the closures proposal? Do you think it's possible these guys are a little more experienced with language design than your average experienced java developer? by jkuhnert - 2008-02-06 14:03@mikaelgrev: those aren't closures - they're anonymous inner classes... by mikaelgrev - 2008-02-06 13:54Btw, I am for closures, the CICE kind (with ARM blocks). by jkuhnert - 2008-02-06 13:22yeah, I'm with thiagosc. This debate is getting boring now. Can't wait to see it added to the jdk. by vieiro - 2008-02-06 13:13So, where're the closures? As far as I can see there're only function pointers there. Is that all? Can anybody post a real example of a Java closure, please? Thanks, Antonio by kirillcool - 2008-02-06 12:50That is exactly the point that i address in my next entry. Nothing will be gone, most certainly not at the language level. Do you really think that once closures are in JDK 7.0, the support for inner anonymous classes will be removed from the language? Swing wasn't retrofitted with generics in JDK 5.0, what makes you think that they'll retrofit it with closures in JDK 7.0? by thiagosc - 2008-02-06 12:44Sometimes I think Sun listens too much to their target audience. They don't know what they need and it's your job to find it out before the competition does and provide it. Just put the damned closures there and don't ask questions. People will bitch and moan even if you do nothing, just by the simple fact that it has become fashionable to bash Java even for the silliest reasons.
If no addition is made then Java is dead, if things are improved then Java is too complex. On the other hand Microsoft transforms C# into a Frankenstein and the same bloggers applaud it. Just. by jwenting - 2008-02-06 07:40so even the biggest proponent of closures (read, function pointers) has to admit that they're bad, unintuitive, obscure, and lead to obfuscated and error prone code. (That's Neal obviously, not Kiril). Now if Neal were only enough of a sport to publicly retract his proposal and his advocacy for the entire monstrosity maybe some sanity could be restored to the JCP and the direction of the development of the Java language. by imichtch - 2008-02-06 07:24Yes! Indeed how could Kirill write anything against bad things without even doing them? :-) What about the professional intuition (which normally comes with professional experience)? by ricky_clarkson - 2008-02-06 05:58Kirill: I can't understand why you would blog against this puzzler (and specifically against closures) without having tried it first. You might as well blog saying "XYZ doesn't interest me" and leave it there. I'd like to understand why, because I used to like reading your stuff, specifically about Swing. by imichtch - 2008-02-06 05:20Hi Kirill, I wonder wether you have ever thought about closures during your last 8 years... ;-) by mthornton - 2008-02-06 02:08For outsiders, mathematics has one whopper of a puzzler: we can prove that we can't prove the consistency of all but the most trivial of mathematics. Similarly in programming, systems that do not admit apparent paradoxes are likely to be of limited use. by gafter - 2008-02-05 16:57Re: ." The puzzle has only a little to do with enums as well! The puzzle is mainly about order of static initialization, and it is an issue that was present in the earliest versions of the language. 
While I agree with you that the Java programming language might be better without static initialization, it would hardly be anything like the Java we know today. by ronaldtm - 2008-02-05 16:45Oh, c'mon... If you couldn't have features that cause puzzlers in a programming language, no language would exist at all! Or, at least, none worth using, because it would be so simple you couldn't do anything interesting with it. by kirillcool - 2008-02-05 16:43. by kirillcool - 2008-02-05 16:39And so eventually it boils down to this simple example that has nothing to do with closures:

public class Puzzler {
    public static void main(String[] args) {
        int colorsWithFlavors = 0;
        for (Colour c : Colour.values())
            if (c.flavor != null) colorsWithFlavors++;
        System.out.println("Colours with flavours " + colorsWithFlavors);

        int flavorsWithColors = 0;
        for (Flavor f : Flavor.values())
            if (f.color != null) flavorsWithColors++;
        System.out.println("Flavours with colours " + flavorsWithColors);
    }
}

enum Colour {
    BROWN(Flavor.CHOCOLATE), RED(Flavor.STRAWBERRY), WHITE(Flavor.VANILLA);
    final Flavor flavor;
    Colour(Flavor flavor) { this.flavor = flavor; }
}

enum Flavor {
    CHOCOLATE(Colour.BROWN), STRAWBERRY(Colour.RED), VANILLA(Colour.WHITE);
    final Colour color;
    Flavor(Colour color) { this.color = color; }
}

Undoubtedly, an obscure JLS sub-sub-sub-paragraph will shed light on what happens with enums that cross-reference each other when you try to call the values() method. After using enums sparingly for the last two years, i must admit that i have never found myself in a situation when i needed cross-referring enums. Maybe it's just me. Maybe everyone else programs their entire codebase in enums... by kirillcool - 2008-02-05 16:17Part of the art of writing a puzzle is misdirection, and it appears that you've fallen for it hook, line, and sinker - i was once excited at the "one-upmanship" games during the puzzler presentations.
Like i was excited following all the intricacies of multiple inheritance, operator overloading and pointers... by gafter - 2008-02-05 15:52Re: "Is this puzzler on enums or on interaction between enums and the proposed closures?" Find out for yourself. If you can't be bothered to read and understand the issue, how can you draw any competent conclusions from it? Part of the art of writing a puzzle is misdirection, and it appears that you've fallen for it hook, line, and sinker. by kirillcool - 2008-02-05 13:35Neal, without going into the details of your closure proposal, as you are clearly in a much better position to analyse its interaction with existing Java constructs, i have one question. Is this puzzler on enums or on interaction between enums and the proposed closures? If it's the later, then i stand by my words - the feature as proposed introduces a puzzler where it hasn't been before. If it's the former, why do you need to involve closures to show a puzzler on enums? Thanks, Kirill by gafter - 2008-02-05 12:15Re: "a feature implemented in a way that results in at least one puzzler is not a feature worth having in the language" In that case, when you understand the puzzle and its solution, you're likely to conclude that enums are not worth having in the language. As for the puzzles in the Java Puzzlers book, you're going to want to toss out inheritance, constructors, exceptions, arrays, threads, integers, ... well, just about everything.
The reason is obvious: plenty of these companies report very promising numbers, most as astonishing as the 97 percent increase in conversions Trivago has seen.

Why should we start developing PWAs now? In fact, in 2018 the majority of browser vendors also started backing the technology behind PWAs. Microsoft committed to bring PWAs to “more than half a billion devices running Windows 10”. Google even went as far as calling it the future of app development - no surprise that Lighthouse, Google’s tool for improving the quality of web pages, audits ‘PWA’ next to ‘SEO’ and ‘Accessibility’ of webapps. And even Apple finally started to support PWAs in 2018, even though PWAs are a clear threat to Apple’s app store business.

In this tutorial we will build an income tax calculator. Why? Because calculating income tax (at least in Germany) is complicated, and people would love an app that solves that problem for them. Besides that, it’s also an opportunity to explore the impact of the PWA features mentioned above.

We will use VueJS for this tutorial, as it comes with a great template which makes it easy to kick off a PWA project. Another reason is that VueJS is really easy to learn. No prior experience in any other frontend framework required!

Enough theory for now, it’s time to get our hands dirty! We start off with creating the basic setup and the file structure of our app. To speed things up, we will bootstrap the app with vue-cli. First, we need to install the vue CLI tool globally.

yarn global add @vue/cli

Now we can instantiate the template by

vue init pwa vue-calculator-pwa

We will be prompted to pick a preset - I recommend the following configuration:

? Project name vue-calculator-pwa
? Project short name: fewer than 12 characters to not be truncated on homescreens (default: same as name) vue-calculator-pwa
? Project description A simple tax calculator
? Author Fabian Hinsenkamp <[email protected]>
? Vue build runtime
? Install vue-router? No
?
Use ESLint to lint your code? Yes
? Pick an ESLint preset Standard
? Setup unit tests with Karma + Mocha? No
? Setup e2e tests with Nightwatch? No

For the Vue build configuration we can choose the smaller runtime option, as we don’t need the compiler: the html we write inside our *.vue files is pre-compiled into JavaScript at build time. We don’t add tests here for brevity reasons. If we were setting up a project for production, we should definitely add them.

Next, run yarn to install all dependencies. To start the development mode, just run yarn start.

In the project we will find files with the .vue extension. It indicates that this file is a single-file vue component, one of Vue’s features. Each file consists of three types of blocks: <template>, <script> and <style>. That way, we can easily divide the project into loosely-coupled components. Let’s start creating all the *.vue files our app consists of.

App.vue

Create the file src/App.vue. It is our main view and it will contain the different components which make up our calculator.

<template>
  <div id="app">
    <main>
    </main>
  </div>
</template>

<script>
import Panel from "./components/Panel.vue";
import InputForm from "./components/InputForm";
import Result from "./components/Result";
import { calcTaxes } from "./calc.js";

export default {
  name: "app",
  components: { Panel, InputForm, Result }
};
</script>

<style lang="scss" src="./assets/styles/App.scss"/>

Next, let’s create the input form file src/components/InputForm.vue. It will handle all user inputs required to calculate the income taxes.

<template>
</template>

<script>
export default {
  name: "InputForm"
};
</script>

<style lang="scss" src="../assets/styles/InputForm.scss"/>

Moreover, we create .vue skeleton files for the Result, Panel and Input components, each including a style sheet named identically to the component it belongs to. All of them belong into the src/components folder.
It will display the results of our calculations. The panel is a simple component that wraps the input and result components. Finally, we should remove the Hello.vue file that comes with the vue template.

Next, we add the following libraries to support sass/scss files:

yarn add node-sass sass-loader -D

For now, the scss files we added are all empty. Hence, you have two options: create your own styles or check out the following branch of the github project.

git checkout 01_skeletonApp

We also need logic to calculate our income tax. I use the real German income tax formula. To spare you the details, I also added it to the branch. It also contains some css animations for the input validation message. In case you don’t want to use the branch above, you can also add them manually:

yarn add animation.css

Now we can start coding! To warm you up, we start with building the panel component. It’s good practice to keep such components generic, so they can be reused holding any kind of content. That’s why we aim to pass the headline as well as the html for the body to the component. Let’s add the following code to the template section of the panel.vue file.

<template>
  <div class="panel">
    <div class="header">
      <h2 class="headline">{{ headline }}</h2>
    </div>
    <div class="body">
      <slot />
    </div>
  </div>
</template>

For one-way data binding in VueJS, we can use text interpolation. That’s exactly what we do to render the headline: we simply wrap our headline data object in double curly braces. Attention - this “Mustache” syntax always interprets data as plain text, not HTML. That’s why we also use vue’s slot element, which allows us to render child elements of our panel component within the body element. Now we are done with the html for the panel component; next we define our script logic.
<script>
export default {
  name: "Panel",
  props: {
    headline: String,
  }
};
</script>

First, it’s important to add a name to the component, so we can actually register the component and import it later on. As we want to pass the headline to the panel, we should specify it as a property. Properties are custom attributes we can register on a component.

<template>
  <Panel class="calculator-panel" headline="Income Tax Calculator">
    <template>
      <span>content goes here.</span>
    </template>
  </Panel>
</template>

To see the panel in our app, just add the code above to our app.vue component. In the script block we already import the component, so adding the html is all we need to add our first component to the app! We should see the panel when we run yarn start. If we have any problems implementing the panel, or want to skip this section, check the following branch.

git checkout 02_panel

Next, we build our input form with some neat custom input validation. We have three types of inputs: regular, select and radio. Except for the radio buttons, these inputs need input validation and corresponding user feedback. To avoid repeating ourselves, we should build a reusable input component.

To build a component which is actually reusable, I advise building it in a way that allows us to easily extend it without changing the whole architecture. Defining a clean and thought-through component api is a great starting point. In our case, we always want to control four properties from outside of the component. Let’s translate these requirements into code! The input.vue component looks like the following:

<template>
  <div class="input-wrapper">
    <label>{{label}}</label>
    <input class="input" v-if="type === 'input'" @input="customInput($event.target.value)" />
  </div>
</template>

<script>
export default {
  name: "Input",
  props: {
    type: String,
    label: String,
    validation: String,
  }
}
</script>

We add some custom vue-attributes to the native input component. First we add v-if, which allows us to render the input only if we pass the correct type to our component.
This is important to support different types of inputs. Next, we bind to the component's input event with the @-prefix, calling a method named customInput. That's where our custom input validation comes into play. We add a validation library to the project by running yarn add vee-validate and register the plugin in our main.js file.

import VeeValidate from "vee-validate";

Vue.use(VeeValidate);

Our validation consists of intercepting the native input event and then checking whether the entered value meets our validation rule. If it doesn't, we set an error message. Therefore, we add two methods to the input.vue file. The customInput method is triggered when the user enters any input.

<script>
import { Validator } from "vee-validate";

const validator = new Validator();
...
data() {
  return {
    validationError: ""
  };
},
methods: {
  validate(value) {
    return validator.verify(value, this.validation, { name: this.label });
  },
  async customInput(value) {
    const { valid, errors } = await this.validate(value);
    if (valid) {
      this.validationError = "";
      this.$emit("input", value);
    } else {
      this.validationError = errors[0];
      this.$emit("input", "");
    }
  }
}
</script>

The validation error message is returned from the vee-validate plugin. We only have to add some html to show it to the user:

<template>
  ...
  <!-- animation class names assumed; any animate.css entrance animation works -->
  <transition name="alert-in" enter-active-class="animated flipInX">
    <p v-if="validationError" class="alert">
      {{ validationError }}
    </p>
  </transition>
</template>

I add a transition to the error message. VueJS comes with a transition wrapper; combined with the flip animation from animate.css and some styles, we can get a nice error message without any hassle.

<template>
  <form>
    <Input
      type="input"
      label="Net Income"
      validation="required|numeric"
      v-model="inputs.incomeValue"
    />
  </form>
</template>

<script>
import Input from "./Input";

export default {
  name: "InputForm",
  components: { Input },
  data() {
    return {
      inputs: {
        incomeValue: ""
      }
    };
  }
};
</script>

To add the new input to the app, register the completed Input component in InputForm.vue.
Here we apply two-way binding through v-model, which automatically picks the correct way to update the element based on the input type. Now, we need to open App.vue, import InputForm as we did with Panel, and replace <span> content goes here. </span> with <InputForm/>. The result should look like what you see here on the left side. Check the branch for more details!

git checkout 03_basicInput

Now that we have a basic input with validation in place, it's easy to extend our input component with the remaining two input types: the select and radio input. For the select element we use an out-of-the-box component. We simply add it by running:

yarn add vue-select

Before we can use it, it needs to be registered in the main.js file, similar to the vee-validate plugin before. However, this time we use the Vue.component method.

import vSelect from 'vue-select'
..
Vue.component('v-select', vSelect)

Now simply add the component to our Input.vue file. The options we want to show in the dropdown will be passed to the component as props.

<template>
  ...
  <v-select
    v-if="type === 'select'"
    :options="options"
    @input="customInput"
  />
</template>

<script>
...
export default {
  ...
  props: {
    ...
    options: Array
  }
}
</script>

Now there are only the radio buttons left to add. We start off with the native html element. Even though we just need two radio buttons at the moment, I advise building the component in a way that allows an arbitrary number of inputs to be passed. Therefore, we simply use Vue's v-for attribute to loop over the options property and create a radio button for each element of the options array.

<template>
  <div v-if="type === 'radio'" v-for="option in options" :key="option.label">
    <input
      type="radio"
      class="radio"
      :checked="option.value === value"
      @change="customInput(option)"
    />
    <label>{{ option.label }}</label>
  </div>
  ...
</template>

<script>
...
props: {
  ...
  value: [String, Object, Boolean]
}
...
</script>

Additionally, we need to pass the currently selected value in order to manage the 'checked' state of the radio button inputs. In the script block we add an array holding all possible value types.
To test our new input types, we need to actually add them to the submit form and pass options to the select and radio input. Check the inputForm.vue file in the following branch to see how the options are passed to the new inputs. It follows the same pattern we investigated in detail for the regular input. Most importantly, keep in mind to always pass an object containing a value and a label. That's it! We managed to create a component that allows us to add all the input types we need and validate them without repeating ourselves! Now we can finalise the input form. The only thing missing is the button to submit the form.

<template>
  <form @submit.prevent="handleSubmit">
    ...
    <button class="submit-btn" :disabled="!isEnabled">
      Calculate!
    </button>
  </form>
</template>

<script>
...
computed: {
  isEnabled: function() {
    return !!Object.values(this.inputs).every(Boolean);
  }
},
methods: {
  input: function(input) {
    if (input.type === "input") {
      this.incomeValue = input.value;
    }
  },
  handleSubmit: function() {
    const { yearValue, isInChurch, stateOfResidence } = this.inputs;
    const inputValues = {
      ...this.inputs,
      yearValue: yearValue.value,
      isInChurch: isInChurch.value,
      stateOfResidence: stateOfResidence.value
    };
    this.$emit("submitted", inputValues);
  }
}
</script>

We start by preventing the default html form event and calling our custom method handleSubmit instead. There we clean up our input results: although our dropdown and radio buttons return objects with a label and a value, we are only interested in the value to calculate the results. Finally, we create a custom event which emits only the values of our input options. We also create a computed property which enables our "Calculate" button only after all required data has been entered by the user. You can find the completed input form on the branch mentioned above.

git checkout 04_completeInputs

Now we are already able to get the inputs from the user; next we need to actually calculate the resulting income taxes.
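The branch itself isn't reproduced here, but based on the value/label pattern the text describes, the options data handed to the select and radio inputs might look like the following sketch. The concrete option names and values are assumptions for illustration, not taken from the project:

```javascript
// Hypothetical sketch of the options data for the select and radio
// inputs -- each entry pairs a human-readable label with the raw value
// the calculation will use, as the tutorial recommends.
const stateOptions = [
  { label: "Bavaria", value: "BY" },
  { label: "Berlin", value: "BE" },
];

const churchOptions = [
  { label: "Yes", value: true },
  { label: "No", value: false },
];

// The components render the labels; handleSubmit later strips the
// objects down to their values before emitting the "submitted" event.
function toValue(option) {
  return option.value;
}

console.log(stateOptions.map(toValue)); // ["BY", "BE"]
```

Keeping the label/value pair together in one object is what lets the same Input component serve both the dropdown and the radio group without extra props.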
As I mentioned in the very beginning, I want to spare you the details of how the different types of deductions are calculated. In case you are interested anyway, check out the calc.js file.

<template>
  ...
  <InputForm @submitted="submitted" />
  ...
</template>

<script>
...
import { calcTaxes } from "./calc.js";

export default {
  ...
  data() {
    return {
      calculations: {}
    };
  },
  methods: {
    submitted: function(input) {
      const calcValues = calcTaxes(input);
      this.calculations = {
        grossIncome: { label: "Gross Salary", value: calcValues.incomeValue },
        tax: { label: "Income Tax", value: -calcValues.incomeTax },
        churchTax: { label: "Church Tax", value: -calcValues.churchTax },
        soli: { label: "Solidarity Charge", value: -calcValues.soli },
        netIncome: { label: "Net Salary", value: calcValues.netIncome }
      };
    },
  }
};
</script>

Most importantly, note that we use the custom submitted event to pass the inputs back to our App.vue component. Here we also calculate the actual taxes and store the resulting values with labels, adding negative signs to the values deducted from the gross income. Why? Because it makes displaying the results very simple, as we will see in the next section. Now we have everything we need to finally show the results of our tax calculations. Therefore, we use a native html table to show each label and the corresponding value in a structured manner. The implementation is quite simple, as we can stick to what we have learned about VueJS already. In fact, we repeat what we have done for the radio input: we just pass our calculations as props to the results component and loop over our results object.
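The real calc.js implements the actual German tax formula and lives only on the branch. As a stand-in to illustrate the return shape that the submitted method consumes, here is a deliberately fake flat-rate version; every rate in it is a made-up assumption, not German tax law:

```javascript
// Toy stand-in for calc.js -- NOT the real German income tax formula.
// It only mirrors the return shape used by submitted():
// incomeValue, incomeTax, churchTax, soli and netIncome.
function calcTaxes(input) {
  const income = Number(input.incomeValue);
  const incomeTax = income * 0.2;                            // fake flat 20% rate
  const churchTax = input.isInChurch ? incomeTax * 0.09 : 0; // assumed rate
  const soli = incomeTax * 0.055;                            // assumed rate
  return {
    incomeValue: income,
    incomeTax,
    churchTax,
    soli,
    netIncome: income - incomeTax - churchTax - soli,
  };
}

console.log(calcTaxes({ incomeValue: "50000", isInChurch: false }));
```

Any implementation that returns an object with these five fields will plug into the submitted method unchanged, which is the point of keeping the calculation in its own module.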
<template>
  <div>
    <table>
      <tr class="table-head">
        <th class="name">Name</th>
        <th class="value">Value</th>
      </tr>
      <tr v-for="result in results" :key="result.label">
        <td>{{ result.label }}</td>
        <td class="value">
          {{ result.value }}
        </td>
      </tr>
    </table>
    <button v-on:click="handleBackClick">
      Back
    </button>
  </div>
</template>

<script>
export default {
  name: "Result",
  props: {
    results: Object
  },
  methods: {
    handleBackClick: function() {
      this.$emit("clearCalculations");
    }
  }
};
</script>

As users will probably want to perform multiple calculations without refreshing the page, we add another button that leads users back to the input form. To do so, we simply emit a custom event called clearCalculations to clear our calculations property in the parent component. Finally, we tie all our components together and complete our income tax calculator. As always, check out the branch if you want to have a more detailed look at the code.

git checkout 05_result

In this last section there are only two things left to do: complete the data flow, and manage the lifecycle of the input and result components accordingly.

<template>
  <div id="app">
    <main>
      <Panel class="calculator-panel" headline="Income Tax Calculator">
        <template>
          <transition
            name="alert-in"
            mode="out-in"
            enter-active-class="animated flipInX"
          >
            <InputForm v-if="!resultsCalculated" @submitted="submitted" />
            <Result
              v-if="resultsCalculated"
              :results="calculations"
              @clearCalculations="clearCalculations"
            />
          </transition>
        </template>
      </Panel>
    </main>
  </div>
</template>

<script>
...
export default {
  ...
  data() {
    return {
      calculations: {}
    };
  },
  computed: {
    resultsCalculated: function() {
      return Object.keys(this.calculations).length !== 0;
    }
  },
  methods: {
    ...
    clearCalculations: function() {
      this.calculations = {};
    }
  }
};
</script>

In the Result.vue component we just added a button which emits the clearCalculations event. On the left, we see our main component App.vue. Here we subscribe to the event and reset the calculations object to be empty. Now, we want to render only the input or the result component. Therefore, we add another computed boolean property, which checks whether we have calculation results or not.
Now, we add v-if attributes to our components based on our resultsCalculated property. Try it out! We should now see the table with the results only after we have successfully entered our inputs. To make the switch between input and results less harsh, we add a transition. As we are replacing one component with the other here, we use the mode attribute out-in so that the current element transitions out first and then, when complete, the new element transitions in. We completed the tutorial! Well done! The branch with the final application code is the following:

git checkout 06_complete

In this tutorial, we will implement and discuss the Service Worker, one of the core technologies that adds the astonishing offline-first experience of a native app to a regular web app. We jump right into learning what offline-first means for app development by checking out the tax calculator app as our starting point. In the case of our tax calculator this isn't a big problem; otherwise, one of the other caching strategies could be a better choice. To successfully implement any of the mentioned caching strategies, it's important to be familiar with the life cycle of a service worker. So let's look at a simplified version of the actual lifecycle for now. We look at the following stages: installing, activating and activated. Each of these stages must complete before the worker moves on to the next.

self.addEventListener("install", function(event) {
  event.waitUntil(
    caches
      .open(cacheName)
      .then(function(cache) {
        return setOfCachedUrls(cache).then(function(cachedUrls) {
          return Promise.all(
            Array.from(urlsToCacheKeys.values()).map(function(cacheKey) {
              ...
              return fetch(request).then(function(response) {
                ...
                return cleanResponse(response).then(function(responseToCache) {
                  return cache.put(cacheKey, responseToCache);
                });
              });
            })
          );
        });
      })
      .then(function() {
        // Force the SW to transition from installing -> active state
        return self.skipWaiting();
      })
  );
});
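Before the install handler above ever fires, the page itself has to register the worker. The generated setup usually does this for you; a minimal registration sketch, where the script path '/service-worker.js' is an assumption about how the build outputs the worker, looks like this:

```javascript
// Minimal sketch: registering the service worker from the page.
// '/service-worker.js' is an assumed path -- build tooling typically
// generates this file and a registration snippet for you.
function registerServiceWorker(swUrl) {
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    // Not in a browser (or no service worker support) -- nothing to do.
    return Promise.resolve(null);
  }
  return navigator.serviceWorker.register(swUrl).catch(function (err) {
    console.warn("Service worker registration failed:", err);
    return null;
  });
}

registerServiceWorker("/service-worker.js");
```

Registration kicks off the installing stage described above; the browser then drives the worker through the rest of the lifecycle on its own.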
self.addEventListener("activate", function(event) {
  var setOfExpectedUrls = new Set(urlsToCacheKeys.values());
  event.waitUntil(
    caches
      .open(cacheName)
      .then(function(cache) {
        return cache.keys().then(function(existingRequests) {
          return Promise.all(
            existingRequests.map(function(existingRequest) {
              if (!setOfExpectedUrls.has(existingRequest.url)) {
                return cache.delete(existingRequest);
              }
            })
          );
        });
      })
      .then(function() {
        return self.clients.claim();
      })
  );
});

self.addEventListener("fetch", function(event) {
  if (event.request.method === "GET") {
    var shouldRespond;
    ...
    if (shouldRespond) {
      event.respondWith(
        caches
          .open(cacheName)
          .then(function(cache) {
            return cache
              .match(urlsToCacheKeys.get(url))
              .then(function(response) {
                return response;
                ...
              });
          })
          .catch(function(e) {
            console.warn(
              'Couldn\'t serve response for "%s" from cache: %O',
              event.request.url,
              e
            );
            return fetch(event.request);
          })
      );
    }
  }
});

That's all we need to know about Service Workers and caching for now. We have covered the essential bits and pieces to make our app a true offline-first experience! Thanks for reading!

Originally published by Fabian Hinsenkamp at:
https://morioh.com/p/5c248fc9ed31/build-a-progressive-web-app-in-vuejs
Hi. How can I create a portable split embedded/local Firebird database in the manner that is done with HSQLDB?

@Ratslinger I did misstate one detail on the embedded Firebird sdbc behavior: data changes (inserts, deletes and edits) are written to the disc image when you save the file, not when you close it, per se. (a significant difference)

@drewjensen Thanks. That is what I had originally understood from various posts on Bugzilla.

Hello, Hope I am correct in saying that when this file is to be moved to another computer (or wherever) it is no longer accessible from the server. If this is incorrect, please give more info on the entire picture you are attempting. The following is based upon the DB no longer being connected to the Firebird 3.x server.

There are a couple of ways to make the DB (or a copy of it) portable. The first is to actually turn it into an embedded DB. The embedded Firebird file is just a backup file of the .fdb DB you have. The basics of this are covered in my answer on this post → Base: HSQLDB to Firebird migration for existing databases.

The other manner (just tested) is to use the SDBC connection and a macro (a bit like portable split HSQL DBs). You select this connection when creating the DB by selecting Firebird File after selecting Connect to an existing database. You can change an existing connection (JDBC for example) by selecting from the menu on the opening .odb screen Edit->Database->Connection Type.... With this you connect to a xxx.fdb DB via a file path.

With the above in place, the next element is to check if the current setting is in concert with the current location. In case it is not obvious: for this to be effective, the .odb & .fdb need to be in the same directory.
The following is the macro I used to test the process (Linux Mint 18.3 OS):

Option Explicit

Sub CheckLocation
    Dim sPathURL As String
    Dim sPath As String
    Dim sName As String
    Dim sPath2 As String
    Dim sLeft As String
    Dim sDB As String
    Dim sCheckPath As String
    Dim iLen2 As Integer
    Dim iURLLen As Integer
    Dim x As Integer
    Rem Get Document location
    sPathURL = ThisDatabaseDocument.URL
    sPath = ConvertFromURL(sPathURL)
    sName = ThisDatabaseDocument.Title
    iLen2 = InStr(sPathURL, sName)
    sPath2 = Left(sPathURL, iLen2-1)
    Rem Strip needed current source info & get fdb name
    sPathURL = Right(ThisDatabaseDocument.DataSource.URL, Len(ThisDatabaseDocument.DataSource.URL)-14)
    iURLLen = Len(sPathURL)
    For x = iURLLen - 4 To 1 Step -1
        sLeft = Left(sPathURL, x)
        If Right(sLeft, 1) = "/" Then Exit For 'Check for "\" if on Windows'
    Next x
    sDB = Right(sPathURL, iURLLen - x)
    Rem create what should be current location
    Rem Removed: sCheckPath = "sdbc:firebird:" & sPath2 & sDB
    Rem Replaced with:
    sCheckPath = sPath2 & sDB
    Rem If current location is not same as constructed location, replace the current location
    If sCheckPath <> sPathURL Then
        Rem Removed: ThisDatabaseDocument.DataSource.URL = sCheckPath
        Rem Replaced with:
        ThisDatabaseDocument.DataSource.URL = "sdbc:firebird:" & sCheckPath
    End If
End Sub

This was attached to the Open Document event of the .odb file. Limited testing has been done, and again on Linux only, but have seen no problems in testing thus far.

Edit: While the original routine worked, it was always making the change. Changing two lines will correct this. The above code was modified; the changes are near the end of the routine. The old lines are Rem'ed & the new ones noted.

Edit 2018-08-10: In light of the comments by @drewjensen above, do not use the process above for the Firebird File connection unless the commit is incorporated (see comments below question). Will update this answer when I've had a chance to investigate further.
In the meantime, using the DB in a Firebird embedded .odb does work, as I have used this for months.

Edit #2 2018-08-10: Have done some initial testing. With this macro:

Sub PerformCommit
    Dim oStatement As Object
    Dim sSQL As String
    Dim result
    If IsNull(ThisDatabaseDocument.CurrentController.ActiveConnection) Then
        ThisDatabaseDocument.CurrentController.connect
    End If
    oStatement = ThisDatabaseDocument.CurrentController.ActiveConnection.createStatement()
    sSQL = "commit"
    On Error Goto SQLerror
    result = oStatement.execute(sSQL)
SQLerror:
End Sub

attached to the View is going to be closed event of the .odb, all data seems to be saved. Note that this does not mean you can forgo the normal saving when required. This is simply to alleviate the problem noted previously. If attempting, any problems noted will be appreciated.

Note: The commit problem has been fixed in newer LO versions. LO v6.3.x seems to work without problem. The fix may have been as early as v6.2.1.x.

Edit 2021-10-30: Because of LO v7.2.x tdf#144329 this sub does not work correctly. To fix, replace:

sName = ThisDatabaseDocument.Title

with:

Rem Fix for no name in title bar
GlobalScope.BasicLibraries.LoadLibrary("Tools")
sName = FileNameoutofPath(sPathURL)

OR you can fix it and place the title at the same time. Place:

Rem Fix for no name in title bar
GlobalScope.BasicLibraries.LoadLibrary("Tools")
ThisDatabaseDocument.Title = FileNameoutofPath(sPathURL)

after:

sPathURL = ThisDatabaseDocument.URL
https://ask.libreoffice.org/t/how-can-i-create-a-portable-split-embedded-local-firebird-database-in-the-manner-that-is-done-with-hsqldb/34851
08 March 2012 09:06 [Source: ICIS news]

SINGAPORE (ICIS)--

The plant, which is located at Lin Yuan in

The shutdown is not related to the current poor margins in PTA production, according to the source. "Margins are poor, but demand [remains] healthy," the source said.

The company recently restarted its 250,000 tonne/year No 2 PTA unit at the same site. The plant was idle for four months from late October last year because of unfavourable market conditions, the source added.

CAPCO's 700,000 tonne/year No 6 PTA plant at

The major producer operates six lines in total, namely five in

CAPCO is a joint venture between UK-based BP, which holds a 61.4% stake,
http://www.icis.com/Articles/2012/03/08/9539462/taiwans-capco-shuts-no-5-pta-plant-for-scheduled-maintenance.html
User talk:Deco. LtU and WPCS I've posted mentions of this wiki at both the WikiProject Computer science talk page, and on LtU, which will hopefully help with recruiting :-) --Allan McInnes 12:21, 2 March 2006 (PST) - Wow, thanks! I considered posting in those forums but I was concerned that it would be a little too early (considering my article and user count). I really appreciate you getting involved. Please tell me if you have any questions, suggestions, or problems. Deco 12:27, 2 March 2006 (PST) - I couldn't think of a better way to improve both the article and user count than by getting more folks involved. I think the idea speaks for itself, even if the content is a little minimal at this point. Ultimately, I foresee a nice synergy between the content here, on Wikipedia (in the WPCS project), and on the planned LtUopedia. Seems like it makes sense to get the same folks involved in all 3 projects. --Allan McInnes 12:36, 2 March 2006 (PST) - I agree - recruiting has a difficult bootstrapping problem, as people are rightfully suspicious of getting involved with projects with minimal content, but I hope the novelty of the concept (and your endorsement :-) helps. I didn't know about LtUopedia, that's interesting. I just hope my choice of license doesn't interfere too much with exchange of content with the others. Deco 12:40, 2 March 2006 (PST) - Thanks again for your advertisement - it seems well-received, with some of the first substantive edits in the sample article and several new users. I was also surprised to see people in WPCS behind it. I appreciate it. Deco 16:07, 2 March 2006 (PST) - You're welcome! I must admit, I'm a bit surprised by the rapidity of new user sign-ups. But it's good to see that others can see the potential here. --Allan McInnes 16:18, 2 March 2006 (PST) - I've also added a link back here to the External Links section of the LP wikipedia page. Thanks for the site Deco, and thanks for the LtU tip (where I saw it) Allan! 
lukfugl 16:33, 2 March 2006 (MST) - Thanks! I'm not sure if I deserve an external link for a site so young and undeveloped, so I'm not sure if it'll stick, but I appreciate your adding it. Deco 16:36, 2 March 2006 (PST) - FYI, we've just revised the (draft) WikiProject Computer science "manual of style" to recommend that new sample implementations be placed here on LP, and that algorithm articles provide an external link to the corresponding LP pages (if they exist). --Allan McInnes 17:14, 2 March 2006 (PST) - I'd actually recommend they link the corresponding category, such as Category:Insertion sort, if it exists (this may be what you meant). One of my goals is to avoid the maintenance overhead of list articles as much as possible. Deco 17:20, 2 March 2006 (PST) - That's certainly what I had in mind. But in thinking about your comment, I suspect I need to make the wording clearer in our guidelines. I'll do that now. --Allan McInnes 17:30, 2 March 2006 (PST) Syntax highlighting I notice that you're considering moving the syntax highlighting defs into MediaWiki articles. Would this also make it possible to easily add definitions for new languages? --Allan McInnes 17:09, 2 March 2006 (PST) - Yup, that's the idea. The noweb tool is already language-independent for the most part - the only things that really depend on language are syntax highlighting and the license comment header appended to source files. Deco 17:12, 2 March 2006 (PST) - Great! --Allan McInnes 17:15, 2 March 2006 (PST) Brilliant idea I've been thinking of starting something similar for a while. This will be a great resource. Let's make TAOCP obsolete :-) - Fredrik 17:16, 2 March 2006 (PST) - Thanks for your reassuring comments, Fredrik. I welcome any contributions or suggestions from you. Deco 17:17, 2 March 2006 (PST) A problem that comes to mind immediately is handling of multiple programming languages. 
I see for example that there are already two articles on insertion sort, for C and Ruby but with essentially the same text. This is obviously going to make maintenance difficult (imagine when we have 100 languages...). Perhaps add the ability to provide any code block in multiple languages (the user chooses which one to display/download)? Fredrik 17:29, 2 March 2006 (PST) - Well, there are some subtle differences in the explanations of those articles, such as in the printing step. I'm afraid that providing "multi-language blocks" would create a situation where people avoid idiomatic code in favour of code that fits the existing prose. I think sharing information using templates could help avoid some of the redundant parts, though. Deco 17:34, 2 March 2006 (PST) An option might be to attach language-specific notes to individual code blocks. Templates may work. (In fact, for very generic topics, you could use a full article as a template and provide per-language-page code blocks as parameters.) Experimentation to be done... Fredrik 17:39, 2 March 2006 (PST) - One problem is that right now I believe I'm feeding the raw wikitext to noweb. I really should be using wikitext with all templates fully expanded. I'll look at fixing this. Deco 18:14, 2 March 2006 (PST) View code tab Hi, first let me start by saying that this is a great idea for a website. I have a suggestion; why not have a view code tab at the top of an article that will simply display the code in the browser (possibly syntax highlighted), as an alternative to downloading it. I think this would be very convenient. Mike Kucera - Thanks for your encouragement! I appreciate how annoying it can be to download an archive for every change. One approach might be to just have a "code" tab viewing code on the web, with a "Download as .zip" link at the top of the page. This seems more consistent with the general UI expectations of tabs anyway. 
Deco 11:28, 3 March 2006 (PST)

Welcome messages

Hope you don't mind me plastering "welcome" messages over the talk pages of all of the new users. I figured that it wouldn't hurt to emulate Wikipedia in that regard, and thought it would also help to build a sense of community. --Allan McInnes (talk) 18:04, 3 March 2006 (PST)
- Not a problem at all. The only thing I might change is that it'd be great if you could try to incorporate specific comments about their contributions and stated interests (even if they're only template parameters!). This helps to make the welcome feel less like spam and more like personal interest in them as a unique contributor. Deco 18:07, 3 March 2006 (PST)
- Good point. I pretty much just grabbed the standard WP welcome template, and reworked it for LP. Perhaps it'd be useful to have another template that allows more personalization. OTOH, it's easy enough to add a few personal comments after the standard template message. Since many of the new users haven't made any contributions (or stated any interests) yet, it's sort of hard to personalize those welcome messages. But I agree that I probably could have done a better job on some of them. --Allan McInnes (talk) 18:25, 3 March 2006 (PST)
- Yeah, that's the tricky part. Sometimes I wait a bit to see what they create. On the other hand, some might not create anything if they're not welcomed. I'm not complaining though, any welcome is better than none and especially now making new users happy is extremely important. Thanks for helping out with this. Deco 18:27, 3 March 2006 (PST)

Scheme

Hi Deco, thanks for the welcome message and for editing! I posted right after finding out about this wiki, so I've looked at only the bare minimum of syntax I needed at that point. That's what wikis are for! Is Scheme syntax highlighting going to be supported in the near future? In general, would it be preferable to declare the language used even though it is not supported yet?
Salimma 09:01, 4 March 2006 (PST)
- Hi Salimma! Syntax highlighting expansion is something I'm looking at right now and I'll keep your request in mind. It shouldn't be too hard since we already have Lisp support. You should go ahead and declare the language, because it will work retroactively. By the way, I'd like to give Scheme warnings/errors on the code page - can you tell me which Scheme system I should set up and how to get it to process code for errors without running the code? Thanks. Deco 13:59, 4 March 2006 (PST)

Format idea

Would it be possible to give codeblocks a different background and border color than other preformatted text? That would be helpful in making codeblocks a little more obvious, for example in an article like Insertion sort (Haskell, simple). --Allan McInnes (talk) 12:53, 4 March 2006 (PST)
- This is a good idea. I'd just have to do something with the style sheets and the HTML generated by codeblock tags. Hmmm. Deco 15:03, 4 March 2006 (PST)

Odd error

When bringing up the "code" page for Merge sort (Haskell) I get the following error message at the top of the page:
- Warning: unlink(/tmp/litprog7642224/mergesort.hi): No such file or directory in /nr/web/literateprograms/includes/SpecialDownloadcode.php on line 340
I haven't seen this error before, so I'm assuming it's the result of one of the recent changes you've made (perhaps involving the syntax highlighting?) --Allan McInnes (talk) 10:09, 5 March 2006 (PST)

Update: Is this possibly caused by the fact that the merge sort (and insertion sort, where the error also occurs) files don't actually generate an executable? I don't observe the same problem on Hello World (Haskell) or the markov simulator. Is there an easy fix for this? Or should I just make the relevant literate programs executable (by, say, adding some test drivers?
--Allan McInnes (talk) 10:15, 5 March 2006 (PST)
- Yeah, there's my lack of Haskell experience showing, I was trying to fix the appearance of .hi files for the other examples. I now delete it only if it exists. Thanks for the heads up. Deco 12:27, 5 March 2006 (PST)
- That seems to have fixed things. Thanks! --Allan McInnes (talk) 14:34, 5 March 2006 (PST)

Advanced user templates

Hi, I'm not an expert in templates, and I want to implement one having the following features:
- it's a template for a community of users
- users can put it on their page (with argument user for instance)
- they can put it on pages maintained by the community (with argument page)
- the text in the right part of the box must change according to the argument (this user is part of the community if the argument is user, and this page is maintained by the community if the argument is page)
- a page (for instance :Category:User:Community:community-name) has to list all the users having the template in their pages
- and another one (for instance :Category:community-name) has to list all the pages having the template in it
Is it easy to build? Can you do it for me? Or explain to me how to do this? I began something here. thanks - Charles 6 March 2006.
- You appear to be looking for something similar to Wikipedia's WikiProjects. You probably want to simply have two different templates, one for users and one for pages. We can generate a corresponding category of pages bearing the template in the same way the language templates do, by including the category in the template. The one thing I'd change is that, in order to keep project content as reusable as possible, it's best to put this kind of template on the talk page. WikiProjects do so on Wikipedia.
- I'm going to suggest though that we come up with a standard nomenclature and metatemplates (templates used to make other templates) for this kind of project community, for consistency and so that others can easily create new communities.
I suggest we create a page LiteratePrograms:Communities, and that each community have a page LiteratePrograms:Whatever Community. We could have metatemplates Template:Community member and Template:Community article, templates for each community like Template:Whatever Community member, Template:Whatever Community article, and categories like Category:Whatever Community members and Category:Whatever Community articles. I'll look into setting some of this stuff up and get back to you. Deco 03:03, 6 March 2006 (PST)
- It seems that's indeed exactly what I need.
- I tried to implement this using the two templates, and finally it worked (even if it took some time to be able to see all links).
- the last pending point is to be able to have a box with the content of what links here in the community page - Charles 7 March 2006.
- Wow, you took care of this all nicely. It's not currently possible to transclude the contents of "What links here." However, with a category page this is unnecessary, as any page containing the category (using the syntax without the : after the [[) will already be listed in the category. Hope this helps. Deco 02:05, 8 March 2006 (PST)

New features request

First: thank you for LP.org, it's very useful and usable. Now I think that I have always waited for a tool like this one... I have some suggestions for new features that could enhance LP.org:
- it could be useful to have pages with the list of available codeblocks for each language, with links to containing pages
- it could be useful to cite codeblocks in plain text, like this for instance <<my block name>>, and have it in a special syntax (like piped words in web systems); I tried two forms in the sandbox; those citations could be present in the upper list
- it could be useful to use blocks from other pages, using something like <<:other page#block name>>
and a last few questions:
- do you plan to release a version of LP.org to allow people like me to install it at home and/or at work?
- what are the requirements to be able to run it?
- how long do you think LP.org will be online? I really think about using it for a community of scientists I belong to, but it has to be online for several years, or I have to be sure that I will be able to get our work back if LP.org is ever turned off.

Thanks, Charles, 7 March 2006.
- Hi, Charles, these are all valid questions/suggestions. To answer your questions, I will be releasing the software in the form of a patch on the most recent MediaWiki server (first I have to get my ad hoc changes cleaned up and in source control). I also will be keeping LiteratePrograms up indefinitely, as I plan to always own a server and so I will always have the means of running it. Currently I have bandwidth limitations (DSL) but after I colocate I'll have bandwidth to spare. Additionally, I will make dumps of the database available for download on a regular basis. I hope some kind people will also consider mirroring my dumps.
- For your suggestions, these are a bit trickier. I like the idea of automatically converting <<these>> into links to chunks. I was also considering automatically adding links to API references and that sort of thing. Links from one page to another, though, raise versioning issues: if the links always refer to the current version, changes to the other page might break things, but if they refer to a single static version they won't receive bug fixes and such. I'm leaning towards a single static version, since at least you can update the version as you please. Thanks a lot for your feedback. Deco 02:29, 7 March 2006 (PST)
- Follow-up: see LiteratePrograms:Downloading LiteratePrograms source code and content. Deco 04:59, 7 March 2006 (PST)

user css/js

hi deco, thanks for the cool idea and setting up this wiki! could you set

    $wgAllowUserJs = true;
    $wgAllowUserCss = true;

please? btw, ff is a bit unhappy about main.css, as the "center" in

    #f-poweredbyico { float: center; height: 1%; }

seems to be an illegal value.
-- ∂ 04:49, 7 March 2006 (PST)
- Oops. Thanks for the heads up. I'm uncertain about enabling JS and CSS due to the potential security risk. Could I ask what you want to use it for? Deco 04:55, 7 March 2006 (PST)
- have a look at User:D/monobook.js and User:D/monobook.css
- atm i just wanted to have my familiar mediawiki look with the search box at the top, but i have a lot of ajax-style code which would be nice to have here, too. example: watching and unwatching pages with a single click without having to reload.
- the security risk depends mostly on who you make an admin: only the user the script belongs to can modify it - and any admin. the latter should imho be forbidden.. -- ∂ 06:14, 7 March 2006 (PST)

New background

I like the new page background! Took me a while to notice it, but I think it looks great! --Allan McInnes (talk) 16:58, 7 March 2006 (PST)
- Thanks. It's made from code on the site. I had a bit of trouble making it not be so loud as to draw attention away from the important elements. If you took a while to notice it, I guess I succeeded in that. :-) Deco 16:59, 7 March 2006 (PST)

Special:Upload

Just noticed that the Special:Upload page doesn't have a checkbox for confirming lack of copyright infringement (which it should have), but does have a checkbox for "watch this page" (which seems odd). I can't see any obvious place to fix this within the MediaWiki namespace. Am I missing something, or is that a change that you have to make outside of the wiki system? --Allan McInnes (talk) 11:17, 9 March 2006 (PST)
- As an aside, do you think it's worth having a specific "image license" description on the image page, as WP does? Or is the copyright checkbox sufficient, and everything assumed to be under MIT/X11? --Allan McInnes (talk) 11:19, 9 March 2006 (PST)
- Actually I think this is a Wikipedia enhancement - I'm just using base stuff for uploads.
After thinking about this for a bit, I think images are often integral to the discussion, so I can't really allow them under more restrictive licenses than MIT. Also, because MIT is rather strange when applied to non-textual content, I'm tempted to insist on public domain/free use for all images. Since most images that are useful on LP are diagrams created by the authors anyway, this seems entirely practical (plus there are many existing images in this category, see Category:Public domain at Commons). Deco 11:31, 9 March 2006 (PST)
- That sounds reasonable. Thanks. --Allan McInnes (talk) 12:12, 9 March 2006 (PST)
- Follow-up: I wrote a license tag for this and added it to both Special:Upload and to the top of every image page. I think this will get the message across and really simplify our whole image licensing story. Deco 12:20, 9 March 2006 (PST)
- So I saw. I agree that simplifies things a lot - good idea! --Allan McInnes (talk) 12:23, 9 March 2006 (PST)

Template parameters

I've been experimenting with using template parameters that have default values (e.g. {{{param|default}}}) on User:Lehalle's Template:Go to the article, and failing miserably. Am I doing something wrong, or does the version of MediaWiki we're running here not support parameter defaults? --Allan McInnes (talk) 12:14, 9 March 2006 (PST)
- I failed too, and I'm interested in any solution - User:Lehalle 11 March
- It's possible that it's a 1.6 feature. I'd have to check. I haven't used default template parameters myself, so I'm unsure. Deco 12:16, 9 March 2006 (PST)
- It appears that default parameters are a 1.6 feature. Oh well. Guess I can live without them :-) --Allan McInnes (talk) 12:22, 9 March 2006 (PST)

Programming languages snippets

Do you know ? --Skim 15:59, 9 March 2006 (PST)
- Hi Skim.
Yes, I'm familiar with the Shootout, but unfortunately I can't accept their programs without the authors' permission because they're released under the Modified BSD license, which is more restrictive than the MIT license used on LP. Thanks for the link though, and welcome again. Deco 16:52, 9 March 2006 (PST)

Maybe we could come up with our own set of benchmarks? Fredrik 15:25, 11 March 2006 (PST)
- Maybe, but meaningful time measurements generally require establishing a very specific platform for the code, and I personally prefer code that's more portable. Also, benchmarks tend to focus on performance over clarity (I think?). It's not that we can't have performance-oriented snippets, but I wouldn't consider them my first priority. Deco 15:28, 11 March 2006 (PST)
- Optimization often involves non-obvious solutions; it is often instructive to explain such code in detail. A category of articles that describe specific optimization tricks in specific languages might seem more "encyclopedic" (if that concept is applicable), but a category of articles describing how to optimize for a particular problem in multiple languages (and a comparison of the results) would be useful as well. Fredrik 15:40, 11 March 2006 (PST)
- I guess it would be instructive to take a simple program and demonstrate some optimizations. Maybe we could take an existing article for a problem like (for example) Insertion sort (C, simple) and make an Insertion sort (C, optimized) that goes through some optimizations. That way it still fits in the problem-and-implementation framework. It could exhibit timings on a few different common platforms to demonstrate improvement. Of course, we don't want to duplicate the compiler's optimizations too much. Deco 15:55, 11 March 2006 (PST)

Ocaml Syntax

Hi, is it possible to have OCaml highlighted syntax, please? thanks, Charles.
- Hey Charles. I'll look into this, and also into highlighting for related languages like Standard ML.
Deco 03:33, 17 March 2006 (PST)

XML Syntax

Hi, is it possible to have XML highlighted syntax, please? (Sometimes XML files are used by programs as configuration files, and anyway XML could be one of the languages of LP.org.) thanks, Charles.
- Sure, this would be relatively straightforward, and could be an important basis for highlighting lots of XML-based languages like XSLT, HTML, and so on. Deco 08:52, 17 March 2006 (PST)
- Perhaps we could set up a "requested syntax highlighting additions" noticeboard of some kind. That way other people could become aware of requests such as this one (as well as posting new ones), and you (Deco) wouldn't be the only one on the hook to implement all of these requests. --Allan McInnes (talk) 12:53, 17 March 2006 (PST)
- Sure. The first place that occurs to me is Mediawiki talk:SyntaxHighlightingRegexps. I've added a section there for this purpose. Deco 15:53, 17 March 2006 (PST)

Mail password broken?

When I ask the system to send me a new password, I get the message "Error sending mail: error_log($IP/debug.log): failed to open stream: No such file or directory". Could you correct this? Thanks.
- It should have worked anyway. I fixed the error message though, thanks for the heads up. Deco 14:15, 18 March 2006 (PST)
- I tried again and it does not work...
- You probably entered an incorrect e-mail address during your original signup (and never received the e-mail verification mail). Please give me your e-mail and user name and I'll update it. Deco 14:37, 18 March 2006 (PST)
- Thanks. They are "christophe.troestler@umh.ac.be" and "ChriS" respectively.
- Looks like you already have the right e-mail on your account. Are you sure it's not being blocked or placed in spam mail? I forwarded you a new password mail through my ISP - if you still don't receive this, something is definitely wrong with your mail. If it doesn't work, don't click it again, just tell me and I'll contact you in some other manner with the new password.
Deco 15:32, 18 March 2006 (PST)

LaTeX problem

I've started getting a "Failed to parse (PNG conversion failed; check for correct installation of latex, dvips, gs, and convert)" error on pages that use embedded LaTeX (such as Category:Fibonacci_numbers). Perhaps a symptom of the shift to the new 1U server? --Allan McInnes (talk) 09:15, 3 April 2006 (PDT)
- Yup. I already fixed it. If you spot a page that still has it left over just make an edit. Deco 09:19, 3 April 2006 (PDT)
- Hmm, I see that didn't work. Odd. Deco 10:00, 3 April 2006 (PDT)
- Aha. It uses AMS markup, which requires the AMS style files included in the "tetex-extra" package. Installed this package and it's happy now. Deco 10:13, 3 April 2006 (PDT)
- I was going to report this problem, but you fixed it before I had a chance :-) --Allan McInnes (talk) 11:54, 3 April 2006 (PDT)

View source on web

The svn webserver link here seems to be broken - I get a 404 error every time I click on it. --Allan McInnes (talk) 08:26, 6 April 2006 (PDT)
- Yeah, I haven't migrated that to the new server yet. I'll take a look at it as soon as I have a chance. If there's anything in particular you need I can get it for you. Deco 11:52, 6 April 2006 (PDT)
- I didn't actually need anything. Just happened to click on the link, and wanted to make sure that you knew it was broken. --Allan McInnes (talk) 12:07, 6 April 2006 (PDT)

CTM Wiki

A bit of a heads-up for you... I brought up the possibility of using the LP engine for the CTM Wiki with some of the guys responsible for it. The syntax highlighting and literate programming capabilities you've developed seem like a good fit for what they're doing over there. The CTM Wiki folks seemed interested, so it's possible that Dominic Fox will contact you at some point (if he hasn't already) to discuss the mechanics of migrating an existing MediaWiki site to the LP engine. --Allan McInnes (talk) 12:12, 6 April 2006 (PDT)
- Cool.
Actually, the nice part is, if it's already a MediaWiki site the migration effort would mostly consist of adding suitable codeblock tags and noweb syntax - quite minimal, and it could be done in an incremental manner. Thanks for suggesting the idea. Deco 20:07, 6 April 2006 (PDT)

Possibility to view usercount / logged in users

Hi Deco. I'm not sure if this is possible in MediaWiki already, but I think a special page where you can see which users are logged in, or at least a usercount (number of users logged in) page, would be quite nice. Actually I think some sort of statistic similar to the phpBB2 forums - kind of like this:

    In total there is <number> user online :: <number> Registered, <number> Guest(s)
    Most users ever online was <number> on <date>
    Registered Users: <user1>, <user2> ... etc

Perhaps not the most useful suggestion, but I think it would be a quite cool feature ;) Waxhead 15:05, 21 April 2006 (PDT)
- This is kinda tricky, since on the web a lot of people will just close their browser window without logging off. You could guess when they're gone based on the time since they last loaded a page or whatever, but it's still a poorly-defined sort of notion. In terms of a way of finding other users who are online at the same time as you, it might be cool to have a list of users (including anonymous IPs) sorted by how recent their last edit was. Something like Special:Active editors.
- In terms of actually inspecting site activity, I'm going to set up some Webalizer traffic reports, so that'll give more useful long-term info. I hope this vaguely addresses whatever you were hoping to accomplish with this suggestion. Deco 16:55, 21 April 2006 (PDT)

Additions of PD code

I hope this kind of "code dumping" is OK. It's not very "literate", but I believe in the eventualist approach. Sorry for not creating proper categories; I had to go offline abruptly.
Fredrik 02:59, 24 April 2006 (PDT)
- No problem, Fredrik, I'm thrilled to see these additions (and in a very interesting area). As a self-proclaimed inclusionist I'm happy to obtain any content that useful articles can be derived from. I'm hoping to help add some prose after reading the necessary papers. Deco 03:03, 24 April 2006 (PDT)

Hi, when inserting the copyright/license note in the downloaded file, the shebang on the first line is moved down after the notice. Then it will obviously not work as intended, and the user has to either edit the downloaded file or specify the interpreter on the command line. Is it possible to make it so that if the first line starts with "#!", the notice is inserted after the first line? Ahy1 05:32, 6 May 2006 (PDT)
- Oops, you're correct of course. I'll make sure I fix this. Deco 17:12, 6 May 2006 (PDT)

Splitting things up

I was going to ask - is it possible to split code across several articles where code from one article has dependencies on other articles? Is there a way to reference the appropriate code so it gets pulled into the downloaded package? I was thinking about breaking things up, but I wasn't sure how to do that in practice. I have QR stuff semi-complete, and of course it relies on the matrix class, and the matrix class, in turn, relies on QR for eigenvalues (which I'm still working on). I had sort of resigned myself to one huge article and redirects on QR decomposition (Eiffel) etc., but if there's a better way to handle interdependent code I'm all ears. Leland McInnes 03:46, 12 May 2006 (PDT)
- This is a good question that comes up continually. I'm a little nervous about the idea, for a few important reasons:
- Dependencies between articles can make changes more dangerous and complicated, causing changes to one article to possibly introduce breaks in dependent articles.
- Small, self-contained articles are easier to work into many different projects without significant disruption or "baggage" coming with them, unlike many libraries.
- Articles that are completely self-contained are easier to understand from a pedagogical viewpoint.
- The current alternative, though, which is basically forking, has its own set of problems. I think some kind of mechanism for explicitly adding and tracking dependencies of an article will become necessary, but will have to remain discouraged. There would probably be a new tag for it and a new toolbox link that says "show dependencies" that gives you dependency trees (both depends-on and required-by).
- I would not advise merging articles on topics that really deserve their own separate articles for implementation reasons — although I find it hard to explain why I feel strongly about this, I think the main reason is that keeping articles at a fine granularity makes it easier to keep related articles on a single topic similar, to eliminate redundancy, and to hold an impatient reader's attention span.
- Meanwhile, until I get around to implementing something appropriate, I recommend you just duplicate some of the content. For example, in Quicksort (C) I duplicated the code for insertion sort, but without any commentary, just a link and a code dump. This keeps the focus on the main topic. Deco 04:18, 12 May 2006 (PDT)

Eiffel/noweb interaction

Well, I guess this was bound to happen sooner or later... it seems that Eiffel uses << ... >> to represent literal arrays. Needless to say, this produces something of a conflict with noweb. I've tried to adjust the syntax highlighting regexps for Eiffel so that chunk identifiers must appear on a line by themselves (which I'm hoping literal arrays are unlikely to do). This seems to have partially worked, but hasn't solved all of the problems. The noweb processing step still detects the << ... >> and tries to treat it as a chunk.
This produces errors on the download-code tab (see Real matrix (Eiffel) for an example). There's also some interference with the code used to hyperlink chunk names, since that also picks up on the literal arrays. --Allan McInnes (talk) 12:05, 13 May 2006 (PDT)
- I ran into this with the sort routines. My solution there was to split the manifest array over two lines: Eiffel ignores the line break, but noweb fails to see << and >> on the same line and treats them as literals. It's a little bit of a kludge, especially if you have small arrays, but it might be the simplest solution. See Quicksort (Eiffel)#Testing for an example. Leland McInnes 12:24, 13 May 2006 (PDT)
- I'm afraid I don't have a great solution for this - sometimes chunks can't go on a line by themselves, as in Lisp programs. I'd like to support Eiffel code dumps without modification, but it seems to be just about the only language with this problem, with the possible exception of certain rare C/C++ expressions. Is Leland's workaround sufficient for you? I'm afraid the regular expressions will only affect syntax highlighting. Deco 12:34, 13 May 2006 (PDT)
- Leland's workaround is fine with me (after all, he's pretty much the only one adding Eiffel code ATM). I had a feeling there probably wouldn't be an easy fix, but thought I should at least make you aware that it is an issue. I will modify the regexps to work with the line-split workaround. --Allan McInnes (talk) 13:22, 13 May 2006 (PDT)

Maintenance

Today seems to have been a "maintenance day" :-) --Allan McInnes (talk) 19:58, 14 May 2006 (PDT)
- So I see. :-) Adding new content is always exciting, but as on any wiki it's the maintainers who keep quality up. I appreciate your effort. I hope that I'll find a way to help automate this kind of thing in the future - I know as well as anyone that making hundreds of little changes to many pages is no fun. Deco 20:20, 14 May 2006 (PDT)
- Ahy1 deserves some of the credit too.
In fact, it was his series of edits to the articles in Category:Hello World that inspired me to fix up some of the other categories. The "maintenance day" comment was also made in reference to your deletion of the ChucK "hello world" article - looks like all of us were doing some maintenance... --Allan McInnes (talk) 20:34, 14 May 2006 (PDT)

Using LP.org technology in other wiki

Hi. I'm very impressed with your job at LP.org. I'm CTO at a software development company here in México and I'm trying to integrate literate programming techniques into our daily work. We develop mainly in Perl, Javascript, HTML and CSS, and we also have MediaWiki serving as our knowledge base. My question would then be (and I hope it doesn't sound naive): "can we download/install your technology on our server for internal documentation?" It is just maybe what we're looking for!! Best regards. Julián
- Hi there, Julián! I don't currently make archives of the source code available, but you can check them out using Subversion according to the instructions at LiteratePrograms:Downloading LiteratePrograms source code and content. If you need an archive of the source code just drop me a line and I'll put something up for you. My code is derived from the MediaWiki server - at the moment I'm missing a few recent updates but will integrate these soon. All the code is released under the GPL. Deco 12:57, 16 May 2006 (PDT)
- Just did that and we're up and running! Thanks a lot. --15:17, 18 May 2006 (PDT)

Refactoring Pages

Hi, I just discovered LiteratePrograms.org and in a flash of mad creativity wrote a second version of Hello World (Java). Unfortunately, I think the title I gave my effort, "HelloWorld Java 2", was wrong and didn't fit the naming conventions of the site. So I created "Hello World 2 (Java)" and copied the text over. Can you delete "HelloWorld Java 2" to tidy up? Thanks
- Hi there.
I suggest you just overwrite the original article - it was minimal and the only extra thing it had was the for loop, which arguably isn't even part of a normal Hello World program. In any case I don't particularly like the idea of assigning arbitrary numbers to articles, so if this doesn't work out I'll try to come up with a new name. Nevertheless I welcome you and your contribution, good stuff! Please stay and write some more. :-) Deco 06:49, 28 May 2006 (PDT)
- I've deleted "Hello World Java 2" (incidentally, in the future it might be better to "move" - using the move tab at the top of the page - any articles you think you've misnamed). After looking through the current version of "Hello World 2 (Java)" I agree with Deco that it might be better to simply move all of that content into the original Java hello world article. Failing that, a new name (rather than a number) might be nice - preferably something that establishes what makes this article different than the regular Java hello world. But really, I'd prefer to just see a merge. --Allan McInnes (talk) 16:24, 28 May 2006 (PDT)
- Allow me to also add my welcome to Deco's! --Allan McInnes (talk) 16:25, 28 May 2006 (PDT)

Thanks for the warm welcome guys. I have taken your suggestion and merged "Hello World 2 (Java)" with the original Hello World (Java). I guess that means that "Hello World 2 (Java)" can now be removed. It has lost the for loop, but I guess that's ok. Thanks also for the tip on "move", I hadn't spotted that one! Cpjobling 18:30, 28 May 2006 (PDT)
- I've gone ahead and deleted "Hello World 2 (Java)". --Allan McInnes (talk) 22:00, 31 May 2006 (PDT)

Matlab file for LP.org

I just uploaded a Matlab file which converts any Matlab program into something that can be copied-and-pasted into LP.org. I called it Ocamaweb2. It could be useful to have something more generic (not written in Matlab, but in an open-source and multi-platform language) like this.
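(As an aside, such a generic converter could be sketched in a few lines of Python. This is a hypothetical illustration only - the function name and the assumption of Matlab-style %% block markers are mine, not Ocamaweb's actual interface.)

```python
def split_blocks(source, marker="%%"):
    """Split source into blocks at lines beginning with the marker.

    Each block is a dict with "comment" lines (the text after the marker,
    plus any comment lines immediately inside the block) and "code" lines
    (everything else until the next marker).
    """
    blocks = []
    current = None
    for line in source.splitlines():
        if line.strip().startswith(marker):
            # Start a new block; text after the marker becomes its comment.
            current = {"comment": [], "code": []}
            blocks.append(current)
            text = line.strip()[len(marker):].strip()
            if text:
                current["comment"].append(text)
        elif current is not None:
            if line.lstrip().startswith("%"):
                # Plain comment line: strip the comment mark, keep as prose.
                current["comment"].append(line.lstrip().lstrip("%").strip())
            else:
                current["code"].append(line)
    return blocks
```

A real tool would read the marker strings from a per-language configuration file, as Ocamaweb does for its %% / ## variants.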
It should be parametrized for each language (according to the comment marks). I suggest keeping the comment tags the same - Lehalle
- Hi Lehalle. That's a really neat idea. It could be a great starting point for conversion of existing programs, and could even be built into the server. I'm not very familiar with Matlab though, so it might take a bit for me to figure out your implementation. Thanks again. Deco 14:43, 30 May 2006 (PDT)

I'm not specially proud of my code (made in 4 hours...) but if you could guess the comment tags I use in my files, it could be a good start for implementing a more generic version. I promised a lot of OCAMAWEB users that I would upgrade it soon (more than 2 years without a new version...), and I will try to include an 'export to LP.org' feature in it. In fact, I will include a feature called 'export to another literate programming tool'. It was my initial goal with Ocamaweb: to write a program to convert any file to its literate programming version, but finally it seems easier (for Ocamaweb users) to use only one program. It works quite simply: you include tags in your comments (for Matlab, it's %% or %<, but Ocamaweb has a configuration file, so for awk it is ## or #<) to mark the beginning of blocks of code. All comments that immediately follow the line which begins the block of code are the block comment (the comment mark is removed and they will be interpreted as text). There is also a tag for ending blocks of code (it's %>; the blocks which begin with %% automatically end at the next %%). You can find a simple example here, and on the sample page the ocamaweb2_original.m file is itself commented this way... My next step (I never performed it) would be to design a program which will automatically include Ocamaweb marks in a piece of code...

Duplicate categories

We appear to have both Category:Programming language:JavaScript and Category:Programming language:Javascript (note the lower case 's').
Since the existing JS articles seem to show up in both categories, I'm not even sure if this is a problem. But it does seem odd to have duplication like this. What do you think? --Allan McInnes (talk) 21:59, 31 May 2006 (PDT)
- Hi Allan. Thanks for the heads-up. I just converted the incorrect one to a redirect, which seems to work in this case. Deco 22:33, 31 May 2006 (PDT)

wikistats

Is it possible to have a LP.org entry in the Wikipedia statistics? Lehalle
- Hi Lehalle. I'm sorry if you felt misled, but LP is not affiliated with the Wikimedia Foundation or any of its projects, including Wikipedia, although there's quite a bit of crosslinking (all via external links only) and sharing of members. As such, I have no control over the Wikipedia statistics site and I don't think it'd be appropriate to list LP there, unless the Foundation decided to acquire it. Deco 13:41, 1 June 2006 (PDT)

No problem, it seems that I am naive about all those Wikipedia things... But is it possible to have such statistics? Lehalle 13:50, 1 June 2006 (PDT)
- Definitely - in fact, the scripts are available for free. The only catch is that the data wouldn't be quite as interesting, because we've only been up for a few months instead of years, and we only have one site instead of a zillion. But I'll take a look at it - it would provide an interesting comparison with the Wikimedia projects. Deco 13:55, 1 June 2006 (PDT)

Do you have historical statistics for the hits on articles? Because it seems that there has been an acceleration in recent weeks. Lehalle

Recent moves

I was looking at all of the "Insertion sort (language X, simple)" entries earlier today, and thinking that they should be moved. I was going to bring it up with you sometime in the next few days, but it looks like you beat me to it :-) --Allan McInnes (talk) 19:12, 2 June 2006 (PDT)
- Yeah, as you may remember, it was the first article.
I was sort of overcategorizing as a theoretical demonstration of how articles on the same topic and in the same language with different implementations could be split into different articles. On reflection I don't think this was such a useful idea, but everybody copied me. :-) Deco 19:29, 2 June 2006 (PDT)
- On the same topic, the article naming discussion in LiteratePrograms:How to write an article still suggests the "Insertion sort (language X, simple)" style. Should we change that guidance too, or do you think there still may be cases where it's warranted? --Allan McInnes (talk) 22:08, 2 June 2006 (PDT)
- I think I'll keep it. The reasons for having separate articles for separate programming languages - because a person might know some languages and not others, might only be interested in an implementation in one language, or because different languages require fundamentally different program structure - seem like they could apply to two very different implementations in the same language as well. The benefit of a single article is that context established in the first implementation can be leveraged in the second. I think it'll have to be a case-by-case thing, but in the footsteps of Wikipedia I'm going to disfavor pre-emptive disambiguation. Deco 22:15, 2 June 2006 (PDT)

A LP.org poster?

Maybe we could have a LP.org poster for academics who want to put it up in universities. It could promote LP.org to students and other academics, with a clear description of the "download code" tab. For instance, here is a French poster for Wikipedia (too long; one page would be enough for a poster). - Lehalle 09:27, 9 June 2006 (PDT)
- That's an interesting idea. I hadn't really thought about offline advertising. Unfortunately I'm also finding it difficult to imagine how to make literate programming visually exciting. It might help to emphasize the use of tables, section headers, and diagrams to complement the embedded code.
Also, as you say, the "download code" tab should be emphasized, which is our "big thing" in the same way "edit this page" was Wikipedia's big thing. Makes me wish I were a better graphic designer. :-) Deco 09:41, 9 June 2006 (PDT)

Moving/renaming categories

I cannot find a "move" link in the categories. Is moving only available for articles? Ahy1 02:49, 10 June 2006 (PDT)
- Afraid so. This is something that would require software changes, and I think it's on MediaWiki's bug list somewhere but not high priority. The workaround is to copy-paste the content and delete the old category. Deco 02:52, 10 June 2006 (PDT)

the new 'download code' button

It's a good idea to have a new 'download code' button at the bottom of the pages, but it is really ugly... Could it be possible to have something with a border and roughly the same appearance as the top tabs? - Lehalle
- I agree that it's ugly. Maybe a border would help. I'll try it. Deco 11:50, 13 June 2006 (PDT)
- maybe putting it with a border, in a small font, at the right of the page could be better... Lehalle
- or a very large bar at the bottom of the page, like:
- Hey Lehalle. I've modified the code to make this message accessible for editing at MediaWiki:Downloadcodebottom and modified it to something a little prettier. Feel free to try any other changes you want. Deco 20:14, 14 June 2006 (PDT)

Some advertising, Wikimedia

This is a very neat idea. I've just posted about it on the mailing lists for Wikibooks and Wikipedia. I've also submitted a story to Slashdot, but they have a 99.9% rejection rate. [UPD: rejected - probably better for your server ;-)] Have you considered proposing this as an official Wikimedia project? --Erik Möller 20:11, 18 June 2006 (PDT)
- sorry, I rolled back this section by mistake Lehalle 05:37, 19 June 2006 (PDT)
- Thanks, Erik! It can be difficult for me to get attention for the project without causing a negative reaction, and I appreciate your support. You should write some code yourself.
:-) I originally proposed something somewhat like LP as a Wikimedia project, then a Wikicities project, but the interested parties didn't seem to understand the crucial differences between LP and an ordinary "programming wiki", and they were not willing to consider a license as liberal as the MIT License, which is important to me. I don't mind remaining unaffiliated, at least as long as I can afford the traffic. Deco 06:50, 19 June 2006 (PDT)
- I actually agree with you on the licensing -- I put all my contributions to MediaWiki in the public domain, because I don't support the notion that freedom needs to be protected through coercion. When we launched Wikinews, I made a very strong case initially that the content has to be in the public domain, and we ended up migrating it to CC-BY (in part because of the legal complexity of the public domain). So there is a precedent in Wikimedia for projects without copyleft or the GFDL.
- I probably won't find time to contribute here, but I'd be happy to help you through the process if you decide you want to take it to Wikimedia. Feel free to contact me at User:Eloquence any time. --Erik Möller 08:22, 19 June 2006 (PDT)

FYI: Ryan524 has proposed a new Wikimedia project called WikiCode here. And imho it's looking pretty good :). greetings --Kajk 08:34, 1 January 2007 (PST)

Formatting of Code

There seem to be a few problems with the processing of code from noweb. My recollection of using noweb on plain text files is that indentation is preserved, so that if we have:

    public static void main(String[] args) {
        <<body of main method>>
    }

and a chunk:

    <<body of main method>>=
    for (int i = 0; i < 10; i++) {
        System.out.println(i);
    }

the result would be:

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println(i);
        }
    }

This doesn't seem to be the case with the wiki. For example, in the code tangled from Basic constructs (Java), the braces don't align properly and the code looks incorrect (even though it isn't).
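(An aside: the indentation-preserving expansion that noweb is expected to perform here can be sketched in Python. This is purely an illustration of that behaviour, not the wiki's actual tangling code.)

```python
import re

# A chunk reference: optional leading whitespace, <<name>>, nothing else.
CHUNK_REF = re.compile(r"^(\s*)<<(.+)>>\s*$")

def tangle(chunks, name):
    """Expand chunk `name`, prefixing every line of a referenced chunk
    with the indentation found before its <<...>> placeholder."""
    out = []
    for line in chunks[name].splitlines():
        m = CHUNK_REF.match(line)
        if m:
            indent, ref = m.group(1), m.group(2)
            out.extend(indent + inner for inner in tangle(chunks, ref))
        else:
            out.append(line)
    return out
```

Applied to the Java chunks in the example, this yields the fully indented main method, with the inner loop body pushed one level deeper than the placeholder.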
I don't know if this is a bug or not, and I note that other authors (e.g. for Basic constructs (C)) add indentation to the chunks rather than indenting the placeholders. Cpjobling 06:16, 30 June 2006 (PDT) - The reason for this is because of how noweb performs indention of chunks - it uses a mixture of spaces and tabs depending on the length, and this renders differently in different viewers. Tabs can't be eliminated altogether due to their significance in certain languages. I think the best solution is to just copy the spaces before the chunk name - I have to make a fork of noweb to do this. Deco 09:24, 30 June 2006 (PDT) InterWiki links Are interwiki links implemented on this wiki? I often find myself linking to Wikipedia but the expected inter-wiki link Wikipedia:99 Bottles of Beer doesn't work and I end up having to use 99 Bottles of Beer. I thought this would be standard in a MediaWiki wiki! Cpjobling 07:26, 1 July 2006 (PDT) - Nope. I was considering adding an interwiki syntax for linking to Wikipedia, but currently decided against it in order to avoid creating the impression that the site is affiliated with Wikipedia. It's not a standard Mediawiki thing. Good question though. Deco 19:46, 4 July 2006 (PDT) footnotes could it be possible to have footnotes as in wikipedia? Lehalle 00:47, 18 August 2006 (PDT) - I think the <ref> syntax should also work here. If not, it probably will after the next integration of MediaWiki. Deco 14:53, 29 August 2006 (PDT) LOL - I AM AN OPEN PROXY, PLEASE BLOCK ME NOW!--85.195.119.22 19:58, 2 September 2006 (PDT)LOL Question about Median Cut I've posted a question on the Median Cut Discussion page.Davecotter 11:15, 13 September 2006 (PDT) Syntax highlighting for R is it possible to have Syntax highlighting for the R language? regards - Lehalle - Hi, Lehalle, it certainly is. Unfortunately I would be ill-equipped to do so with no knowledge of its syntax. 
Could you supply a reference, ideally a comprehensive standard one, for its syntax? Thanks. Deco 12:21, 7 November 2006 (PST) - This section of the R manual would seem to be what we need (if you scroll down a bit, the "expressions" covers things like reserved words). Unfortunately, I don't have the time to mess with adding another syntax right now, but I'll try to get to it at some point in the next month or two, assuming someone else doesn't beat me to it. --Allan McInnes (talk) 21:22, 8 November 2006 (PST) May be it could help to read some of the ESS-mode of emacs. Here are some (possible interesting) pieces of codes: <<semantic-r.el>>= (defvar semantic-toplevel-r-bovine-table nil "Table for use with semantic for parsing R.") (defvar semantic-r-keyword-table (semantic-flex-make-keyword-table `( ("..." . VARARGS) ("<-" . ASSIGNMENT) ("_" . ASSIGNMENT2) ("$" . DOLLAR) ("<<-" . ENVASSIGNMENT) ("try" . TRY) ("while" . WHILE) ("library" . LIBRARY) ("source" . SOURCE) ("require" . REQUIRE) ) '( ("<-" summary "assignment of closure") ("_" summary "assignment of closure") ("try" summary "try {<stmts>} [catch(<parm>) {<stmts>} ...] [finally {<stmts>}]") ("while" summary "while (<expr>) <closure>;") ("library" summary "library(packagename)") ("source" summary "source()") ("require" summary "require(packagename)") )) "Keyword table for use with semantic for parsing R.") (defvar semantic-default-r-setup nil " test .") hope that it can help; regards - Lehalle Quine-McCluskey_algorithm_(Java) - Sure. Click Download code, click zip to download all the files, extract this archive, then run: - javac *.java - java Formula input_example.txt - You will require version 1.5 of the Java 2 Platform. Tell me if you encounter any further trouble. Deco 17:48, 15 November 2006 (PST) Account renaming Thanks! I'm standardizing my Wiki account names - 'Salimma' just sounds wrong because of the capitalization. Could I get 'VerusHircus' instead? 
hircus 09:30, 17 November 2006 (PST) - Go ahead and sign up for the new user, redirect your old user page and talk page to that user, and then I'll reassign your old edits to the new user. Deco 13:53, 17 November 2006 (PST) Installing lpwiki Hi, I hit a snag when attempting to install the Literate Programs wiki. I grabbed the latest from the SVN, set up a database, started the wiki and went through the mediawiki installation. No problems so far. Media wiki starts up and there's a download programs link on the home page which talks to noweb. Unfortunately, when I go to login as administrator (to import the latest lpwiki database dump) I hit a snag. It seems that the lpwiki code is patched to assume installation at DocumentRoot "/" rather than somewhere else. So URLs for the Login page, presumably the other wiki pages and special pages, and redirects don't work. My installation's URL prefix is /~eechris/lpwiki and this seems to have been set properly in the mediawiki's installation properties file (I forget the exact name of the PHP variable for this). However the prefix seems to be ignored by lpwiki when generating links. Thus Login or register is rendered as rather than and the server returns a 404 error. If I navigate directly to, I get the login page which works, but the redirect omits the /~eechris/lpwiki prefix and a 404 results again! Is there a simple fix for this? I had no trouble with mediawiki, which is happy to live wherever I put it, so I wonder if there's been a slight fork here. Cpjobling 12:42, 7 February 2007 (PST) - Sorry for the trouble. I use a Mediawiki customization for short URLs that requires an Apache configuration change for URL rewriting. I ought to have documented this somewhere but I didn't. I had to change a small piece of Mediawiki code somewhere to make this work and this broke your setup. I'll find the piece of code shortly and get back to you. 
Deco 15:24, 11 February 2007 (PST) - The relevant change is in Title.php: =================================================================== --- Title.php (revision 1) +++ Title.php (revision 2) @@ -778,7 +778,7 @@ if ( $query == '-' ) { $query = ''; } - $url = "{$wgScript}?title={$dbkey}&{$query}"; + $url = "/{$dbkey}?{$query}"; } if ($action == 'render') - You'll need to reverse this change to use it with standard URLs. Will mail you this info as well. 131.107.0.73 16:11, 23 February 2007 (PST) - I've made changes to LocalSettings.php and Title.php that will enable LP to work easily out of the box on Apache - just comment out the set of settings near "$wgVeryShortUrls = true;" and uncomment the set of settings near "$wgVeryShortUrls = false;". I've tested both configurations and added explanatory comments. Checking in now. Deco 11:36, 24 February 2007 (PST) Images display Hi Deco, I have a problem to display the images I upload, see for instance Image:Arrangement-min-lines.png. I found that I am not the only one in this situation. What can I do? Lehalle - Hi Lehalle. This is probably a Mediawiki problem and I'll take another look at it after integrating the latest version. Sorry for the slow response. Deco 18:01, 4 May 2007 (PDT) History of LP. - Hi, sorry for the slow response. LiteratePrograms wiki has a relatively short and uninteresting history. I'm a long-time contributor to the English Wikipedia, and in particular to the articles on sorting algorithms, which had an unfortunate habit of developing long lists of implementations in many different languages, most of which were not useful to the point of the article. I moved most of them into separate articles, which were later deleted as being out of scope. I also noticed that people had an annoying habit of "fixing" code by making small changes that caused it to not work, or not even build. 
I wanted to exploit this inexplicable energy people have for adding implementations by creating a code wiki, but without the disadvantages of fragility and constricted layout. - Originally I tried to start a wiki at Wikia for this purpose (then Wikicities), but was denied on the basis of other existing code wikis at Wikia, and also disagreements of license (they force GFDL, which I strongly believe is not appropriate for code). So, I got the idea to exploit literate programming, which I had experimented with in the past (see for example this article on my personal web site) to deal with these issues in a clever way. I modified a Mediawiki server for the purpose and put it up on my own web server at home. I advertised it in a couple relevant places on Wikipedia and some interested people posted it to LambdaTheUltimate, whence it got most of its attention. More recently there's been a push to get it made into an official Wikimedia Foundation project and discussion on the foundation-l mailing list regarding that. Still don't know what will happen. Deco 18:15, 4 May 2007 (PDT) - See [1] for recent discussion, including a comment that "LiteratePrograms already exists, support that. Making a new wiki would be a textbook example of the NIH syndrome." (!) In my opinion it would be really good if this project could be moved to WikiMedia servers. -- Coffee2theorems 14:52, 8 August 2007 (PDT) User Question Look: this is really cool, but at one level, its *heavyweight* -isn't there some lightweight way to add this functionality into base MediaWiki so I can run a small, checked-out-of-svn code tree, and have syntax highlighting and begin to move it into a literate programming web style as-I-go? -ggm - The syntax highlighting is already integrated with Mediawiki and just needs an effective caching scheme. 
I plan to integrate noweb-like functionality directly into Mediawiki as well instead of externally invoking it, which will make this much more efficient, and to cache its results as well. The test compilation step is probably too heavyweight to scale, but it could be removed. Also, my scheme was intended more for small isolated programs rather than an entire tree of connected code. Deco 18:05, 4 May 2007 (PDT) Reimplementation of noweb into Mediawiki 1.10.x Hi Deco! Derek showed me how LP could be used to translate POT/PO-files - see PO-file discussion. I see from this page that you are planning to include noweb functionality directly into LP/Mediawiki - or as a Mediawiki extension? I would like to see this functionality as soon as possible, as it would lower the barriers to set up LP and make coding and translating that much easier because you can discuss context and so on using only a browser. If you need someone to test it, please send me a note. Cheers, Haakon - I will, thanks for your interest. I have to complete the integration of Mediawiki 1.10.x first, but that's coming very soon. Deco 15:04, 21 May 2007 (PDT) Bug in Quine-McCluskey code? I've downloaded and tried your Quine-McCluskey code and it works perfectly except for one mysterious case. "a&b | c&!b | a&c" is not minimized, but should be minimized to "a&b | c&!b". Is this a bug? Naming issues I do not understand the naming stuff. For Insertion Sort, e.g., in the top of the C page one sees other implementations and there is an Io solution also, but on the Smalltalk page there is no Io entry. Some pages have a move tab at the top but not in Category:Programming languages:IO, e.g. I found that the correct name should be Io instead of IO. I created a new page but have no idea how to get rid of the old one. Help would be appreciated.
Regards Frido - I believe that the kinds of changes you are interested in making can only be made to Category pages if your account has administrative privileges (which regular users do not have). I have deleted Category:Programming language:IO for you. You should see only Category:Programming language:Io now. Hope that takes care of your concerns. --Allan McInnes (talk) 22:45, 20 June 2007 (PDT) Well yes, thanks. User:Frido) - No problem :-) --Allan McInnes (talk) 17:50, 22 June 2007 (PDT) Syntax highlighting with CSS? I just wanted to notice you of my comment on MediaWiki talk:SyntaxHighlightingStylesheet. --Ce 10:17, 29 June 2007 (PDT) Sub pages Hi, is it possible to activate sub page functionality on the main namespace? It would also be nice if code from sub pages of an article is included in the downloaded archive (Perhaps in a sub-directory). Ahy1 04:40, 4 August 2007 (PDT) - Sure, I can do that. You can include code from subpages currently using the reference mechanism, but I'd have to formally enable subpages to get the automatic backlinks. Will do this soon. Deco 20:52, 12 August 2007 (PDT) databasedumps Hi Deco, I think the site is excellent and was looking at the databasedumps - there may be a problem with your script as the dumps do not seem to have any content :( Thanks again (25 October, 2007) User list There appear to be a large number of machine-generated users in the user list, one of which has just been used by the current active spammer. -- Derek Ross | Talk 12:37, 9 November 2007 (PST) - I see - likely the result of an attack. Unfortunately there's no simple way to automatically distinguish them from legitimate users, so the best we can do is just manually block them all. Thanks a lot for your help in reverting spam and apologies for the slow response. 
:-) Deco 17:16, 27 November 2007 (PST) Problem with rbtree Hi, I have a problem with your rbtree lib. If I replace the main of "testmain.c" with this function, the program has a bug: the root (20) is not black.
int main() {
    rbtree t = rbtree_create();
    print_tree(t);
    int x = 10;
    printf("Inserting %d -> %d\n\n", x, x);
    rbtree_insert(t, (void*)x, (void*)x, compare_int);
    print_tree(t);
    x = 20;
    printf("Inserting %d -> %d\n\n", x, x);
    rbtree_insert(t, (void*)x, (void*)x, compare_int);
    print_tree(t);
    x = 10;
    printf("Deleting key %d\n\n", x);
    rbtree_delete(t, (void*)x, compare_int);
    print_tree(t);
    return 0;
}
Is it a problem of your lib or a usage problem? Thank you. Bug in "@@" interpretation It currently interprets "@@" differently between the document and its corresponding download code page. See, - : This interprets "@@" as exactly the "@@" literal. - : This interprets "@@" as the "@" literal. --Ans 06:02, 21 January 2008 (PST) good anti-spam suggestion Deco, I just got a note on my talk page suggesting a solution to our ongoing spam problem. To quote: - I notice you've been having problems with spambots. SNPedia used to have the same problem, but the recaptcha mediawiki extension fixed the problem quickly. cariaso. I had a look at it and I think it would actually work pretty well against spam. It only brings up a captcha for "dangerous" edits so it shouldn't impact ordinary editors much. I'd recommend that you consider it. -- Derek Ross | Talk 18:56, 11 March 2008 (PDT) - The Recaptcha extension requires a newer version - I need to upgrade Mediawiki, and think about rolling my changes into an extension while I'm at it. But I grabbed an older CAPTCHA extension and it'll do the trick for now. Deco 06:52, 8 March 2009 (MDT) - Excellent! That'll make a big difference. I was hoping that you'd taken down LP wiki to do something about the spam. Thanks! -- Derek Ross | Talk 12:12, 8 March 2009 (MDT) LP: status and plans for the future?
Hello, a colleague fan of nuweb just discovered your tool and showed it to me. This is a very interesting project! Actually I started to do something similar but at a much smaller scale: a Mediawiki extension to regroup several snippets of code from a wikipage and have a way to download it directly as a file. The difference is that for now you cannot reorder the snippets but you can have several files on one wiki page. The code of the extension itself is written in LP-style, cf. But all that is very modest, more a proof-of-concept than anything else. I saw you're running Mediawiki 1.6 and I saw some discussions about migrating to 1.10. I had a look at your code but I actually got the whole Mediawiki software so it's hard for me to evaluate what part is your LP code. Do you have plans to upgrade to MW 1.12? Now with the hooks provided by MW 1.12, do you think this could become a real MW extension? My guess is that it would help adoption a lot, just by publishing your LP extension on the Mediawiki extensions page. Personally I didn't try to implement the syntax highlighting myself but I'm using the GeSHi extension, so I got support for 89 languages for free :-) What are your own plans for the future? I'd be happy to help you. --User:PhilippeTeuwen 2008-06-10 - Sorry for the slow response. My extension is primarily based on external invocation of the noweb tool, preceded by an in-PHP parsing phase that extracts the code blocks and writes them to a noweb input file. It then inserts the license header using the correct comment syntax for the language and invokes some external compiler tools on the result to check for compiler warnings/errors, for some types of files. GeSHi is nice and wasn't around back then. I like the LP wiki concept, but don't think I'll have time to maintain and upgrade it by myself, in light of its stagnated contributions. I've seen some extensions on Commons of comparable complexity, so perhaps it is possible to implement as an extension.
This would've generated more interest as a Mediawiki Foundation project, but there are licensing issues (they're very pro-GFDL, which I believe is inappropriate for code). Right now I'm primarily letting it stand for historical interest, and might consider protecting it all. Deco 06:21, 25 June 2008 (EDT) - Some time ago I wrote to the author of noweb, asking if it would be difficult to port noweb to PHP. In short, he said "no". I'm very interested in a LP as a common tool for developers, documenters and translators working together. See this discussion and example together with Derek Ross. Today, I can have Mediawiki up and running in 15 minutes at most hosting providers with PHP/MySQL, and if LP only required the same, then you would have a smash hit. It is too difficult to help out on most projects, because a contributor - especially documenters and translators - needs so many tools before they can start working. With Mediawiki/LP you only need a browser, and you are free to use the full gettext functionality for languages like Arabic and Polish, which are rather different than English. Deco, I don't understand why you care about the others being pro-GFDL. Mediawiki is GNU GPL, LP has to be GNU GPL, but the content here at does not have to be GFDL even if has to be GFDL at WikiCode - that's up to you as the site owner to decide.Haakon 18:35, 21 July 2008 (EDT) - That's a valid point - I think it's a good idea to pursue the literate programming mechanism being a pure extension. I don't currently have time to pursue this myself, because I have some higher-priority work, but it would enable it to be distributed more easily and I'll definitely think about it for the future. Deco 20:02, 12 August 2008 (EDT) - I'm happy to say Philippe and I have started working together on "Fram", a Norwegian word meaning "Forward" and an expansion of Phil's extension work. 
At present we are a bit behind you in functionality, but hopefully we will close the gap and move forward soon at --User:Haakon 2008-11-23 Spam problem Please see LiteratePrograms:Public forum#Spam attack --Ce 03:52, 3 September 2008 (EDT) - I've taken action about the spam now - no anonymous editing, and all edits require CAPTCHAs, just to be safe. This should eliminate the spam - we can talk about rolling it back a bit if it's too restrictive. Deco 23:43, 7 March 2009 (MST) literateprograms base dump Hi Deco, I am just trying to install lpwiki locally and it seems that the database dumps you provide (dump_current.20YY-MM-DD.xml) are all truncated around 13Ko! Could you help, please? In fact I would need an "empty" dump: without lpwiki articles, but with mandatory pages (to start lpwiki)... Lehalle 06:02, 21 November 2008 (EST) - Hi Lehalle - I apologize for the corrupt dumps, I feel really silly thinking it was dumping nothing all this time. I'm going to make sure a proper dump is put up soon. Deco 23:45, 7 March 2009 (MST) - I have now generated a complete dump in the 7-Zip format. It takes only about 2 minutes to dump the current database, and it only occupies 10MB when 7-Zipped, so I've scheduled a weekly job to do it. Deco 03:59, 8 March 2009 (MDT) Updated version of Quick Hull See:
http://en.literateprograms.org/User_talk:Deco
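Several threads on the talk page above (the chunk-indentation question under "Formatting of Code" and the later `"@@"` interpretation report) come down to how a literate-programming tangler expands `<<chunk>>` references. The following is not the actual noweb or LiteratePrograms implementation, just a minimal illustrative sketch in Python of the two behaviors discussed: re-indenting an expanded chunk by the whitespace that precedes its reference (using spaces only, so the result renders the same in every viewer), and unescaping `@@` to a literal `@`.

```python
import re

# A reference line is optional whitespace followed by <<chunk name>>.
CHUNK_REF = re.compile(r"^(\s*)<<(.+?)>>\s*$")

def tangle(chunks, root):
    """Expand <<name>> references in chunks[root], preserving indentation.

    `chunks` maps chunk names to lists of source lines. A line consisting
    only of whitespace plus <<name>> is replaced by that chunk's lines,
    each prefixed with the reference's leading whitespace. On ordinary
    lines, the escape `@@` becomes a literal `@`.
    """
    out = []
    for line in chunks[root]:
        m = CHUNK_REF.match(line)
        if m:
            indent, name = m.groups()
            for inner in tangle(chunks, name):
                # Don't add trailing whitespace to blank lines.
                out.append(indent + inner if inner else inner)
        else:
            out.append(line.replace("@@", "@"))
    return out

chunks = {
    "main": ["public static void main(String[] args) {",
             "    <<body of main method>>",
             "}"],
    "body of main method": ["for (int i = 0; i < 10; i++) {",
                            "    System.out.println(i);",
                            "}"],
}

print("\n".join(tangle(chunks, "main")))
```

With this scheme the inner loop body ends up indented eight spaces, which is the behavior the original poster expected from noweb on plain text files.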
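On the Quine-McCluskey report above ("a&b | c&!b | a&c" should minimize to "a&b | c&!b"): whether the un-minimized output is actually wrong can be settled by brute force. The small Python check below (my own sketch, unrelated to the Java code being discussed) confirms the two formulas agree on all eight inputs, i.e. a&c is a redundant implicant, so keeping it is a minimization bug rather than a correctness bug.

```python
from itertools import product

def equivalent(f, g, names):
    """Return True if boolean functions f and g agree on every input
    assignment over the given variable names."""
    return all(f(*v) == g(*v)
               for v in product((False, True), repeat=len(names)))

# The two formulas from the bug report.
original  = lambda a, b, c: (a and b) or (c and not b) or (a and c)
minimized = lambda a, b, c: (a and b) or (c and not b)

print(equivalent(original, minimized, "abc"))  # True: a&c is redundant
```

The intuition: whenever a&c holds, either b is true (so a&b covers it) or b is false (so c&!b covers it).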
Kirill Korotaev wrote:
> This patch looks as an overkill for me.

it's a standalone patch. It can be dropped. I think there is some value to it as we already agree

> If you really care about things you describe, you can forbid unsharing
> in cases:
>
> 1.
> undo_list = tsk->sysvsem.undo_list;
> if (undo_list)
> REFUSE_UNSHARE;
> 2. vma exists with vma->vm_ops == &shm_vm_ops;
> 3. file opened with f_op == &shm_file_operations

and there are also the netlink sockets and mq_notify.

OK, so we agree that ipc namespaces cannot be unshared without extra checks. I like the firewall approach: it's not safe, don't allow it. Which is what the patch is doing: we can't unshare the ipc namespace safely, so let's just forbid it:

    if (unshare_flags & CLONE_NEWIPC)
        goto bad_unshare_out;

simple, no? :)

> I also dislike exec() operation for such sort of things since you can
> have no executable at hands due to changed fs namespace.

what do you mean? The fs namespace doesn't change because you need it to load the new process image/interpreter.

C.
http://lkml.org/lkml/2006/7/11/259
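The disagreement in the mail above is between two policies: Kirill's proposal to refuse CLONE_NEWIPC only when specific SysV IPC resources are actually in use, and the patch's blanket refusal ("it's not safe, don't allow it"). A toy Python model of the two decision rules (purely illustrative; the real check is the few lines of C quoted in the mail, in the kernel's unshare path):

```python
CLONE_NEWIPC = 0x08000000  # flag value from linux/sched.h

def blanket_refusal(flags):
    """The patch's firewall approach: CLONE_NEWIPC is never safe, forbid it."""
    return "EINVAL" if flags & CLONE_NEWIPC else "OK"

def per_resource_refusal(flags, undo_list=None,
                         has_shm_vma=False, has_shm_file=False):
    """Kirill's alternative: refuse only when SysV IPC state is in use
    (semaphore undo lists, mapped shm segments, open shm files)."""
    if flags & CLONE_NEWIPC and (undo_list or has_shm_vma or has_shm_file):
        return "EINVAL"
    return "OK"

print(blanket_refusal(CLONE_NEWIPC))       # always refused
print(per_resource_refusal(CLONE_NEWIPC))  # allowed when nothing is in use
```

The mail's point is that the per-resource list is hard to keep complete (netlink sockets, mq_notify, ...), which is why the blanket refusal is simpler to reason about.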
Talk:Kernel Rename to AUR kernels I'd suggest to rename this page to "AUR_Kernels", updating the list (and dropping the core kernel26 entry, which everybody has installed anyway). The descriptions will need to be extended, though. The AUR list as of 2008-04-18: - kernel26-ice - The Linux Kernel and modules with gentoo-sources patchset and tuxonice support - kernel26-openvz - OpenVZ enabled Linux Kernel and modules - kernel26-xen - The Linux Kernel and modules with Xen support - kernel26fbcondecor - The Linux Kernel and modules with fbcondecor support - kernel26grsecurity - The Linux Kernel and modules, with the grsecurity patchset. - kernel26rt - The Linux Kernel and modules with Ingo Molnar's Realtime Preemption patch (see Kernel_Patches_and_Patchsets#-rt). Useful (or needed) when doing audio editing (esp. with jack-related apps). - kernel26thinkpad - The Linux Kernel and modules, with IBM Thinkpad specific patches. - kernel26tp - Linux Kernel and modules for Thinkpad - kernel26tp_plus - Kernel supporting all important IBM/Lenovo Thinkpad features and providing tuxonice - kernel26tuxonice - kernel + tuxonice patchset - kernel26vanilla - Linux Kernel and modules - kernel26zen-git - A featureful Linux Kernel and modules based on linux-2.6 GIT tree Multiple wiki pages Why do we have so many different "compile kernel" pages? Obviously there are several different methods that parallel each other in places. For a new[er] user like myself, I found this all confusing. I finally did it the "traditional" way from my home directory. I've noticed no one page is dedicated to a single method. - Kernel_Compilation - Kernel_Compilation_without_ABS - Kernel_Compilation_without_ABS_for_New_Users I propose there be one page for "traditional from /home and /usr/src" and one for ABS/scripts/automation. With one titled "Kernel Compilation - Automated" and "Kernel Compilation - Traditional". The default search would go to Automated first with a link to Traditional in the intro.
New users would be considered in both articles. Pros and Cons should be enumerated for each article as well. T1nk3r3r 14:51, 11 November 2011 (EST) - Done. Main article merged to kernels with links to sub pages. Fengchao 04:38, 18 February 2012 (EST) A single kernel compilation page See ArchWiki:Reports#Restructuring of the Kernel Compilation pages (again) for details/rationale. thestinger 23:59, 16 November 2011 (EST)) Kernel Script / Wiki Page Title / Merging I don't believe this is a suitable place to compile the kernel for a *new user*. I have successfully (and unsuccessfully) compiled kernels from /home/~ without issue. In fact I would seriously consider merging this article with the other one: Adding notes for new users to consider. The alternate article is the one I used to compile *my* first kernel, and it went off without a hitch. T1nk3r3r 10:41, 8 November 2011 (EST) - You're right, and I've changed the title of the page to match the content better. - Using the existing linux PKGBUILD is without a doubt the best way to do it and is very easy. You really only have to deal with picking the configuration options. These instructions seem a lot harder to follow than both the ABS and the "traditional" way, with the only real difference being that you don't end up with a package and you have to run the commands and compilation as root. Using a release candidate instead of the stable branch that Arch packages have been tested with also seems like a bad idea for a new user. - Useful information in the second part of the article (manual instructions) should be merged into Kernel Compilation without ABS. I think the consensus is that the automated script here duplicates the work of the PKGBUILD (and doesn't make a package), so I think it should be moved to User:Ejmarkow's namespace. - Other opinions are welcome :). 
thestinger 15:30, 11 November 2011 (EST) - It is unacceptable and without etiquette to alter the main title of any wiki page without first discussing it and coming to a conclusion. You decided to totally bypass this procedure. This is the second time you have done so without prior discussion. This move is without merit and is only being done for your own personal reasons, and not for the benefit of users who wish, by choice, to use an alternative method for compiling the Linux Kernel. As you know, Any working directory besides /usr/src can be chosen to compile the kernel. Hence, your description indicating "...in /usr/src" is unjustifiable. The wiki page will be moved back to it's previous title and THEN we should discuss it. I believe this is a suitable place for a *new user*, your arguments do not present a valid reason. What I do agree with is merging the 'manual' procedures (bottom half) of the wiki page in question and leaving the top (automated script portion) intact and as is. @T1nk3r3r asked, "Other than patching a kernel, what does the script do?" Please read all comments in this discussion and the wiki page in question, it's self explanatory. The instructions are *not* hard to follow at all. Is changing the value of 5 variables difficult for you, and then running a script? Your analysis is without justification. I suggest (as I have previously) we discuss this live, and log the entire discussion. @Thestinger often sees me on the #archlinux channels, and yet, never discussed his opinion or intentions (not once) in changing anything on this wiki. I find this quite strange. There is nothing more valuable than open discussion prior to making abrupt changes in a popular and valuable wiki page. Again, Arch Linux stands for flexibility and allowing the user to customize any process as they wish to do. It's about choice, using any proven alternative method that works. You are apparently attempting to abolish such freedoms. 
Again, I await a set place and time for a live discussion, something some of you are trying to avoid. My nick on #archlinux is "Losowski". Thank you. ejmarkow 11:00, 14 November 2011 (CET) - In summary, I suggest: (1) We keep intact the script portion in a wiki page entitled, "Kernel Compilation without ABS - Alternative Method" or "Kernel Compilation without ABS - Alternative Script Method" (2) Merge the bottom second half (the manual process) with another wiki page, if you wish. Keep in mind, I'm always willing to add new features to the script, and many are planned in addition to what it already has. Some users even suggested a GUI (which I can easily program), however, the GUI seems like overkill. ejmarkow 12:04, 14 November 2011 (CET) Merge "Patches and Patchsets" and "Kernel compilation" are added. So this page becomes a more general kernel info page. Fengchao 04:48, 18 February 2012 (EST) - I don't think the merge from Kernel Compilation can be considered complete, in fact IMHO now the article should get a new title (just Kernel??) and the introduction should be adjusted. Also Kernel Compilation's subpages should be moved to the new title, and Talk:Kernel Compilation should be merged here. - However I don't know if it was better having two separate articles as it was before (in this case also remember to undo [3] and [4]). Please share your opinions. - -- Kynikos 16:50, 20 February 2012 (EST) - The kernel compilation section in this article is just a summary of these sub pages: Kernel Compilation/Arch Build System, Kernel Compilation/Traditional and Kernel Compilation/Script. I think it is too short to be one page on its own. Why should I change names of sub pages? Is it because subpage titles must have a general title article? Fengchao 21:00, 20 February 2012 (EST)
https://wiki.archlinux.org/index.php?title=Talk:Kernel&oldid=185478
How would I make a program allow the user to choose how many decimal places they would like to see printed in their float value? For example, the following code
#include <stdio.h>

int main() {
    float x;
    x = 0.67183377;
    printf("%.2f\n", x);
}
prints 0.67.
See printf() for the details. Use:
printf("%.*f\n", n, x);
where n is an int that contains the number of decimal places to be printed. Note that a float can only really hold about 6-7 decimal digits; the 8th one in the example will be largely irrelevant.
#include <stdio.h>

int main(void)
{
    float x = 0.67183377;

    for (int n = 0; n < 10; n++)
        printf("%.*f\n", n, x);
    return(0);
}
Example output:
1
0.7
0.67
0.672
0.6718
0.67183
0.671834
0.6718338
0.67183375
0.671833754
The value is converted to a double before it is passed to printf() because that happens with all variadic functions. When x is changed to a double, the output is:
1
0.7
0.67
0.672
0.6718
0.67183
0.671834
0.6718338
0.67183377
0.671833770
https://codedump.io/share/zTKUgBpXVfpJ/1/how-to-let-user-decide-how-many-decimal-places-they-want-printed-in-their-float
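The `%.*f` idiom above is C-specific. As a cross-language aside (not part of the original answer), the same user-chosen-precision idea in Python is a nested format field, and since Python floats are C doubles, it also shows why all eight digits survive there:

```python
x = 0.67183377  # a Python float is a C double, so all 8 digits are kept

def format_places(value, n):
    """Format `value` with a caller-chosen number of decimal places,
    analogous to C's printf("%.*f", n, value)."""
    return f"{value:.{n}f}"

for n in range(5):
    print(format_places(x, n))
```

As in the C example, precision 0 rounds 0.67183377 up to 1.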
Convert obj to a float. Returns a new float which is the product of float and other. Returns a new float which is the sum of float and other. Returns a new float which is the difference of float and other. Returns float, negated. Returns a new float which is the result of dividing float by other. true if flt is less than other. true if flt is less than or equal to other. Returns -1, 0, or +1 depending on whether flt is less than, equal to, or greater than numeric. This is the basis for the tests in Comparable. true if flt is greater than other. true if flt is greater than or equal to other. Returns the absolute value of flt. (-34.56).abs #=> 34.56 -34.56.abs #=> 34.56 MISSING: documentation Returns the largest integer less than or equal to flt. 1.2.floor #=> 1 2.0.floor #=> 2 (-1.2).floor #=> -2 (-2.0).floor #=> -2 Returns nil, -1, or +1 depending on whether flt is finite, -infinity, or +infinity. (0.0).infinite? #=> nil (-1.0/0.0).infinite? #=> -1 (+1.0/0.0).infinite? #=> 1 Returns true if flt is an invalid IEEE floating point number. a = -1.0 #=> -1.0 a.nan? #=> false a = 0.0/0.0 #=> NaN a.nan? #=> true Rounds flt to the nearest integer. Equivalent to: def round return (self+0.5).floor if self > 0.0 return (self-0.5).ceil if self < 0.0 return 0 end 1.5.round #=> 2 (-1.5).round #=> -2 As flt is already a float, returns self. Returns a string containing a representation of self. As well as a fixed or exponential form of the number, the call may return ``NaN’’, ``Infinity’’, and ``-Infinity’’. Returns true if flt is 0.
http://www.ruby-doc.org/core/classes/Float.html
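The `infinite?` and `nan?` semantics documented above can be mirrored with Python's `math` module. This is a cross-language sketch of the same IEEE 754 special cases, not part of the Ruby documentation:

```python
import math

def infinite(x):
    """Mirror Ruby's Float#infinite?: -1 for -infinity, 1 for +infinity,
    and None for any finite value."""
    if math.isinf(x):
        return -1 if x < 0 else 1
    return None

# Ruby's 0.0/0.0 yields NaN; in Python that raises ZeroDivisionError,
# so produce NaN as inf - inf instead.
nan = float("inf") - float("inf")

print(infinite(-1.0))           # -> None (finite)
print(infinite(float("-inf")))  # -> -1
print(infinite(float("inf")))   # -> 1
print(math.isnan(nan))          # -> True, like Float#nan?
```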
# Wanda's Tax Tips

Posts by Uigei (Wanda E Green).

## Heads Up

For those taxpayers who will be claiming the Education Credits (Form 8863), that's the American Opportunity Tax Credit or the Lifetime Learning Credit, your returns will not be processed until mid-February (you can still file, but there will be a delay in processing). Please don't confuse this with the Tuition and Fees deduction; those taxpayers will be processed normally.

## How much of my mortgage interest will I be able to deduct?

The Question: How can I guesstimate how much mortgage interest I will get back?

The Answer: Great question, Joyce. Thanks! Based on some limitations, not all interest will be 100% deductible. Please check with your tax professional for a full answer to your tax scenario.

## Money is tight for ya, ain't it?

The quickest way to get a little money is to prepare your taxes and to get your refund. Don't miss your filing deadline because you didn't have all the money at one time to prepare your taxes.

Always accurate and affordable tax preparation... always free e-file and direct deposit.

Wanda
wandaegreen@yahoo.com
818.934.1580

Visit my other blog... Los Angeles Daily Photo.

## Did you know that you have to report the tips that you receive?

Let me clarify something here. If you make over $20 in tips, you must report it to your employer; however, all tips must be reported to the IRS. Keeping a daily log is important; this shows due diligence in the event of an audit. Contact me for clarification on this topic or to ask any tax questions. Now is the time to start your daily log for tax year 2011, if you haven't already.

Always accurate and affordable tax preparation.

Wanda E Green
818.934.1580

## Open that letter!

How many of you, right now, have somewhere in your home that (those) letter(s) that you received from the IRS (Internal Revenue Service) and have been too afraid to open? Open that letter!

If you are working with past-year returns, make sure that you SIGN YOUR RETURN before you send it in. Many folks receive letters just because they haven't signed their tax return. All letters received from the IRS will have the department that it is coming from and a phone number to call.

Here is an important fact: if you have a tax liability after preparing a tax return and you are sending that return in late, you will be assessed a penalty for not sending your return in on time by tax day.

Please look at all correspondence that you receive from the IRS. Receiving a letter from the Internal Revenue Service doesn't always mean that one is in some kind of trouble or that you owe money.

I offer free telephone consultations; call me with your questions. If you need me to, I can call the IRS for you.

Always affordable and accurate tax preparation... always free e-file and direct deposit.

Wanda E Green
wandaegreen@yahoo.com
818.934.1580

## Common tax errors

The IRS has just sent out a directive (Tip 2010-52) that points out the following errors to avoid during this tax season. Any one of these errors may impede the processing of your tax return and delay the receipt of your refund.

Your tax professional should already be aware of these errors and should have incorporated steps to ward them off. Nonetheless, we are human, and sometimes errors happen. If you are doing your taxes yourself, please be aware of the following common errors:

1. Incorrect or missing Social Security numbers.
2. Incorrect or misspelled dependent's last name. You must review your tax return (if you can) before it is transmitted. If you are with your preparer at the time of processing, they should be asking you to review your return before transmitting it or concluding your appointment.
3. Filing status. Make sure the correct filing status has been used for your situation. You may fall into more than one status, but your preparer should use the filing status that results in the best benefit for you.
4. Math errors. This usually happens when a preparer or taxpayer is processing taxes by hand and not using a professional electronic system; nonetheless, a math error can and will delay your receipt of a refund if you are due one.
5. Computation errors.
6. Incorrect bank account numbers for direct deposit.
7. Forgetting to sign and date the return. Any paper return must be signed by the taxpayers. In the case of married filing jointly, both spouses must sign.
8. Incorrect Adjusted Gross Income information. When an individual (taxpayer) transmits their own return (files electronically), they need a PIN (personal identification number), and in order to sign electronically, you need to know your exact AGI (Adjusted Gross Income) from the prior tax year. This is done to ensure your identity.
9. Claiming the Making Work Pay Tax Credit.

As always... always affordable tax preparation, always free e-file and direct deposit. I look forward to being your tax professional. Your questions are always welcomed.

Blessings Always
Wanda E Green (Uigei)
wandaegreen@yahoo.com
323.898.2330

## The Earned Income Tax Credit

Qualifying children must meet the age, residency, relationship, and joint return tests.

I know this information will help many, many of you. Call for assistance if you need to. I look forward to being your tax professional. Always affordable tax preparation, always free e-file and direct deposit.

Blessings and Prosperity
Wanda E. Green (Uigei)
323.898.2330
wandaegreen@yahoo.com

## When you're going to receive a refund

Once you get through the harrowing act of filing your taxes, the reward is the refund. With the proper withholdings and the receipt of any refundable credits, one could end up with a rather large sum of money.

Just in case you didn't know: any tax liability owed from a prior tax year will be recaptured from any refund you are to receive. If there is any refund left, it will be surrendered to you.

Of course you can receive your check by snail mail; however, it takes longer: 21-28 days, sometimes 6-8 weeks (some situations call for that amount of time). With direct deposit, you could receive your refund in as fast as 10-14 days.

I look forward to being your tax professional. Always affordable tax preparation, and always free e-file and direct deposit. Call or e-mail me your tax questions.

Blessings
Wanda (Ui)
323.898.2330 Cell
818.763.7813 Business
wandaegreen@yahoo.com

## The First Time Homebuyers Credit

The Worker, Homeownership and Business Assistance Act of 2009, signed into law on Nov. 6, 2009, extends and expands the first-time homebuyer credit allowed by previous Acts.

The initial credit did not require the taxpayer to supply any documentation; hence, the fraud turned out to be tremendous!

Under the new law, an eligible taxpayer must buy, or enter into a binding contract to buy, a principal residence on or before April 30, 2010 and close on the home by June 30, 2010. For qualifying purchases in 2010, taxpayers have the option of claiming the credit on either their 2009 or 2010 return.

The First Time Homebuyers Credit has been extended, but with a few restrictions:

- Anyone applying must supply their HUD-1 Settlement Statement showing all parties' names and signatures, property address, sales price and date of purchase.
- For purchasers of mobile homes who are unable to get a settlement statement, a copy of the executed retail sales contract showing all parties' names and signatures, property address, purchase price and date of purchase.
- For a newly constructed home where a settlement statement is not available, a copy of the certificate of occupancy showing the owner's name, property address and date of the certificate.

Meaning: you must send the document(s) in with the tax return, and you must file a paper return. If approved, the credit could take from 5-8 weeks (or longer) for you to receive the refund. It has been suggested that the documents, along with your tax return, be sent via certified mail.

The credit is now valued at up to $8,000, and as long as you do not sell your home within the first 36 months of ownership, you may not have to pay it back.

Long Term Resident Credit

How wonderful is this? Did you know that if you have lived in your home, your primary residence, for any 5 consecutive years of the last eight years, you may be able to claim up to $6,500 of the credit?

For long-time residents claiming the credit, the IRS recommends attaching, in addition to the documents described above, any of the following documentation of the five-consecutive-year period:

- Form 1098, Mortgage Interest Statement, or substitute mortgage interest statements
- Property tax records, or
- Homeowner's insurance records

Here's where good recordkeeping comes into play. You must have 5 years of these documents!

You know my closing: affordable, virtual tax preparation, always free e-file and direct deposit. Call me or e-mail me your tax questions, and I will answer them on the blog. I look forward to being your tax professional!

Blessings and Wisdom and Prosperity
Wanda E Green (Uigei)
323.898.2330
818.763.7813
wandaegreen@yahoo.com

## Earned Income Tax Credit (EITC)

The earned income tax credit, aka EITC, is a tax credit for people who work and have less than $48,279 of earned income. The EITC is a refundable credit; that means anything left over after your tax liability has been eliminated, you may receive as part of your refund.

There are 7 rules that everyone must meet in order to qualify for the credit:

1. Your adjusted gross income must be less than:
   - $43,279 ($48,279 for married filing jointly) if you have three or more qualifying children;
   - $40,295 ($45,295 for married filing jointly) if you have two qualifying children;
   - $35,463 ($40,463 for married filing jointly) if you have one qualifying child;
   - $13,440 ($18,440 for married filing jointly) if you do not have a qualifying child.
2. You must have a valid Social Security number.
3. Your filing status cannot be Married Filing Separately.
4. You must be a U.S. citizen or a resident alien all year.
5. You cannot file Form 2555 or Form 2555-EZ (relating to foreign earned income).
6. Your investment income must be $3,100 or less.
7. You must have earned income.

All seven points must be met; if not, you will not qualify.

There is really nothing that you need to do except have all the proper information in regards to the children or dependents that will qualify you for the credit, i.e. Social Security numbers and full birth date information: month, day and year.

Contact me for affordable tax preparation. Virtual preparation is easy and secure. I look forward to being your tax professional. Always affordable, and always free e-file.

Peace
Wanda E Green
323.898.2330
818.763.7813
wandaegreen@yahoo.com

## Buy a bond with part of your refund

Did you know that with your tax refund you can purchase up to $5,000 in US Series I savings bonds, in multiples of $50? You must have direct deposit. Once the election for bonds has been made, the remaining amount will be deposited in the banking institutions of your choice. You can split your refund into up to three accounts. This is not a bad deal at all. The return on the bonds is 5.64%. The bonds MUST be held for 1 full year before redemption can take place. If redeemed anytime between 1 and 5 years of holding, you will lose 3 months of interest as a penalty. At this point you can only purchase the bonds in your name; at a later date you will be able to purchase bonds for others.

Call or email me with your questions. Virtual tax preparation, always at an affordable rate. Credit card and cash accepted. I look forward to being your tax professional.

Blessings
Wanda (Uigei)
818.763.7813
323.898.2330
wandaegreen@yahoo.com

## Business use of your vehicle

The question is: Can you deduct part of your car note payment for a vehicle used mostly for business? If so, how much of it?

Disclaimer: All advice is limited and general. Individual and in-depth answers come with a detailed and in-depth interview with a client.

Recordkeeping: You must keep accurate records. Period! In the event of an audit, "that's what's gone' save yo' behind", as long as you have legitimate expenses. Your journal or log does not have to be anything fancy, but it does have to show consistency. That means that you should have daily entries. For the vehicle, you must have a starting date entry for miles, Jan 1 thru Dec 31: business miles, commuter miles, other miles.

This year the standard mileage rate you will receive for operating your car for business use is 55 cents per mile. So if you are a truck driver, this could turn into a great deal of credit.

I'll make sure that you receive all the credits you are entitled to. Virtual tax preparation is a specialty of mine: secure, accurate and affordable. Call me for additional information. Deep discount goin' on over here! Keep the questions coming.

Blessings
Wanda E Green (Uigei)
818.763.7813
323.898.2330
wandaegreen@yahoo.com

## The Sole Proprietor

A sole proprietor is someone who owns an unincorporated business by himself or herself; also known as being self-employed. You use your Social Security number, or your EIN (employer identification number), to be recognized and tracked by the IRS. You may or may not have employees, but you are responsible for your own taxes.

Self-employed individuals file a Schedule C with their Form 1040; some will also receive a 1099-MISC.

Depending on the business, there may be some unique expenses, but for most there are the standards:

- Advertising
- Office expense
- Contract labor
- Rents
- Maintenance & repairs
- Utilities
- Telephone
- Business vehicle expense
- Tax & license
- Insurance
- Wages
- Supplies
- Legal & professional fees

These are just a few of the possible expenses; there are many more.

Call me to discuss your individual tax situation. I look forward to being your tax professional. Please feel free to offer my blog information to your friends and family.

Wanda E Green
818.763.7813
323.898.2330
wandaegreen@yahoo.com

## It's Tax Time

For many, this time of year, "Tax Time," brings about pangs of anxiety and fear. The fear of their tax professional telling them that they OWE is a frightening thing. It doesn't have to be.

Those who choose to claim more allowances than they have usually find themselves in a tight situation at the end of the year; these, many times, are the folks that owe.

Note: Every time you start a new job, you MUST file a W-4... so 10 jobs, 10 W-4s!

Self-employed individuals need to pay their taxes quarterly. If you suspect that your tax liability is going to be more than $1,000 after subtracting your withholdings and refundable credits, you may need to pay estimated taxes (other rules apply).

Once you have this covered, and apply what needs to be applied, you should be in a positive tax situation at the end of the year.

Many families know that filing their taxes is a way to take advantage of the many, many refundable credits given to us by the federal government. Most of us will receive a refund.

This is Wanda; many of you know me as Uigei, and I offer affordable tax preparation, all states except Washington State. Contact me for any special "situations" you may have questions about.

This has been quite a trying time for so many of us. As affordable as my rates are, I understand that many will need a... well, you know. Talk to me; we can work something out. I mean that.

Credit cards (through PayPal) and cash accepted.

Many blessings, and thank you for reading. I look forward to being your tax professional.

Wanda (Uigei)
818.763.7813
wandaegreen@yahoo.com
http://feeds.feedburner.com/WandasTaxTips
# paypal-rest-api

This package is NOT supported by PayPal. The current PayPal Node SDK does not support the newest Javascript features. This package is intended to support the most cutting edge Javascript features.

## Install

```shell
yarn add paypal-rest-api
```

All examples in this README use Typescript; however, this module can be included in CommonJS (`require`) as well. See the common.js example for how to use CommonJS with this module.

## Configuration

The most up to date configuration options can be found on the IConfigureOptions interface.

```typescript
import { PayPalRestApi } from "../src";

const paypal = new PayPalRestApi({
    client_id: "",     // Your paypal client id
    client_secret: "", // Your paypal client secret
    mode: "sandbox",   // "production" or "sandbox"
    requestOptions: {
        maxRetries: 2,    // Number of retries for 500 or network timeout. Set to 0 to disable.
        retryDelay: 5000, // Milliseconds to wait until the next retry. 5000 = 5 seconds
        // Any options from the following
    },
});
```

## Running the examples

It is STRONGLY recommended to use VSCode for the debugger and breakpoints. You must `npm install` first. Use the `yarn run example` script and pass in any file path from the examples folder.

```shell
git clone
cd paypal-rest-api
yarn install
yarn run example examples/invoice/model/create-update-send-get.ts
```

The repo provides some launch configurations and tasks for VSCode. Switch to the debugger, open the example file you want to run, select the "Launch Example File" configuration and select run.

## Making API calls

There are 3 different methods to make API calls. It is STRONGLY recommended to use the model approach. For full examples refer to the examples folder.

### Models

The modeling approach provides the most functionality. By storing the information in a model, we can validate additional information before making another api call. For example, an invoice can only be deleted if it is in a DRAFT state. Using modeling we can prevent the delete api call unless the status is DRAFT. We also do not have to keep passing around ids, since the information is stored on the model.

```typescript
import { PayPalRestApi } from "paypal-rest-api";

const paypal = new PayPalRestApi({
    client_id: "YOUR_CLIENT_ID",
    client_secret: "YOUR_CLIENT_SECRET",
    mode: "sandbox",
});

const invoice = new paypal.invoice({
    merchant_info: {
        business_name: "testy",
    },
});

// Create an invoice and send it; delete() will then throw an exception
// before sending the api call, because the invoice is no longer in DRAFT.
invoice.create()
    .then(() => invoice.send())
    .then(() => invoice.delete())
    .catch((err) => console.log(err));
```

### API functions

All api functions are available on the models. You can access them on the `api` property. Each API function takes the requestretry options as an argument, so you set the `body` property to your api payload. Each API function returns a request response. All api functions are validated via a schema. Occasionally these may fail, so please submit an issue.

```typescript
import { PayPalRestApi } from "paypal-rest-api";

const paypal = new PayPalRestApi({
    client_id: "YOUR_CLIENT_ID",
    client_secret: "YOUR_CLIENT_SECRET",
    mode: "sandbox",
});

paypal.invoice.api.create({
    body: {
        merchant_info: {
            business_name: "testy",
        },
    },
})
    .then((response) => {
        return response.body.id;
    })
    .then((id) => paypal.invoice.api.send(id))
    .catch((err) => console.log(err));
```

### Raw requests

If an API function does not exist, or you are getting a false negative on a schema validation, you can always use the `request` method to directly execute an API call to an endpoint. You must specify the path, the method, and more than likely the body.

```typescript
import { PayPalRestApi } from "../src";

const paypal = new PayPalRestApi({
    client_id: "YOUR_CLIENT_ID",
    client_secret: "YOUR_CLIENT_SECRET",
    mode: "sandbox",
});

paypal.client.request({
    body: {
        merchant_info: {
            business_name: "testy",
        },
    },
    method: "POST",
    uri: "v1/invoicing/invoices/",
})
    .then((response) => console.log(response))
    .catch((err) => console.error(err));
```
https://www.npmjs.com/package/paypal-rest-api