text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Dain Sundstrom wrote: > I would assert that most web apps are simple and do not require complex > container specific configuration, so a common plan is the more > desirable choice. In the rare places where tomcat and jetty differ, we > allow a name space designated escape to the container specific elements. > > Lets make the commons stuff (80%) simple and have an escape system for > the complex (20%) installations. > Most environments will use one container or the other, not mix and match. Therefore keeping everything in the one namespace used by that container using the concepts people are familiar with is even simpler. -- Jeremy
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200508.mbox/%3C430AAA0B.1070304@apache.org%3E
CC-MAIN-2015-11
refinedweb
102
50.77
This is my first attempt at making an App with the ArcGIS Runtime SDK for Qt and App Studio. I've been loosely following QML code examples found in the ArcGIS Runtime team's sample app and Lucas Danzinger's repo on GitHub for the San Diego Brewery Explorer. The first problem I'm having is that I cannot return the attachments of a GeodatabaseFeature using the attachmentModel property. In the screenshot below I am doing an identify against a feature service point (OID 1060) that has one attachment. The identify is working but I cannot seem to fill the combobox with the list of attachment images: Another weird thing that happens here is the Latitude and Longitude (both double) values are showing up as undefined even when valid coordinates are added to the table and using the toString() method on them while building the list of fields throws an error. However, I am more concerned about getting the attachments to show up. The entire project code is attached. But from the screenshot, you can see the console log is reporting that the selectedFeature is valid as it logs the resulting JSON, but for some reason the attachment count is coming back empty. The second problem I am having is with adding attachments. Below is my form. It lets me select a file for attachment (currently only testing on Windows) and on the onAccepted() signal, the console logs the name, size and url of attachment (this comes back undefined which makes sense) by using: console.log("GDB Attachment: " + geodatabaseAttachment.name + ", " + geodatabaseAttachment.size + ", " + geodatabaseAttachment.url); And here I am getting an error that there is a problem adding the attachment. On the onApplyAttachmentEditsStatusChanged() signal I am also not able to pull the errors as I get an empty JSON object. I am probably doing some thing wrong there too: In all the code samples I've seen, a new GeodatabaseAttachment is created like this: var geodatabaseAttachment = ArcGISRuntime.createObject("GeodatabaseAttachment"); However, for some reason I am unable to import the ArcGIS.Runtime 10.26 module and I get this error:: plugin cannot be loaded for module "ArcGIS.Runtime": Cannot load library C:/Users/calebma/Applications/ArcGIS/AppStudio/bin/qml/ArcGIS/Runtime.10.26/ArcGISRuntimePlugin.dll: The specified module could not be found. import ArcGIS.Runtime 10.26 ^ In the attached project, the "Empty.qml" is the main file and the "EditWindow.qml" is the source code for the edit form. By default when you add a new sighting it tries to grab your device position, if not you can manually add a point. For testing, you can uncomment line 82 to disable using the device location. Also, this was built using App Studio with the "Toolbar and single content area" layout template if that helps. If anyone could help me out I would greatly appreciate it! Also, if anyone has any good code examples of how to add a legend with the ability to toggle layers on/off that would be a bonus! Thanks. Caleb- I'm picturing how the app would work. Are you allowing the user to select potentially several attachments to add to a newly created feature all in the form? If so, are you storing the list of attachments in some other list, and then once your user submits the form, you try to loop through the list and add all of those attachments to the feature all at once? If so, I wonder if a better workflow would be to call addAttachment each time your user selects their attachment. Then once they click submit on the form, you only are really calling apply edits. The issue right now is that you end up in a bit of a race condition because the addattachment finishes for 1, then calls apply edits, then in the mean time, the add attachment finishes for another and tries to call apply edits, but the first apply edits isn't done yet, so it gets rejected. The simplest way around this is to change when/how you are calling addAttachment so it isn't in a loop. If you for some reason need to use this approach, you will likely need to code up some JavaScript that'll make sure to not call apply edits until all of the attachments in the list are successfully added, then call apply edits once all have been successfully added. Thanks, Luke
https://community.esri.com/thread/168489-having-trouble-viewingadding-attachments
CC-MAIN-2018-43
refinedweb
735
59.74
- A aindl + 1 comment It seems haskell is broken: main = print "Hello World" Your Output (stdout) "Hello World" Expected Output Hello World abhiranjan + 1 comment putStrLn. You should not print ". - A aindl + 1 comment thanks! Than it would make sense to fix and use putStrLn there as well, because it is confusing. abhiranjan + 1 comment - SP sakthipitchaiah + 0 comments Scala Code: def f() = println("Hello World") - Q qxzsilver + 0 comments beginner OCaml student here, I'm teaching it myself. I don't seem to have problem defining functions for variables (in the classic OO-sense), but I am having problem with stdin/stdout in OCaml. I try to define inputString as read_line, then output the string literal "Hello, world" and then the inputString, but only "Hello, world" is printing. I'm not too sure what is going on. subhashmantha + 0 comments object Solution{ def f():Any = {println("Hello World")} def main(args: Array[String]) { f() } } does not work an i do not understand why? orubel + 1 comment Why isn't Groovy an option? Groovy is a functional programming language. Why isn't it in this category? abhiranjan + 0 comments - A abhiranjan + 0 comments Hi @avaian3, Racket (formerly known as PLT Scheme) will be another/better option. Though it depends on which course are you pursuing. If you are into SICP, then absolutely go for racket. kofiowusu + 2 comments when i compile my code and i get this feedback"Sorry :( You didn't clear the samples test cases". Is there anything i can do please? dheerajHackerRank Admin + 0 comments krishnaisdinesh + 0 comments Checking answer is case sensitive , so match your anser with expected o/p har777007 + 2 comments Add F# and Lisp too. F# because its now a first class language for MS and Lisp because.....well its Lisp !! PRASHANTB1984 + 0 comments F# might be at a later stage but we'll enable lisp soon. However, we won't be able to provide lisp tempates, since none of us know Lisp. Let us know if you would you like to contribute templates? PRASHANTB1984 + 0 comments Okay. We will enable OCaml as well, this week, for all the languages. However, we don't have OCaml programmers on board here, because of which we can't create templates - one reason we didn't enable OCaml. Would you like to contribute OCaml templates to help get other users started? Sort 25 Discussions, By: Please Login in order to post a comment
https://www.hackerrank.com/challenges/fp-hello-world/forum
CC-MAIN-2018-26
refinedweb
407
65.93
Buy Cheap Corel Ventura 10 370_VMware_Tools_03.qxd101206642 PMPage value of a the device resource variable. If you do Powers off and at namespace, shortcuts can easily be value to identify hosts and VirtualCenter. It has an by an XML described in a the central focus VMware VI Web contain the VirtualMachineGroup, discussed can be the type associated the VI SDK as well the script with. This interface is 52Table 3.8 continuedVMwareVmPerlVM redo log to metrics and coun con guration compile again. server is_connectedUsed get_last_errorReturns an array reference to the. Both host and the Buy Cheap Corel Ventura 10 of Methods Method Description vm get_capabilitiesReturns the permission of the on, such as for the ventura by the passed. 10 corel buy ventura cheap due to 52Table 3.8 continuedVMwareVmPerlVM The recommended approach ifications when running virtual machine registered not valid methods virtual machine. vm set_runas_userSets the nd it buy browse to the in a hierarchical run under the that only one the SDK 1.1, your project is. This method applies assemblies, or arrays, a particular ESX. Prepare the VI see httpsupport.microsoft.comdefault.aspxscidkben us326790. Table 3.12 Data example, we will the Virtual Infrastructure and composition of the VI that only one the Web service, VMware communication with with the object. The sample application included with the a pending question if you will, that contain possible answers to virtual machine. ventura the Output has overhauled the browse to the hierarchy of focus to managing cheap groups le to buy located.To reduce managing individual resources on VI SDK 2.0 are more of new infrastructure ject, if you have created migration, corel provisioning mean that the appro priately for the of what VMware host they are maximize their cheap You must pass a reference Web service, we to target for Web Buy Cheap Corel Ventura 10 source. To disable the use of ACLs, Method Description question to target for. download lynda.com - photoshop cs4 for the web Due to the 5 Portlet Catalog and Lotus focus on the portal resource for this request to portal project to. The results and driven be supported by to scale and test, an integration, center for full at a calculated disaster Buy Microsoft Windows Web Server 2008 R2 (64 bit) (en) performance. In the case virtual portal 10 corel cheap ventura buy of test data the CICS, CIS, 10 SSO for portal, a phased framework that only must. CA HyPerformix The HyPerformix solution layer but deliver business insight deploy artifacts belonging. buy oem capture one pro 6 mac exploring Stream.getf4f. buy 10 corel cheap ventura false. FMLE true do buy 0. this is optional.. s.liveEvent Client, streamObj. Menú Usuario discount - lynda.com - excel 2013 essential training Our approach does applications that ventura to MapReduce and storage can simply many types cheap is limited. We are also hard ware resources of the le with a framework will be allowing the application that requires the spawning of an categorization of cloud. This analysis is endeavored to provide ability to acquire ing SAGA HPC resource generally Science Institute, Edinburgh the distribution buy layer, and Buy Cheap Corel Ventura 10 10 controls the endeavored to Buy Cheap Corel Ventura 10 Use for network software as a HPC resource generally ensemble members remains is9 Application Level does data management. In contrast to nor Packt some of the benets Azure operates distributors will be fabric controller, e.g., any damages need to manage Azure beneting from runtime requirements226 S.
http://www.musicogomis.es/buy-cheap-corel-ventura-10/
CC-MAIN-2015-06
refinedweb
583
55.03
Templetor: The web.py templating system Other languages : français | ... Summary - Introduction - Using the template system - Syntax - Other Statements - Builtins and globals - Security - Upgrading from web.py 0.2 templates Introduction The web.py template language, called Templetor is designed to bring the power of Python to templates. Instead of inventing new syntax for templates, it re-uses python syntax. If you know the Python programming language, you will be at home. Templetor intentionally limits variable access within a template. A user has access to the variables passed into the template and some builtin python functions. This allows untrusted users to write templates, and not worry about them causing harm to the running system. You can, of course, increase the global variables available, but more on this later. Here is a simple template: $def with (name) Hello $name! The first line says that the template is defined with one argument called name. $name in the second line will be replaced with the value of name when the template is rendered. Using the template system The most common way of rendering templates is this: render = web.template.render('templates') print render.hello('world') The render function takes the template root as argument. render.hello(..) calls the template hello.html with the given arguments. In fact, it looks for the files matching hello.* in the template root and picks the first matching file. However you can also create template from a file using frender. hello = web.template.frender('templates/hello.html') print hello('world') And if you have the template as a string: template = "$def with (name)\nHello $name" hello = web.template.Template(template) print hello('world') Syntax Expression Substitution Special character $ is used to specify python expressions. Expression can be enclosed in () or {} for explicit grouping. Look, a $string. Hark, an ${arbitrary + expression}. Gawk, a $dictionary[key].function('argument'). Cool, a $(limit)ing. Assignments Sometimes you may want to define new variables and re-assign some variables. $ bug = get_bug(id) <h1>$bug.title</h1> <div> $bug.description <div> Notice the space after $ in the assignment. It is required to differentiate assignment from expression substitution. Filtering By default, Templetor uses web.websafe filter to do HTML-encoding. >>> render.hello("1 < 2") "Hello 1 < 2" To turnoff filter use : after $. For example: The following will not be html escaped. $:form.render() Newline suppression Newline can be suppressed by adding \ character at the end of line. If you put a backslash \ at the end of a line \ (like these) \ then there will be no newline. Escaping $ Use $$ to get $ in the output. Can you lend me $$50? $# is used as comment indicator. Anything starting with $# till end of the line is ignored. $# this is a comment Hello $name.title()! $# display the name in title case Control Structures The template system supports for, while, if, elif and else. Just like in python, body of the statement is indented. $for i in range(10): I like $i $for i in range(10): I like $i $while a: hello $a.pop() $if times > max: Stop! In the name of love. $else: Keep on, you can do it. The for loop sets a number of variables available within the loop: loop.index: the iteration of the loop (1-indexed) loop.index0: the iteration of the loop (0-indexed) loop.first: True if first iteration loop.last: True if last iteration loop.odd: True if an odd iteration loop.even: True if an even iteration loop.parity: "odd" or "even" depending on which is true loop.parent: the loop above this in nested loops Sometimes these can be very handy. <table> $for c in ["a", "b", "c", "d"]: <tr class="$loop.parity"> <td>$loop.index</td> <td>$c</td> </tr> </table> Other Statements def You can define a new template function using $def. Keyword arguments are also supported. $def say_hello(name='world'): Hello $name! $say_hello('web.py') $say_hello() Another example: $def tr(values): <tr> $for v in values: <td>$v</td> </tr> $def table(rows): <table> $for row in rows: $:row </table> $ data = [['a', 'b', 'c'], [1, 2, 3], [2, 4, 6], [3, 6, 9] ] $:table([tr(d) for d in data]) code Arbitrary python code can be written using the code block. $code: x = "you can write any python code here" y = x.title() z = len(x + y) def limit(s, width=10): """limits a string to the given width""" if len(s) >= width: return s[:width] + "..." else: return s And we are back to template. The variables defined in the code block can be used here. For example, $limit(x) var The var block can be used to define additional properties in the template result. $def with (title, body) $var title: $title $var content_type: text/html <div id="body"> $body </div> The result of the above template can be used as follows: >>> out = render.page('hello', 'hello world') >>> out.title u'hello' >>> out.content_type u'text/html' >>> str(out) '\n\n<div>\nhello world\n</div>\n' builtins and globals Just like any Python function, template can also access builtins along with its arguments and local variables. Some common builtin functions like range, min, max etc. and boolean values True and False are made available to all the templates. Apart from the builtins, application specific globals can be specified to make them accessible in all the templates. Globals can be specified as an argument to web.template.render. import web import markdown globals = {'markdown': markdown.markdown} render = web.template.render('templates', globals=globals) Builtins that are exposed in the templates can be controlled too. # disable all builtins render = web.template.render('templates', builtins={}) Security One of the design goals of Templetor is to allow untrusted users to write templates. To make the template execution safe, the following are not allowed in the templates. - Unsafe statements like import, execetc. - Accessing attributes starting with _ - Unsafe builtins like open, getattr, setattretc. SecurityException is raised if your template uses any of these. Upgrading from web.py 0.2 templates The new implementation is mostly compatible with the earlier implementation. However some cases might not work because of the following reasons. - Template output is always storage like TemplateResultobject, however converting it to unicodeor strgives the result as unicode/string. Reassigning a global value will not work. The following will not work if x is a global. $ x = x + 1 The following are still supported but not preferred. - Using \$for escaping dollar. Use $$instead. - Modifying web.template.Template.globals. pass globals to web.template.renderas argument instead.
http://webpy.org/docs/0.3/templetor
CC-MAIN-2017-17
refinedweb
1,084
61.33
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */ The SSL Stream is an output stream which knows how to write to a SSL socket layer, for example provided by the OpenSSL Library. It is a libwww transport and may be registered using the Transport Manager. The application can initialize this stream together with the HTSSLReader stream, for example. This module is implemented by HTSSLWriter.c, and it is a part of the W3C Sample Code Library. The module is contributed by Olga Antropova #ifndef HTSSLWRITE_H #define HTSSLWRITE_H #include "HTIOStream.h" extern HTOutput_new HTSSLWriter_new; extern BOOL HTSSLWriter_set (HTOutputStream * me, HTNet * net, HTChannel * ch, void * param, int mode); #endif /* HTSSLWRITE_H */
http://www.w3.org/Library/src/SSL/HTSSLWriter.html
CC-MAIN-2015-18
refinedweb
115
67.15
eventually, not sure when though as i am in no rush as there are a few updates planned first. And if you message me i can organize a free steam key. PixelForest Creator of Recent community posts it saves a screenshot based on how many grid sizes you have, check in your map folder for it. If you want something animated, you need to use another app such as Fraps, OBS or Shadowplay Yeah sorry, i dont know how to fix that just yet, as its just doing its job of protecting you from malicious code. Ill have a chat to someone i know that is more clued up on this to see what we can do. Thanks for the suggestions. Im a long time Unity and unreal user, and have to say that there are some really powerful tools that already exists, such as Gaia, Vegetation Studio and map magic to name just a few. FlowScape although easy to use is a long way off from competing with these tools as they also have optimisation a big part of them. but who knows what the future holds. maybe it will happen with enough interest Hey there Yeah water has a fresnel on the reflection which makes it less reflective when looking down it. i have planned to add a fresnel slider so you can cheat a bit and get what you need. As for the fog, thats a tough one, as the way real time engines do reflection, it may not be able to be affected by fog, ill look into it. import and other stuff coming soon. thanks for being a part of it :) Sorry you cant get in, but the discord is set up and run by our users, i dont really use it much and just copy paste the links they give me. Add me on Discord Pixel Forest#4545 and we will either get it going or give you a refund cheers I have uploaded another v 1.21 hopefully this works. Can you please try? The issue is that Windows doesn’t work with UNIX permissions, so it messes them up hence why you had to or still have to do the chmod trick itch will show anything between 1 and 2GB as 1GB on their store If you are having trouble downloading, use the itch app as that has resumable downloads Are you running the mac windows or linux version? could you also post your system specs please? sorry about that, but it's out of my hands as I have no control over how itch does it's downloads If you have any questions, or just want to chat to other users come and join. It's a pretty friendly place. if you go to terrain, there's a texture button top left. Then inside that is terrain size hmm not sure, i added it as a request from a few users, but i am not able to do much with the plugin myself except initialize it, as its released by Nvidia saving should be almost instantaneous now, but if you liked that chime i can add it back in Sadly no, too many things are interconnected for painting on a flat landscape that making it for a globe would be massive amounts of work Ahhh thats really sad to hear you guys are having trouble. Have you tried the solution from this post? it has to open several gigabytes of textures, could you make sure you leave it open for long enough? or try lower quality settings maybe? let me know how you get on >>>Objects no longer snap to world center before placing. This one is a huge problem for me. =( Can you elaborate on that? Hi, try the scroll wheel if you dont see the blue brush sphere Ive also updated the help section, have a look in there
https://itch.io/profile/pixelforest
CC-MAIN-2019-22
refinedweb
650
75.13
perlmeditation BUU This is a mostly hypothetical question, so treat it as such. Just recently I was pondering the idea of writing my own IRC-bot type module, that would of course, provide a framework to connect to irc and the usual stuff people implement in modules like these. <br><br>Then I thought, "Perhaps I might actually write something useful and other people might want to use it.. so I might want to upload it to CPAN!". Of course, if I want to upload it to CPAN I need to have a unique descriptive name. <br><br>Obviously there already exists a moduled named Net::IRC, so I can't call it that, nut it does the same thing (or similar things) as Net::IRC, so what *other* name could I give it? I suppose I could go with Net::IRC2 or something, but that just feels wrong. <br><br>So I came up with the idea of calling it BUU::IRC or something similar. The idea of course is to be descriptive and unique but I thought this might not be a good idea, so I extend my question to you people. Modules with a person's "name" in the title, good or bad?<br><br> I will note in passing that there are lots of modules that have a proper name in the title, but mostly these are named after systems of some sort, most notably POE has it's own namespace.
https://www.perlmonks.org/?node_id=391394;displaytype=xml
CC-MAIN-2020-50
refinedweb
245
75.74
3D Models and Hotspots Hi guys. I'm new to QML and having a few issues that I hope you guys can help me with. I have a 3D model in a .dae file that I am trying to display. I can display it in Qt 5.4 by: import QtQuick 2.0 import Qt3D 2.0 Rectangle { width: 1140 height: 700 color: "white" Viewport{ id: viewport anchors.fill: parent camera: Camera { eye: Qt.vector3d (400.0, 100.0, -400.0) fieldOfView: 90 } Item3D{ id: satellite mesh: Mesh{source: "ow.dae"} position: Qt.vector3d(0, -0, 0) } light: Light{ position: Qt.vector3d(1000, -1000, -1000) }}} I have tried to display the same model in Qt 5.7 but all my attempts have been unsuccessful. Would any of you guys happen to know how to do it? Also, I would like to put hotspots on the model, that display information about a particular point on the model. However the only text I can find online, mentioning hotspots, is outdated. Is it possible to put hotspots on a model and if so, how can it be done in either 5.4 or 5.7? Thanks for your help.
https://forum.qt.io/topic/71676/3d-models-and-hotspots
CC-MAIN-2017-34
refinedweb
196
87.82
If your environment is virtualized in Microsoft Azure, it's quite easy to install Avast for Business on your virtual machines. Here's how it works from a high level. (You will need Avast for Business Endpoint Security or Premium Endpoint Security to install on servers, whether they are physical or virtual.) This tutorial assumes that your virtual machines are housed in Microsoft Azure and have access to the Internet. Note the unique machine names that I've used for each of the VM's in this example. In Azure virtual machines that were created after 9/18/2014, according to Microsoft, they are assigned a UUID (unique identifier) that can be read using platform BIOS commands. Avast for Business utilizes this unique identifier when installed on Azure virtual machines in order to show the VM's as separate devices in the console, and it works beautifully. To verify that each of your VM's has a different UUID, run this command in elevated PowerShell: $computerSystemProduct = Get-WmiObject -class Win32_ComputerSystemProduct -namespace root\CIMV2 The output of this command will include the unique identifier. It's easy to add new machines to Avast for Business, and you'll do it the same way as if the machines were on-premise. The main difference is that in this case, you may want to select "Full Installer" to save your VM from having to download it on its own and may make things easier/cheaper. Using your RDP application of choice, copy and paste the installer to each virtual machine and run the installer you downloaded in the previous step. IMPORTANT: There are many other ways to install Avast for Business, including deploying through GPO and silent/scripted installs. For purposes of this demo, we're doing it the simple way to show how Azure handles multiple VM's in our cloud dashboard. Click here for more info on how to deploy through GPO: Once the installer has completed, within a very short time (usually just a few minutes or even less) the machines will be visible in the cloud dashboard. An admin will have to "activate" the licenses for these devices and upon approval, they'll be identified as active and safe. At this point, Avast for Business is now set up and protecting your Azure VM's, but there are some optional settings you can configure as well. The Avast for Business cloud console allows you to group devices however you wish. In this case, I've chosen to create a "VM's" group, as I also have 3 machines on-premise. To do this, click the Group button, then drag the machines to the group you have chosen. You can also apply a separate settings template for this group:
https://community.spiceworks.com/how_to/132150-install-avast-for-business-client-on-microsoft-azure-vm-s
CC-MAIN-2017-13
refinedweb
459
57.71
Data package for Skyfield Project description Data files for Skyfield Rationale Skyfield is a Python library for astronomical computations. It depends on various data files to accurately compute moon phases, planet positions, etc. Several issues are raised by these data files: - If they're not found in the path of the Loader, they're downloaded at runtime. Depending on the archive you're requesting, some files might be very large, causing a long delay (directly related to your network bandwidth). In the case of a web server app, you'd cause a timeout on client's end. - They come mainly from 3 sources: the USNO (US Navy), Paris (Meudon) Observatory, and NASA JPL. If one of them is temporarily unavailable, you couldn't perform any computation. - In some countries, or behind some filtering proxies, the USNO is considered as a military website, and thus is blocked. - These files have an expiration date (in a more or less distant future). As a consequence, even if the files are already downloaded in the right path, at each runtime you could possibly have to download one or more files before making any computation using them. Currently known expiration dates Goal for this project - Providing at least the most common of these assets in Python Package. - Make regular releases to refresh the files before they expire. - Provide a warning / logging mechanism when the files are about to expire (or when they are outdated) to still allow you to compute things with the loaded assets, but being informed you need to upgrade. This way, you could install or upgrade this data package via pip. Once all the files are on your disk space, you can instantiate your skyfield loader pointing at their path, without having to worry about anything. Usage Install the packages using: pip install skyfield skyfield-data To create a custom Skyfield loader, use the following code: from skyfield_data import get_skyfield_data_path from skyfield.api import Loader load = Loader(get_skyfield_data_path()) planets = load('de421.bsp') # this command won't download this file ts = load.timescale() # this command won't download the deltat + Leap Second files If you want to make sure that the data files would never be downloaded, you can also use the expire option like this: load = Loader(get_skyfield_data_path(), expire=False) Whenever a file contained in the catalog has expired, you're going to receive a warning when loading the skyfield-data path: >>> from skyfield_data import get_skyfield_data_path >>> from skyfield.api import Loader >>> load = Loader(get_skyfield_data_path()) /home/[redacted]/skyfield_data/expirations.py:25: RuntimeWarning: The file de421.bsp has expired. Please upgrade your version of `skyfield-data` or expect computation errors RuntimeWarning By default, the loading isn't blocked, but it's strongly recommended to upgrade to a more recent version, to make sure you're not going to make wrong astronomical computations. Developers We're providing a Makefile with basic targets to play around with the toolkit. use make help to get more details. In order to be able to run the download.py script, we recommend to run it from a virtualenv where you'd have installed the "dev" dependencies, using: make install-dev Note: This project is, and should be compatible with Python 2.6/2.7 and Python 3.5+, to be kept the same Python compatiblity that skyfield has. Data files de421.bspis provided by the Jet Propulsion Laboratory, deltat.dataand deltat.predsare provided by the United States Naval Observatory, Leap_Second.datis provided by the International Earth Rotation and Reference Systems Service. Software This Python Package code is published under the terms of the MIT license. See the COPYING file for more details. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/skyfield-data/
CC-MAIN-2020-10
refinedweb
630
55.34
Modify a query string on a url. The comments in the code should explain sufficiently. String_to_dict, and string_to_list are also useful for templatetags that require variable, thanks - this totally met my need. A few notes for others implementing this: You need to add "django.core.context_processors.request", to your TEMPLATE_CONTEXT_PROCESSORSsetting. You need to add this import to your lib/utils.py (or equivalent): And it is also worth mentioning that you need to create _response.html # This is terrific. I don't use GET parameters all that often, but for filtering/sorting content, this snippet really proved helpful. Thanks so much. # A great piece of code but it doesn't work with MultipleChoiceField ! The QueryDict object cannot deal with a list values, you have the use the lists function. A possible implementation might be (for the get_query_string method): ... def get_query_string(p, new_params=None, remove=None): # Have just posted this on JHsaunders' snippet, but it seems to apply here too: as it stands, this is vulnerable to a cross-site scripting attack because the URL variables previously provided by the user are passed through mark_safe with no escaping, apart from replacing space characters. This can be fixed by adding 'import urllib' to lib/utils.py, and changing the last line of get_query_string to: (Also, to be completely correct even when autoescaping is turned off, I suspect it should be using a plain '&' to delimit the arguments and passing it back as an unsafe string for the template layer to escape - but I'll leave that for someone else to confirm...) # Please login first before commenting.
https://djangosnippets.org/snippets/826/
CC-MAIN-2022-27
refinedweb
264
56.66
I'm getting started with C programming. I currently have a large file that contains a lot of functions. I would like to move these functions to a separate file so that the code is easier to read. However, I can't seem to figure out how to properly include/compile and can't find an example in any online tutorials that I've found. Here's a simplified example: #include <stdlib.h> #include <stdio.h> void func1(void) { printf("Function 1!n"); } void func2(void) { printf("Function 2!n"); } int main(void) { func1(); func2(); return 0; } How do you move C functions into a separate file? FYI: I'm using gcc. Update: These answers are very helpful, thank you. Now it seems that my simplified example is not good enough because I realized the reason my program failed to compile is because I'm using a global variable in my functions. #include <stdlib.h> #include <stdio.h> int counter = 0; void func1(void) { printf("Function 1!n"); counter++; } int main(void) { func1(); return 0; } Moving these functions to an external file doesn't work because they need to reference this global variable: #include <stdlib.h> #include <stdio.h> #include "functions.c" int counter = 0; int main(void) { func1(); counter = 100; return 0; } How can I get around this issue? The most common way is to place your function prototypes in a header file and your function implementations in a source file. For example: func1.h #ifndef MY_FUNC1_H #define MY_FUNC1_H #include <stdio.h> // declares a variable extern int var1; // declares a function void func1(void); #endif func1.c #include "func1.h" // defines a variable int var1 = 512; // defines a function void func1(void) { printf("Function 1!n"); } func2.h: #ifndef MY_FUNC2_H #define MY_FUNC2_H #include <stdio.h> void func2(void); #endif func2.c: #include "func1.h" // included in order to use var1 #include "func2.h" void func2(void) { printf("Function 2 with var1 == %in", var1); } main.c: #include <stdio.h> #include "func1.h" #include "func2.h" int main(void) { var1 += 512; func1(); func2(); return 0; } You would then compile using the following: gcc -c -o func1.o func1.c gcc -c -o func2.o func2.c gcc -c -o main.o main.c gcc -o myprog main.o func1.o func2.o ./myprog I only placed one function in each source/header pair for illustration. You could create just one header which includes the prototypes for all of the source files, or you could create multiple header files for each source file. The key is that any source file which will call the function, needs to include a header file which includes the function's prototype. As a general rule, you only want a header file included once, this is the purpose of the #ifndef #define #endif macros in the header files.
https://www.dowemo.com/article/70233/how-to-move-a-to-a-separate-file
CC-MAIN-2018-30
refinedweb
471
77.94
I started a course of study under a University in the Manchester area in 2009 after a long time in Health and Social Care and hadn't done any real programming since the mid 1990s, and after learning some programming principles, our first assignment involved computer graphics with Java's APIs. My first assignment was to draw a picture, and some examples from the previous year had been demonstrated to us to give us something to aim at. I noticed that all of these that had animations that flickered like a Sinclair ZX80. When I suggested to the lecturer that some sort of buffering should be used to stop this, he said words to the effect that buffering is a little complicated and perhaps beyond what was expected. This is not true: buffering is really quite simple to implement, and so I thought that I'd provide a simple example for you. Firstly, we want to draw a 5-point star, like this: Java Code:import java.awt.*; import java.applet.Applet; public class star extends Applet { public void paint(Graphics g) { g.drawLine( 0, 28, 30, 28); g.drawLine(30, 28, 39, 0); g.drawLine(39, 0, 50, 28); g.drawLine(50, 28, 79, 28); g.drawLine(79, 28, 55, 46); g.drawLine(55, 46, 64, 73); g.drawLine(64, 73, 40, 57); g.drawLine(39, 57, 15, 73); g.drawLine(15, 73, 23, 45); g.drawLine(23, 45, 0, 28); } } Okay, so we can draw a static image. Great, so how do we make it move? Well, you can go into the code and alter each point, which draws a line from x0 and y0 (the first two numbers) to x1 and y1. But if we can make things dynamic, then all the better. My personal feeling is that it's usually better to separate out the data from the code; as soon as this is data then we can manipulate it more easily. So we should now put these points into an array of type integer at the top of the code, and then loop through them. You'll get something like this: Java Code:import java.awt.*; import java.applet.Applet; public class star extends Applet { // }; // This will be used to loop through our array above: private static int index = 0; public void paint(Graphics g) { while(star[index]>=0) { int x0 = (star[index+0]); int y0 = (star[index+1]); int x1 = (star[index+2]); int y1 = (star[index+3]); g.drawLine( x0, y0, x1, y1 ); index += 4; } index = 0; } } Now all of those repeating lines of code from the first example have disappeared, and it makes the star easier to move around the canvas, at least in theory. Actually, we want to draw the star in its own method, but we need to understand some more stuff about an Applet: basically, there are several methods that need to be included in the code, and over-ride each of them. This is to do with the Applet life-cycle, so you should read through this documentation, and other good examples elsewhere. Have a look through now... Have you read it? Good, I'll continue. Okay, so we're going to need another import and also to implement the applet as runnable. Therefore, we have some methods to over-ride. So, let's look at the whole example, and read through the comments as hopefully they're providing some explanations to you as well. Please feel free to ask any questions. Java Code:import java.awt.*; import javax.swing.*; import java.awt.Graphics; /** * This will draw a five-point star in an Applet and then * bounce it around the canvas. * * @author Shaun B * @version 2012-10-31 */ public class starAnimation extends JApplet implements Runnable { // }; // Starting position of star: private int xAxis = 0; private int yAxis = 0; // Sets the height and width of the image: private int widthOfStar = 80; private int heightOfStar = 73; // Sets the direction of the animation // positive to move right/down and negative // to move left/up: private int xDirection = 1; private int yDirection = 1; // This will be used to get the width and height of the Applet private int width=0; private int height=0; // This will be used to index through the array above: private int index=0; // Read up about back buffering, as it's important ;-) private Image backBuffer = null; private Graphics backg = null; // This will be our thread, you need to know about threads too: private Thread runner = null; /** * Called by the browser or applet viewer to inform this JApplet that it * has been loaded into the system. It is always called before the first * time that the start method is called. */ @Override public void init() { //); // Gets the current width and height and creates a back buffer // to that height: width = getSize().width; height = getSize().height; backBuffer = createImage(width, height); // Creates instance of the back buffer: backg = backBuffer.getGraphics(); // Sets default behaviour as focusable: setFocusable(true); setVisible(true); } public void animate(int x, int y) { // Calls drawImage method: drawImage(xAxis, yAxis, star); } public void drawImage(int x, int y, int img[]) { // Sets the default foreground colour: backg.setColor(Color.black); // This will step through the array points to draw // the star object. There is probably also a fillPolygon // or drawPolygon method that could also be used:, incase the JApplet is reloaded or something: index = 0; } public void clearBackBuffer() { // This will clear the canvas so that there is no trail left by the star // by setting the default background colour and then filling it to the // width and height of the canvas: backg.setColor(Color.white); backg.fillRect(0, 0, width, height); } /** * Called by the browser or applet viewer to inform this JApplet that it * should start its execution. It is called after the init method and * each time the JApplet is revisited in a Web page. */ @Override public void start() { // Sets up the thread: if(runner == null) { runner = new Thread(this); runner.start(); } // Call to parent (not needed): // super.start(); } /** *. */ @Override public void stop() { // Call to parent: super.stop(); } @Override public void run() { // Checks if this thread has been set to runnable in the start method: Thread thisThread = Thread.currentThread(); while (runner == thisThread) { // Calls our method to draw the star: animate(xAxis, yAxis); try { // This is the time that it will pause in milliseconds // 1000 = 1 second: Thread.sleep(20); } catch (InterruptedException e) { } repaint(); // This will move the x and y co-ordinates of our object: xAxis += xDirection; yAxis += yDirection; // This will check the boundries of the current applet canvas: if(xAxis >= (width-widthOfStar)) { xDirection =-1; } if(xAxis <=0) { xDirection = 1; } if(yAxis >= (height-heightOfStar)) { yDirection =-1; } if(yAxis <=0) { yDirection = 1; } // Clears the canvas, so there is no 'trail' // left by the moving star: clearBackBuffer(); } } // Main paint method (called on repaint(); I think): @Override public void paint(Graphics g) { // Calls to the update method: update(g); } public void update(Graphics g) { // Gets the backBuffer and draws it to the canvas: g.drawImage(backBuffer,0,0,this); // the sync toolkit is used for animations as it stops flicker: getToolkit().sync(); } /** * Called by the browser or applet viewer to inform this JApplet that it * is being reclaimed and that it should destroy any resources that it * has allocated. The stop method will always be called before destroy. */ @Override public void destroy() { // Calls the garbage collector before calling parent: runner = null; System.gc(); super.destroy(); } }
http://forums.devshed.com/java-help/933219-japplet-example-buffer-last-post.html
CC-MAIN-2017-39
refinedweb
1,232
68.1
The trigonometric and hyperbolic functions provided in the standard mathematical library are listed in Table. Except for the atan2 function, which takes two arguments, all other functions take a single argument and each function returns a single value. Note that the function parameters and return values are of type double. The sin, cos and tan functions are used for the evaluation of sine, cosine and tangent, respectively, of a given angle, which must be specified in radians. The asin, acos and atan functions return arc sine (i. e., sin-1), arc cosine (i. e., cos-1) and arc tangent (i.e., tan-1), respectively. The function call atan2 (y, x) returns the arc tangent of y/x. The angle returned by these functions is in radians. Note that the argument of the asin and acos functions must be in the range -1.0 to 1.0, both inclusive; otherwise, they give domain error. Similarly, the argument of atan function must be in the range -π/2 to π/2. The functions sinh, cosh and tanh are used to calculate the hyperbolic sine, hyperbolic cosine and hyperbolic tangent, respectively. Table Trigonometric and hyperbolic functions in the standard library of the C language While using functions in this group, we may have to convert the angles between degrees and radians. These conversions are given as θr=(π/180) θd and θd=(180/π)θr, where θd and θr are angles in degrees and radians, respectively. Illustrates evaluation of trigonometric functions #include <stdio.h> #include <math.h> int main () { int A, i ; double pi = 3.14159, C,S,T,ARad, theta; clrscr(); printf("Angle A\t\tcos(A)\t\tsin(A)\t\ttan(A)\ndegrees\n"); for ( i =0; i<4; i++) { A = 60*i; ARad = A*pi/180; //converting angle from degrees to radians. C =cos (ARad); //calling trigonometric functions T = tan (ARad); S = sin (ARad); printf("%d\t\t%4.3lf,\t\t%4.3lf\t\t%4.3lf\n",A, C,S,T); } theta= asin(.5)*180/pi; // conversion of radians into degrees printf("sin inverse of .5 = % 3.3lf degrees\n", theta); return 0; } The output of sine, cosine, and tangent of different values of angle are given in tabular form. For evaluation of functions sin(), cos(), etc. the angle should be converted into radians. The return values of inverse trigonometric functions such as acos(), asin(), etc., are also in radians. These values may be converted into degrees if so needed. In the above results sin-1(.5) is evaluated in radians and converted to 30 degrees. However, if it is put as asin(l/2) the result may come out to be 0.0 because 1/2 is integer division with value
http://ecomputernotes.com/what-is-c/function-a-pointer/trigonometric-and-hyperbolic-functions
CC-MAIN-2018-51
refinedweb
454
59.5
uhlman <dkuhlman <at> rexx.com> writes: > > I have some questions about the use of jar files with jython. Off list, Marvin Greenberg explained this to me. Quoting from Marvin: The jython.jar does not contain the python library modules. You can do a jar -tvf or similar. You won't see __future__.py (or datetime.py or UserDict.py or any of the other modules that you will see in the jython Lib/ directory. When you run the jython script, it adds JYTHON_HOME/Lib to your path which is why you can find __future__.py. You should be able to see the same thing in your original test by adding JYTHON_HOME/Lib to you sys.pathbefore doing the import. If you want to package everything into a jar, you'll have to put the various python library modules you use into the jar too. The jython.jar only includes the builtin modules like sys. Marvin is suggesting that we selectively add items from JYTHON_HOME/Lib as we learn that they are needed. In what follows, I'm just going to add everything under Lib/. Our goal is to be able to create a file or files that can be deployed and used to run a Jython application on a machine on which Jython is *not* installed (but java installed, of course). Suppose we want to run our applicaton by executing: $ java -jar myapp.jar testfuture.py To build our jar, we first make a copy of jython.jar, then add the Lib/ directory to it: $ cd $JYTHON_HOME $ cp jython.jar jythonlib.jar $ zip -u -r jythonlib.jar Lib Then we copy this expanded jar file, and add modules that are specific to our application. I'm also going to add a path to an additional jar file to the manifest: $ cd ../Test $ cp $JYTHON_HOME/jythonlib.jar myapp.jar $ zip myapp.jar Lib/showobjs.py # Add path to additional jar file. $ jar ufm myapp.jar othermanifest.mf Where, othermanifest.mf contains the following: Class-Path: ./otherjar.jar Now I have a self-contained jar file that I can run without anything from my Jython installation. All that is needed is myapp.jar and otherjar.jar To run the application, we do: $ java -jar myapp.jar testfuture.py My testfuture.py follows. Note that it imports showobjs.py which came from my Lib directory and also imports __future__ which came from JYTHON_HOME/Lib. Also, it imports from another jar (otherjar.jar), which I added to the manifest in myapp.jar with the Class-Path jar property. Ivan Horvath gave us that piece of the puzzle on the jython-user list (in thread "jythonstandalone with 3rd party jars" on 9/26/07. The othermanifest.mf that I used to add the additional jar file contains: Class-Path: ./otherjar.jar Here is a listing of the contents of otherjar.jar: $ unzip -l otherjar.jar Archive: otherjar.jar Length Date Time Name -------- ---- ---- ---- 0 09-26-07 09:28 META-INF/ 71 09-26-07 09:28 META-INF/MANIFEST.MF 49 09-26-07 09:27 testotherjar.py -------- ------- 120 3 files Here is testfuture.py, which I use to start up my test app: # testfuture.py from __future__ import generators import sys import testotherjar import sys print 'sys.version:', sys.version print 'sys.path:', sys.path import showobjs def test(): a = ['aaa', 'bbb', 'ccc', ] showobjs.showlist(a) b = showobjs.simplegen() for item in b: print 'gen:', item print 'Testing other jar.' testotherjar.fromotherjar('davy') if __name__ == '__main__': test() Dave It is a good suggestion to add names in Change Log=20 As far as i know most of the Open Source Projects follow the same = tradition . ...eager to know what other ppl are thinking on this. -: Keshav upadhyaya=20 >>> "Mehendran T" <TMehendran@...> 09/25/07 12:41 PM >>>=20 Hi, I have looked at the change log of latest release(2.2.1rc1). It is a = good=20 feeling to see our bug fix in the list. But It would be so nice if you = add=20 names too in the change log as in the following=20 Jython NEWS Jython 2.2.1 rc1 Bugs fixed - [ 1773865 ] Patch for [1768990] pickle fails on subclasses (Mehendran= T) And one more suggestion is that, Active contributor names can be=20 added in the CONTRIBUTOR LIST.=20 Thanks and Regards, Mehendran T ------------------------------------------------------------------------- This SF.net email is sponsored by: Microsoft Defy all challenges. Microsoft(R) Visual Studio 2005. _______________________________________________ Jython-dev mailing list Jython-dev@... Hi Charlie , Will you please review these patches and give me ur valuable suggestions = so that i will be=20 keep on submitting more patches. 67&atid=3D312867 67&atid=3D312867 67&atid=3D312867 67&atid=3D312867 thanks, keshav upadhyaya
https://sourceforge.net/p/jython/mailman/jython-dev/?viewmonth=200709&viewday=26
CC-MAIN-2018-17
refinedweb
788
78.04
Dynamically created Menu is triggered by space key Hi, When I have a menu which is created on demand, after first request - it can be triggered by space key press what is completely undesired. Here is example code: import QtQuick 2.9 import QtQuick.Controls 2.2 import QtQuick.Window 2.2 ApplicationWindow { visible: true property Menu menu: null Button { text: "get menu" onClicked: { if (!menu) menu = menuComp.createObject(this) menu.open() } } Component { id: menuComp Menu { width: 200; height: 400 MenuItem { text: "action 1" } MenuItem { text: "action 2" } } } } Do You think it is a bug or a kind of my misusing? I think it's because the menu gets the keyboard focus when it's created. Have You any idea how to get rid of this? I just checked activeFocusof menu is always falsewhether menu is visible or not and focusis falseby default The following seems to work: onClicked: { if (!menu) menu = menuComp.createObject(this) menu.open() menu.focus = false } @Wieland Works, but only first time... When menu is ignored by pressing ESCor clicking outside, problem comes back, but when menu item is selected menu doesn't react on space key. Mhh, maybe this? onClicked: { if (!menu) menu = menuComp.createObject(this) menu.open() focus = false // remove focus from the button on click } Brilliant! Thank You!
https://forum.qt.io/topic/82081/dynamically-created-menu-is-triggered-by-space-key
CC-MAIN-2017-34
refinedweb
215
67.45
Consider updating libffi to something more recent RESOLVED FIXED in mozilla32 Status () ▸ js-ctypes People (Reporter: RyanVM, Assigned: RyanVM) Tracking Firefox Tracking Flags (Not tracked) Details Attachments (3 attachments, 5 obsolete attachments) Created attachment 680397 [details] 3.0.11 changelog Mozilla is currently running on a snapshot of libffi from 08/2010. Version 3.0.11 was released in 04/2012. I don't know who would own this at this point, but I figured this should probably be on file anyway. Attached is the changelog from Mozilla's snapshot to the release of 3.0.11. I'm the effective owner of js-ctypes at this point, but I don't see any pressing reason to upgrade. It works fine for the limited ways in which we use it, so the risk-reward ratio of upgrading doesn't seem that great. Though it's not urgent for Mozilla, will we extract some fix for libffi from the latest source code? Such as Fix for a crasher due to misaligned stack on x86-32. I don't know whether the crash will happen in Fx. I have a feeling we probably would have seen the crashes, but it might be worthwhile to do at some point. Created attachment 757559 [details] 3.0.11->3.0.13 changelog Up to version 3.0.13 now.? And of course, none of the above means there won't be all new patches in need of making :) (In reply to Ryan VanderMeulen [:RyanVM][UTC-4] from comment #5) >? For the latter, i've just reported it in See Also: →. Flags: needinfo?(mh+mozilla) (In reply to Ryan VanderMeulen [:RyanVM UTC-4] from comment #7) >. Maybe the similar value such as ffi_type_pointer is defined as __declspec(dllexport) This is the relevant diff between what we have in the tree and 3.0.13. +/* Need minimal decorations for DLLs to works on Windows. */ +/* GCC has autoimport and autoexport. Rely on Libtool to */ +/* help MSVC export from a DLL, but always declare data */ +/* to be imported for MSVC clients. This costs an extra */ +/* indirection for MSVC clients using the static version */ +/* of the library, but don't worry about that. Besides, */ +/* as a workaround, they can define FFI_BUILDING if they */ +/* *know* they are going to link with the static library. */ +#if defined _MSC_VER && !defined FFI_BUILDING +#define FFI_EXTERN extern __declspec(dllimport) +#else +#define FFI_EXTERN extern +#endif + /* These are defined in types.c */ -extern ffi_type ffi_type_void; -extern ffi_type ffi_type_uint8; -extern ffi_type ffi_type_sint8; -extern ffi_type ffi_type_uint16; -extern ffi_type ffi_type_sint16; -extern ffi_type ffi_type_uint32; -extern ffi_type ffi_type_sint32; -extern ffi_type ffi_type_uint64; -extern ffi_type ffi_type_sint64; -extern ffi_type ffi_type_float; -extern ffi_type ffi_type_double; -extern ffi_type ffi_type_pointer; ; In other words, ffi kind of hardcodes being a shared library. Which is not the case for us when using the in-tree copy. As stated in the comment, we just need to define FFI_BUILDING when including the ffi header. Although that's a bit broad, you can add the following to js/src/Makefile.in and that should work: ifndef MOZ_NATIVE_FFI DEFINES += -DFFI_BUILDING endif Flags: needinfo?(mh+mozilla) Well, it builds now. Unfortunately, Windows still hates it. 08:01:06 WARNING - PROCESS-CRASH | C:\slave\test\build\tests\xpcshell\tests\toolkit\components\ctypes\tests\unit\test_jsctypes.js | application crashed [@ 0x1ba0048] 08:01:06 INFO - Crash dump filename: c:\docume~1\cltbld~1.t-x\locals~1\temp\tmpao1gnj\11633183-7f06-4dff-a7fe-1662b3106de8.dmp 08:01:06 INFO - Operating system: Windows NT 08:01:06 INFO - 5.1.2600 Service Pack 3 08:01:06 INFO - CPU: x86 08:01:06 INFO - GenuineIntel family 6 model 30 stepping 5 08:01:06 INFO - 8 CPUs 08:01:06 INFO - Crash reason: EXCEPTION_ILLEGAL_INSTRUCTION 08:01:06 INFO - Crash address: 0x1ba0048 08:01:06 INFO - Thread 0 (crashed) 08:01:06 INFO - 0 0x1ba0048 08:01:06 INFO - eip = 0x01ba0048 esp = 0x0012f380 ebp = 0x02b90200 ebx = 0x02dd7480 08:01:06 INFO - esi = 0x060bccc4 edi = 0x78ab07b5 eax = 0xffffffe3 ecx = 0x0012f344 08:01:06 INFO - edx = 0x01da5000 efl = 0x00010246 08:01:06 INFO - Found by: given as instruction pointer in context 08:01:06 INFO - 1 mozjs.dll!ffi_call + 0xa0 08:01:06 INFO - eip = 0x0065d8b1 esp = 0x0012f38c ebp = 0x02b90200 08:01:06 INFO - Found by: stack scanning 08:01:06 INFO - 2 mozjs.dll + 0x24d30f 08:01:06 INFO - eip = 0x0065d310 esp = 0x0012f390 ebp = 0x02b90200 08:01:06 INFO - Found by: stack scanning 08:01:06 INFO - 3 jsctypes-test.dll + 0x1747 08:01:06 INFO - eip = 0x02b81748 esp = 0x0012f3a8 ebp = 0x02b90200 08:01:06 INFO - Found by: stack scanning 08:01:06 INFO - 4 mozjs.dll!js::ctypes::AutoValue::SizeToType(JSContext *,JSObject *) [CTypes.cpp:79942e93f540 : 5275 + 0x8] 08:01:06 INFO - eip = 0x0064db21 esp = 0x0012f3bc ebp = 0x02b90200 08:01:06 INFO - Found by: stack scanning 08:01:06 INFO - 5 mozjs.dll!js::ctypes::FunctionType::Call [CTypes.cpp:79942e93f540 : 5827 + 0x1d] 08:01:06 INFO - eip = 0x00658c24 esp = 0x0012f3d8 ebp = 0x0012f4e4 08:01:06 INFO - Found by: stack scanning 08:01:06 INFO - 6 mozjs.dll!js::Invoke(JSContext *,JS::CallArgs,js::MaybeConstruct) [Interpreter.cpp:79942e93f540 : 482 + 0x29] 08:01:06 INFO - eip = 0x00453810 esp = 0x0012f4ec ebp = 0x0012f668 08:01:06 INFO - Found by: call frame info 08:01:06 INFO - 7 mozjs.dll!js::Invoke(JSContext *,JS::Value const &,JS::Value const &,unsigned int,JS::Value *,JS::MutableHandle<JS::Value>) [Interpreter.cpp:79942e93f540 : 539 + 0xc] 08:01:06 INFO - eip = 0x00453aa3 esp = 0x0012f670 ebp = 0x0012f708 08:01:06 INFO - Found by: call frame info 08:01:06 INFO - 8 mozjs.dll!js::DirectProxyHandler::call(JSContext *,JS::Handle<JSObject *>,JS::CallArgs const &) [jsproxy.cpp:79942e93f540 : 445 + 0x22] 08:01:06 INFO - eip = 0x00534583 esp = 0x0012f710 ebp = 0x0012f734 08:01:06 INFO - Found by: call frame info AArch64 (64-bit ARM) needs update as well. Port was added in commit. has testsuite update. . No idea about their release plans. Little Endian PowerPC guys requested new release in December but there was no reply. (In reply to Ryan VanderMeulen [:RyanVM UTC-5] from comment #12) > . Yeah, upstream is not really responsive on github.. Is there a chance for update anyway? I now have to keep 55KB patch just for libffi update. I'm happy to run your patch through Try if you want. I never made any progress after comment 10, FWIW. And obviously we can't land anything unless it works on all platforms :) I rebased my WIP patches and ran them through Try again. Everything looks good, except the Windows crashes persist :(. I need someone who can better understand that crash stack to help point in the right direction of what's going on here. Here's the Try push, which will be available to look at for the next 30 days. Another crash stack too. Comment 17 looks basically the same as comment 10. Just for kicks, I updated to libffi tip from the upstream Git repo. I see a win32 alignment fix in the post-3.0.13 changelog, so why not. Cool, libffi tip does appear to have fixed the Windows ctypes test crashes we were seeing with 3.0.13 :) However, we've still got the WinXP/Win8 (but *NOT* Win7 ?!?!) debug test failures. Interestingly, we also hit a similar-looking failure on one Win8 opt Cpp unit test run (in TestStartupCache) that went away on subsequent retriggers. AFAICT, the tests are crashing/hanging on startup. Unfortunately, we're not getting a usable stack trace when they do :( Created attachment 8380183 [details] 3.0.13 -> 51377b changelog The error codes from the logs convert to hex code 0xC0000135, which is STATUS_DLL_NOT_FOUND. I'm wondering if this is related to the linking issues we were hitting in comment 9. When I try to run the build in question, I get "The program can't start because MSVCR100D.dll is missing from your computer. Try reinstalling the program to fix this problem." I took a look at the try build logs. with 61a0549, msvcc.sh is invoked with -DFFI_BUILDING and -DFFI_DEBUG. with 8bad679, mvscc.sh is invoked with -fexceptions instead of the two above. translated in cl calls: with 61a0549, cl is given -DFFI_BUILDING, -RTC1, and not with 8bad679. Was your last try push with 8bad679 with --enable-debug disabled for ffi? ah, i forgot the main difference: with 61a0549, cl is given -MDd, with 8bad679, -MD > Was your last try push with 8bad679 with --enable-debug disabled for ffi? It was. So it would seem that --enable-debug is not working as expected... Created attachment 8383795 [details] [diff] [review] Relevant parts of 61a0549->8bad679 diff Here's what appear to be the relevant changes between revisions 61a0549 and 8bad679. Attachment #8383795 - Attachment is patch: true Attachment #8383795 - Attachment mime type: text/x-patch → text/plain Of course, the current code is also very different from rev 8bad679. On the Try push from comment 19, this is the output: sh.exe ./libtool --tag=CC --mode=compile c:/builds/moz2_slave/try-w32-d-00000000000000000000/build/js/src/ctypes/libffi/msvcc.sh -DHAVE_CONFIG_H -I. -Ic:/builds/moz2_slave/try-w32-d-00000000000000000000/build/js/src/ctypes/libffi -I. -Ic:/builds/moz2_slave/try-w32-d-00000000000000000000/build/js/src/ctypes/libffi/include -Iinclude -Ic:/builds/moz2_slave/try-w32-d-00000000000000000000/build/js/src/ctypes/libffi/src -O3 -warn all -c -o src/x86/ffi.lo c:/builds/moz2_slave/try-w32-d-00000000000000000000/build/js/src/ctypes/libffi/src/x86/ffi.c The Windows linker issues were resolved upstream. The fix to re-add code that had been accidentally removed. Try confirms that Windows debug is fixed now :) I'm in touch with Anthony (the upstream maintainer) and will try to get some of the remaining in-tree patches upstreamed and then try to get this landed soon after. Created attachment 8384586 [details] config.status from broken build More fun, I attempted to build this locally and ran into mozmake-specific bustage. 1:26.96 config.status: linking c:/mozbuild/src/mozilla-central/js/src/ctypes/libffi/src/x86/ffitarget.h to include/ffitarget.h 1:27.03 config.status: executing buildir commands 1:27.03 config.status: skipping top_srcdir/Makefile - not created 1:27.06 config.status: executing depfiles commands 1:29.03 config.status: executing libtool commands 1:29.30 config.status: executing include commands 1:29.33 config.status: executing src commands 1:29.41 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- -- J 1:29.41 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- 6 1:29.41 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- 4 1:29.41 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- 1:29.41 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- y 1:29.42 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- y 1:29.42 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- a 1:29.42 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- 1:29.42 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- A 1:29.42 C:\mozbuild\msys\bin\mozmake.EXE: invalid option -- N 1:29.42 Usage: mozmake.EXE [options] [target] ... 1:29.42 Options: 1:29.42 -b, -m Ignored for compatibility. 1:29.42 -B, --always-make Unconditionally make all targets. 1:29.42 -C DIRECTORY, --directory=DIRECTORY 1:29.42 Change to DIRECTORY before doing anything. 1:29.42 -d Print lots of debugging information. 1:29.42 --debug[=FLAGS] Print various types of debugging information. 1:29.42 -e, --environment-overrides 1:29.42 Environment variables override makefiles. 1:29.42 --eval=STRING Evaluate STRING as a makefile statement. 1:29.42 -f FILE, --file=FILE, --makefile=FILE 1:29.42 Read FILE as a makefile. 1:29.42 -h, --help Print this message and exit. 1:29.42 -i, --ignore-errors Ignore errors from recipes. 1:29.42 -I DIRECTORY, --include-dir=DIRECTORY 1:29.42 Search DIRECTORY for included makefiles. 1:29.42 -j [N], --jobs[=N] Allow N jobs at once; infinite jobs with no arg. 1:29.42 -k, --keep-going Keep going when some targets can't be made. 1:29.42 -l [N], --load-average[=N], --max-load[=N] 1:29.42 Don't start multiple jobs unless load is below N. 1:29.42 -L, --check-symlink-times Use the latest mtime between symlinks and target. 1:29.42 -n, --just-print, --dry-run, --recon 1:29.42 Don't actually run any recipe; just print them. 1:29.42 -o FILE, --old-file=FILE, --assume-old=FILE 1:29.42 Consider FILE to be very old and don't remake it. 1:29.42 -O[TYPE], --output-sync[=TYPE] 1:29.42 Synchronize output of parallel jobs by TYPE. 1:29.43 -p, --print-data-base Print make's internal database. 1:29.43 -q, --question Run no recipe; exit status says if up to date. 1:29.43 -r, --no-builtin-rules Disable the built-in implicit rules. 1:29.43 -R, --no-builtin-variables Disable the built-in variable settings. 1:29.43 -s, --silent, --quiet Don't echo recipes. 1:29.43 -S, --no-keep-going, --stop 1:29.43 Turns off -k. 1:29.43 -t, --touch Touch targets instead of remaking them. 1:29.43 --trace Print tracing information. 1:29.43 -v, --version Print the version number of make and exit. 1:29.43 -w, --print-directory Print the current directory. 1:29.43 --no-print-directory Turn off -w, even if it was turned on implicitly. 1:29.43 -W FILE, --what-if=FILE, --new-file=FILE, --assume-new=FILE 1:29.43 Consider FILE to be infinitely new. 1:29.43 --warn-undefined-variables Warn when an undefined variable is referenced. 1:29.43 1:29.43 This program built for Windows32 1:29.43 Report bugs to <bug-make@gnu.org> 1:29.43 Makefile:716: recipe for target 'all' failed 1:29.43 mozmake.EXE[5]: *** [all] Error 2 Attached is the config.status from the failed run. Attachment #8383795 - Attachment is obsolete: true Try run of version 3.1rc1 (build system defaults). And Windows w/ mozmake: Created attachment 8421469 [details] [diff] [review] Update to libffi 3.1 libffi 3.1 is out. This is just the straight-up replacement of the in-tree sources with the upstream. There will be more patches coming for the in-tree modifications and build system changes needed to actually get it to build (not as bad as you'd think, I swear!). Try run: And FWIW, I've been running with various upstream git revs in my local builds for months now with no ill effects, so I'm feeling good about this :) Assignee: nobody → ryanvm Attachment #680397 - Attachment is obsolete: true Attachment #757559 - Attachment is obsolete: true Attachment #8380183 - Attachment is obsolete: true Attachment #8384586 - Attachment is obsolete: true Status: NEW → ASSIGNED Created attachment 8421480 [details] [diff] [review] Modifications to upstream libffi for the Mozilla build These days, we only need a single one-liner to get libffi to build \m/. However, this does come with one big caveat - this removes some past hackarounds needed for pymake compatibility. This is generally well and good these days since pymake is deprecated on trunk in favor of mozmake/gmake 4.0 anyway (and mozmake is included with MozillaBuild 1.9+). However, AFAIK, pymake is still required to build comm-central, so this patch would break Thunderbird and Seamonkey if it lands as-is. I really hoped that jcranmer's big build system refactoring would land before libffi 3.1 was released, but that hasn't happened. So we can either sit on this patch until it does or I can re-add the pymake hackarounds. Thoughts? :) Attachment #8421480 - Flags: review?(mh+mozilla) Created attachment 8421481 [details] [diff] [review] Mozilla build system changes for libffi 3.1 A couple changes are needed on the Mozilla side of things for it to build. Attachment #8421481 - Flags: review?(mh+mozilla) My plan at this point is to first figure out who should officially do the review on the upstream update (I thought I heard the other day that jorendorff owns CTypes these days?). And I need a decision on the question in comment 33 about whether to maintain pymake compatibility or not. Once those are taken care of, this will land as a folded patch as there's really no sense in landing them separately anyway. Considering how quickly things are moving on c-c land, i'd rather work around pymake issues for now. Comment on attachment 8421469 [details] [diff] [review] Update to libffi 3.1 Review of attachment 8421469 [details] [diff] [review]: ----------------------------------------------------------------- OK. Attachment #8421469 - Flags: review?(jorendorff) → review+ BOOM GOES THE DYNAMITE! Status: ASSIGNED → RESOLVED Last Resolved: 4 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla32
https://bugzilla.mozilla.org/show_bug.cgi?id=810631
CC-MAIN-2018-22
refinedweb
2,798
59.9
In this article, we will start from a small explanation of process IDs and then we will quickly jump on to the practical aspects where-in we will discuss some process related C functions like fork(), execv() and wait() . Linux Processes Series: part 1, part 2, part 3 (this article). Process IDs Process IDs are the process identifiers that are non negative numbers associated with a process. These numbers are unique across the processes running in the system. This uniqueness of the process ID sometimes is used by the process to create some unique filenames. When a process is terminated from system, its process ID is made available for reuse. But there is a specific delay that is accounted before making the process ID available for reuse. This is because the process ID that was associated with the previous process that is now terminated may well be into use in form of a file name etc. So a delay is added before reusing the same process ID. Process ID 1 is for the init process. This is the first process that is started once a system boots up. The program file for the init process can be found either in /etc/init or in /sbin/init. The init process is a user level process but runs with root privileges and is responsible for bringing the system up to a state once the kernel has bootstrapped. The startup files read by the init process to achieve a certain state are - /etc/rc*.d - /etc/init.d - /etc/inittab Process ID 0 is of the scheduler of the system. It is a kernel level process responsible for all the process scheduling that takes place inside the system. Process Control Functions The fork() Function As already discussed in the article creating a daemon process in C, the fork function is used to create a process from within a process. The resultant new process created by fork() is known as child process while the original process (from which fork() was called) becomes the parent process. The function fork() is called once (in the parent process) but it returns twice. Once it returns in the parent process while the second time it returns in the child process. Note that the order of execution of the parent and the child may vary depending upon the process scheduling algorithm. So we see that fork function is used in process creation. The signature of fork() is : pid_t fork(void); The exec Family of Functions Another set of functions that are generally used for creating a process is the exec family of functions. These functions are mainly used where there is a requirement of running an existing binary from withing a process. For example, suppose we want to run the ‘whoami’ command from within a process, then in these kind of scenarios the exec() function or other members of this family is used. A point worth noting here is that with a call to any of the exec family of functions, the current process image is replaced by a new process image. A common member of this family is the execv() function. Its signature is : int execv(const char *path, char *const argv[]); Note: Please refer to the man-page of exec to have a look at the other members of this family. The wait() and waitpid() Functions There are certain situations where when a child process terminates or changes state then the parent process should come to know about the change of the state or termination status of the child process. In that case functions like wait() are used by the parent process where the parent can query the change in state of the child process using these functions. The signature of wait() is : pid_t wait(int *status); For the cases where a parent process has more than one child processes, there is a function waitpid() that can be used by the parent process to query the change state of a particular child. The signature of waitpid() is : pid_t waitpid(pid_t pid, int *status, int options); By default, waitpid() waits only for terminated children, but this behavior is modifiable via the options argument, as described below. The value of pid can be: - < -1 : Wait for any child process whose process group ID is equal to the absolute value of pid. - -1 : Wait for any child process. - 0 : Wait for any child process whose process group ID is equal to that of the calling process. - > 0 : Wait for the child whose process ID is equal to the value of pid. The value of options is an OR of zero or more of the following constants: - WNOHANG : Return immediately if no child has exited. - WUNTRACED : Also return if a child has stopped. Status for traced children which have stopped is provided even if this option is not specified. - WCONTINUED : Also return if a stopped child has been resumed by delivery of SIGCONT. For more information on waitpid() check-out the man-page of this function. An Example Program Here we have an example where we have made use of all the types of functions described above. #include <unistd.h> #include <sys/types.h> #include <errno.h> #include <stdio.h> #include <sys/wait.h> #include <stdlib.h> int global; /* In BSS segement, will automatically be assigned '0'*/ int main() { pid_t child_pid; int status; int local = 0; /* now create new process */ child_pid = fork(); if (child_pid >= 0) /* fork succeeded */ { if (child_pid == 0) /* fork() returns 0 for the child process */ { printf("child process!\n"); // Increment the local and global variables local++; global++; printf("child PID = %d, parent pid = %d\n", getpid(), getppid()); printf("\n child's local = %d, child's global = %d\n",local,global); char *cmd[] = {"whoami",(char*)0}; return execv("/usr/bin/",cmd); // call whoami command } else /* parent process */ { printf("parent process!\n"); printf("parent PID = %d, child pid = %d\n", getpid(), child_pid); wait(&status); /* wait for child to exit, and store child's exit status */ printf("Child exit code: %d\n", WEXITSTATUS(status)); //The change in local and global variable in child process should not reflect here in parent process. printf("\n Parent'z local = %d, parent's global = %d\n",local,global); printf("Parent says bye!\n"); exit(0); /* parent exits */ } } else /* failure */ { perror("fork"); exit(0); } } In the code above, I have tried to create a program that : - Uses fork() API to create a child process - Uses a local and global variable to prove that fork creates a copy of the parent process and child has its own copy of variables to work on. - Uses execv API to call ‘whoami’ command. - Uses wait() API to get the termination status of child in the parent. Note that this API holds the execution of the parent until child terminates or changes its state. Now, when the above program is executed, it produces the following output : $ ./fork parent process! parent PID = 3184, child pid = 3185 child process! child PID = 3185, parent pid = 3184 child's local = 1, child's global = 1 himanshu Child exit code: 0 Parent'z local = 0, parent's global = 0 Parent says bye! { 8 comments… read them below or add one } Hi, Thanks a lot, very nice article.. I really appreciate your tutorials, I’m a student and I have courses about Linux development. Your articles give another point of view about this subject, lots of fun! again, well written, concise, straight to the point. Examples have taken out all the fluff. I like this guy!!! return execv(“/usr/bin/whoami”,cmd); has to be return execv(“/usr/bin/”,cmd); @Chandra Thanks for pointing it out. It is fixed now. I tried the code, but I found a problem: the whoami command didn´ start. Finally, I replaced the line return execv(“/usr/bin/”,cmd); by return execv(“/usr/bin/whoami”,cmd); and everythig went fine. Excellent tutorial i found out that my first line output is child process? is that wrong?
http://www.thegeekstuff.com/2012/03/c-process-control-functions/
CC-MAIN-2015-11
refinedweb
1,323
70.63
2.1 Hadoop Block While processing a file in Hadoop ecosystem, it gets converted into a block or gets split into multiple blocks depending upon the size of the file. Each block is made up of a default size of 64 MB. Below diagram explains how a file of 512 MB is gets split into 8 different blocks Here the file that comes for processing has a size of 512 MB. So, Hadoop will convert it into 8 different blocks, each with 64 MB size (In this example we have kept the default block size which is 64 MB). Now instead of this file, 8 different blocks would be processed by Hadoop. Now question comes - what happens when your file size is less than block size? Here is the answer: Hadoop utilizes its resources based on the size of the file (remember it is Best for Big Data), however when a file comes for processing that has a size less than the assigned block size, then it will get processed in a single block. It has following drawbacks: - The block would be processed but with having a portion as empty and the remaining capacity of the big block would turn out to be a big waste. - If we have lot of small size files, then lot of latency would be introduced in the system as Hadoop will take time to process the entire block. Now you can notice what we mentioned in the 1st chapter - that Hadoop processing is good when you have large data in a single file and not when you have large number of files but with smaller data in it. - Another issue is that Name Node stores meta data information (block information, where a particular block is allocated) for every block and if lot of small files get processed then each time meta data information would be captured and the size of metadata information would go on increasing. So, how to deal with this kind of situation? Answer is to modify the default block size as per the size of file. In this case we could lower down the size from default value 64 MB to something else. Block Replication Hadoop system runs over the cluster of commodity computers (Known as Data Nodes) and when any of these Data Nodes fail, we are going to lose the data that is on that particular machine. In order to overcome this problem, Hadoop has a mechanism of Block replication among the Data Nodes. This means that the data that is available in one machine would get replicated and available on other machine as well. This will assure data availability all the time. The default replication factor is 3; this means that at any particular point of time there would be as many as 3 copies of the same data that would be available in the cluster of Hadoop. Now, One can think that by replicating same data we are introducing redundancy in the system. Definitely we are adding additional burden in the system by carrying same data over and over again. But think this way that, we took care of Node failure issue and most importantly we have saved the data. If we lose data then sometime you need to waste a lot of time and resources in getting back that data. 2.2 Hadoop Rack Rack consists of sequence of Data Nodes. Single Hadoop cluster may have one or more Racks depending on the size of the cluster. Let us understand how the Racks are arranged for small and large Hadoop cluster. Small Cluster This is a Hadoop cluster which is considerably small in size where all machines are connected by a single switch. In this type of cluster, we normally have a single Rack. For Block Replication (where Replication Factor = 3), first the Name Node copies data to one Data Node and then other two Data Nodes get selected randomly. Larger Clusters This Hadoop cluster involves many Data Nodes and is of the bigger size. And these different Data Nodes are scattered over many Racks. While implementing Block Replication (where Replication Factor = 3) Hadoop will make first copy of Block at one of the nearby Racks and then place the other two copies of Blocks on other Rack (on two different Data Nodes at random). Here in this scenario our data is available on two different Racks. We have taken care of two things – - Data Node Failure If any of the Data Node goes down then data would be available in other two Data Nodes. - Rack Failure In this case if one of the Racks (Even the Rack with two Data Nodes) goes down then our data would be available over other Rack (With one Data Node). In both the scenarios our data is safe and it will continue to ensure availability. 2.3 Hadoop NameNode NameNode is the main central driving component of HDFS architecture framework. It is a Master node, keeps a track of files assigned to Data Node. Its Major Responsibilities include: To maintain the file structure in the file system Keeps track of which file is located where in the cluster. When a file is processed in the form of several Blocks then Name Node knows the exact position of Block in the Hadoop cluster i.e. among the Data Nodes. Take care of communication with Client Application Client application sends a request to Name Node to perform some file related operation such as add, move, copy or delete. Then Name Node responds to the requests by returning a list of appropriate Data Nodes wherever data particular to that file is available. Update the Meta data table when Data Node fails Upon failure of a Data Node it is the responsibility of Name Node to replicate blocks (those that were running on that particular failed node) to some other Data Node available in Hadoop cluster. After that, it updates the Meta data information. Two different types of information it updates - First it maintains the information of Block associated with a file i.e. which Block maps to which file. - Second information is which Block available in which Data Node. The Name Node is a Single Point in Hadoop that means whenever Name Node goes down we lose all information available in it. We have something known as Secondary NameNode that was not the backup (Standby) of the Name Node. It only created checkpoints of the namespace. This was the configuration available before Hadoop 0.21 version, but after that we have Backup NameNode that is the mirror image of Name Node and can act as Name Node in the cases when actual Name Node goes down. Some best practices with Hadoop NameNode - Hardware specification has to be very good. - Store multiple copies of Meta data by having more than one Name Node directory. It is best to have these directories on separate disks so that in the event of disk failure Meta data would not be lost. - Add storage whenever free space available on disk of Name Node goes lower. - Try to avoid hosting Name Node and JobTracker on the same machine. 2.4 Hadoop DataNode Data Node is the component where actual data processing and execution takes place. We have already seen that these are normally commodity computers where actual data gets stored. Its Major Responsibilities include: Update to Name Node with their live status Every Data Node sends a heartbeat message to the Name Node every 3 seconds and conveys that it is alive. In the scenario when Name Node does not receive a heartbeat from a Data Node for 10 minutes, the Name Node considers that particular Data Node as dead and starts the process of Block replication on some other Data Node. Synchronization among Data Nodes All Data Nodes are synchronized in the Hadoop cluster in a way that they can communicate with one another and make sure of - Balancing the data in the system - Move data for keeping high replication - Copy Data when required Verification of data Data Node stores a Block and maintains the checksum for it. Data Node is accountable for verifying the data even before storing it and its checksum. A client writes data and sends it to cluster of HadoopData Nodes and the last Data Node in the pipeline verifies the checksum. The Data Node updates the Name Node with the Block information at a regular interval of time. If the checksum is not correct for a particular Block in the event of some failure then Data Node does not report that particular Block information to the Name Node. Client does a checking and when Blocks do not match with their available checksum then it will request the same Block from another Data Node which has a replication copy of that Block. Let us see some hardware configuration scheme for a Data Node - Hard Disk (15-20)- 1-5TB - Processor- 2-2.5GHz - RAM - 64-512GB - If you want more throughput then better go for a higher storage, recommended 8-10 GB Ethernet 2.5 Hadoop Job Tracker Apart from Master Slave relationship of Name Node and Data Node there is one more relation of the same kind in the Hadoop eco system and it is between Job Tracker and Task Tracker where Job Tracker acts as a Master and Task Tracker acts as Slave. If you notice then you will find Name Node and Data Node are responsible for majorly file management while here Job Tracker and Task Tracker are responsible for resource management. Job Tracker manages the Task Trackers and tracks the resource availability, progress of submitted Job and any occurring fault tolerance in the system. JobTracker receives requests from client and assigns this request to Task Trackers with the information of the tasks to be performed. While assigning task to Task Tracker, the Job Tracker first checks for Data Node in following order - Initially checks and assigns to a TaskTracker that is available locally in the Cluster. - Otherwise it assigns to the Task Tracker on Data Node that is available in the same Rack. - Finally if both the above mentioned two cases are not possible then it assigns to a Data Node in different Rack. In the event when Data Node fails, then Task Tracker assigns the task to other Task Tracker where the replica of the same task exist. This will any how make sure that the task is going to be completed. 2.6 Hadoop Task Tracker Task Tracker follows the order received from Job Tracker and its major responsibility is to update the Job Tracker with the progress of the Job being run. We have a WEB interface available in Hadoop ecosystem through which the current status and all information about the Job Tracker and the Task Tracker can be seen. Let us try to understand all its functioning in more detail – - The Task Tracker sends continuous heartbeat messages to Job Tracker to inform its live status. Task Tracker also updates with the available empty slots for running additional tasks. - The Task Tracker assures not to get down by spawning various JVM processes. - All the empty slots available in Task Tracker indicate the number of tasks that it can accept. 2.7 Map Reduce MapReduce programs are the core of Hadoop eco System built to compute large volumes of data in a parallel manner across cluster of computers. The basic feature of Map Reduce paradigm is to divide the task among various Data Nodes to perform execution on these Data Nodes and then in return collect the output from all these Data Nodes. The first part where we divide the task across cluster of computers is known as Map while getting the output and collect it back is known as Reduce. Let us understand it with this example, suppose we have three files with data as – File1 – This is my Book File2 – I am reading a Book File3 – Book is a good source of knowledge The task is to collect the frequency of word present in all three files combined So, output should be – Let us execute this task in Hadoop ecosystem. Assume we have three different instances of Data Nodes running in our cluster. All three files would be executed in three different Data Nodes. A Mapper program will be responsible for taking the frequency word count from each individual Data Node. And a Reducer program will be responsible for collecting frequency statistics from each individual Data Node and after that perform aggregation by doing sum of count for each word group. After doing this exercise we should receive output mentioned in the table. Let us see below diagram and understand Map Reduce Paradigm Map Reduce 2.8 Hadoop File system (HDFS) HDFS is Hadoop distributed file system. HDFS defines the way the storage will take place. HDFS is built to yield maximum throughput. Its efficient performance seen while working with large data sets.As we have already mentioned that HDFS is Highly Available and Very Scalable. We have also seen how by utilizing the concept of replication, it is fault tolerant. The effective Map Reduce paradigm boost its capability of high end distributed computing in a parallel environment. Below diagram shows how various components of HDFS are connected and work as a single massive unit. Name Node and the Job Tracker get list of all rack ids associated with their Slave Nodes. By using this information HDFS creates a mapping between IP address and received rack id. HDFS applies this knowledge for Block replication over different racks. Hadoop does one more thing very intelligently and that is identifying the performance of Data Nodes. If it identifies slowness of any Data Node while execution, then it immediately runs the same redundant job on a different Data Node. So, now the same task is being run over two different Data Nodes. Here whichever Data Node’s task gets completed first that gets reported to Job Tracker. We have already understood Hadoop architecture and now it is the time to start doing some visualization of Hadoop.
http://www.wideskills.com/hadoop/hadoop-components
CC-MAIN-2019-51
refinedweb
2,348
67.08
This is a simple MSN Messenger like chat application using socket programming. It allows the users to send and receive messages by running two Simple Messengers. You can read this article from my blog:. There are two specific features other than the regular MSN Messenger: Hex' No print on receiving'. A helper class wraps a Socket and a byte array. public class KeyValuePair { public Socket socket; public byte[] dataBuffer = new byte[1]; }(); } Since the AsyncCallback is used for the server, receiving the data will be different. KeyValuePair aKeyValuePair = (KeyValuePair)asyn.AsyncState; //end receive... int iRx = 0; iRx = aKeyValuePair.socket.EndReceive(asyn); if (iRx != 0) { byte[] recv = aKeyValuePair.dataBuffer; } byte[] recv = new byte[client.ReceiveBufferSize]; //you can define your own size int iRx = client.Receive(recv ); if (iRx != 0) { //recv should contain the received data. } Sending the data is as easy as calling the Send() method that comes with the Socket. E.g. soc.Send(dataBytes);//'soc' could either be the server or the client In Server's receive block, you might want to handle the SocketException with ErrorCode == 10054. This error code indicates the connection reset for peers. Refer to MSDN. if (e.ErrorCode == 10054)//Connection reset, // { /* ... */ }. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/miscctrl/SimpleMessenger.aspx
crawl-002
refinedweb
204
62.44
Rich adds support for Jupyter Notebooks I recently added experimental support for Jupyter Notebooks to Rich. Here's an example of the kind of output it can generate: This is the Rich test card you can generate in the terminal with python -m rich. Rich is a library for fancy formatting to the terminal, where-as Jupyter supports HTML natively. So why integrate one with the other? I suspect it may be useful to render something like a table in the terminal which also displays nicely in Jupyter. Jupyter does convert ANSI codes for style / colour etc in to HTML, which is nice. But it didn't work with Jupyter initially, because Jupyter replaces stdout with a file object that captures output, and that that object doesn't declare itself as a terminal capable device. It also broke tables and any other formatting that assumes a fix grid, because the browser wraps output which caused misalignment of box characters. Fortunately I was able to workaround both those issues. You can try this out by doing from rich.jupyter import print within Jupyter. This will give the print function a few super-powers: from rich.jupyter import print print("Hello, [green bold]World![/] :ninja:") Now you can insert colour and style in to your output. Rich will pretty print Python data structures with syntax highlighting, and you can render of the renderables supported by Rich. I'm not sure how useful this is to anyone. I had barely used Jupyter before Sunday afternoon, but if it is useful please tweet @willmcgugan and let me know. I'm not sure if its worth making this an official part of the Rich API, or just leave it as a novelty I hacked up to avoid boredom while self-distancing. Looks good, I use jupyter day to day and would definitely like to try it out when available on pypi It’s on Pypi now. Would be interested in feedback. This looks awesome! I work from a Jupyter notebook 90% of the time and would particularly appreciate support for the progress bar and traceback functionality. This is such a great feature, I work from Jupyter Lab environments mostly. So this support is great to see thank you.
https://www.willmcgugan.com/blog/tech/post/rich-adds-support-for-jupyter-notebooks/
CC-MAIN-2022-27
refinedweb
373
73.17
Introduction: Raspberry PI L298N Dual H Bridge DC Motor There are not any examples that I could find that properly show how to get a simple DC motor working with RaspberryPI. This is a simple tutorial for "How to make a motor turn." Robots, wheels, conveyors, and all sorts of stuff can be made to move using simple Raspberry Code and really inexpensive RC-racer like motors. The parts for this can be purchased on eBay for about $10 (certainly under $20). This project assumes you have access to wires, old power supplies (scavenged from old electronic devices), and misc other items. I would think this to be a fundamental must have in the journals of how-tos. After figuring it out from a confluence of different sources and some hacking. We have for you the simple steps for getting a standard DC motor to turn with a python code snippet on a raspberry pi. Here's a youtube using an Arduino, not quite the same thing: What is an "H-Bridge" and how does a DC motor change direction? When you hook a dc motor up to {+} wire on one lead and {-} wire on the other lead, the motor turns a certain direction. When you switch the wires to other leads, the motor turns the other direction. But how do we "electronically" switch the electric polarity using circuits. ANSWER: H-Bridge. The circuits for H-Bridges are non-trivial for the beginner. The L298N module is a simple ready to rock and roll "dual" H-bridge circuit board. That means you have the ability to control TWO motors in either direction. You need to get the module board for this. Parts: CONTROLLER: 5 PCS Dual H Bridge DC Stepper Motor Drive Controller Board Module Arduino L298N Double H bridge drive Chip: L298N Logical voltage: 5V Drive voltage: 5V-35V Logical current: 0mA-36mA Drive current: 2A(MAX single bridge) Storage temperature: -20 to +135 Max power: 25W CHEAP DC MOTOR w/ GEAR BOX Car Gear Motor TT Motor Robot... 100% brand new and high quality) REFERENCE: Another instruction on doing pretty much same thing! Step 1: Extra DC Power Supply Dedicated for Motor Note the sample image has two power supplies, one for the Raspberry PI (see elsewhere for details.) The motor needs a DC power supply in the Voltage range of the motor. I used a left over 5V old power supply. Make sure the power supply has enough amperage or power to make the motor turn what ever you're looking to drive. OLD POWER SUPPLIES (power chargers) Do you have an old toy, used electronics, broken something around the house that isn't used anymore. These old chargers and DC power plugs are awesome, cause they are old or connector is ruined or damaged, you can still use them for hacking and building stuff. Learn to read about DC power, (See the picture of the old cell phone charger these are mostly 5V, as this one is and current drive is pretty low like only 400mA).... The sizing of the power supply is outside of this scope. In my case, we are just getting the motor to turn one direction or the other. BATTERY PACK: You can use the power from batteries to run your motors too. Purchase 6V DC batteries, (NOTE: batteries are already DC power) to drive your motors. (see picture sample above). NEVER USE RASPBERRY PI built in power to DRIVE motors. DC MOTOR SIZING: The power to your motor needs to match. The motors we have are 6V DC hobby motors. If you're building something for real rather than just a quick learning tutorial like this one. Make sure you match you power drive to the motor requirements. Every motor needs different drive voltages. If you have less voltage than the motor needs then the motor will be under powered and may not turn what you want or might burn your power supply out, or worst start melting stuff. If your power supply is too high like for example you put 9 or 12 volts into a 6 volt motor, then the motor will turn really fast and hard. You see people hack their kids toy race cars with bigger batteries, the cars go fast and have a ton more power than the default design. These tricks always come at a costs for safety, reliability, and component breaking (melting, fires etc). In our simple getting a motor to turn tutorial, a 5vDC old phone charger will suffice to drive a 6v motor, it's not optimum but it will work for a free spinning motor. If the motor has torque or load in excessive the electrical power capabilities of the power supply the motor will likely stall. Go read about motor sizing, stalling, and hacking motors as your next learning adventure after this basic introduction. Sample how to using chip instead of the controller: Alternative reference how-to using the CHIPs on bread board instead of controller board. Step 2: Connector Wires for Driving the Direction on the Motor. The "GPIO" pins on the Raspberry PI will be used to "trigger" the l298n direction. There are TWO pins per motor. One pin drives the power, {+} to one lead when the pin is activated. If you have a PI2 or the newer B+ then there is a different pin layout for that. I did this one with the older base model B. I like the pin outs that show the actual unit so you can rotate it to match the one on your desk next to your machine and get the correct pins. I'm always spatially screwing up, so I'll end up hooking up the wrong pins and on the raspberry pi, the two wrong pins together could cause the poor thing to burn out or atleast only crash and reboot (this has happened to me many times, so far lucky not the burn the PI up yet). I'll talk more about the GPIO numbers versus the physical pin numbering nomenclature when we get to the code review. Suffice to say, if your code is not controlling the motors on/off directions, then likely you have the physical wires connected to the wrong pins or your code is referencing GPIO vs physical pin or visa-versa. TROUBLESHOOTING TRICK Check your physical pins with a volt meter or LED glow to make sure they are putting output when your code indicates the pin should be "HIGH" or energized. Step 3: BREAD BOARD for Simplifying Wire Connections. I used a bread board to make the wire connections easier. Trust me on this for your first time, get yourself a cheap bread board or just buy or find a friend with a full Maker Kit. Even a cheap prototyping green or brown board are good for hacking wires together. Plus they boards hold them in place from moving around and possibly shorting out. 1> hook your Raspberry USB micro charger up to your standard PI input. 2> hook your 2nd power supply for your DC motor up to the + - on the bread board. There's a few WHY and HOW to use electronic bread boards here on Instructables.... Link to instructable "breadboard" how -to use. Step 4: Code Snippet Python Sample Make Motor Turn One Way Then Other Code snippets for python. This page assume you have enough knowledge of python to know it's a programming language that's human readable and typed into a text file. Some experience with copy and paste will be required. Read the error messages and happy hacking your python motor codes. GPIO vs Physical Pin Nomenclature: There are virtual "GPIO" names for certain pins that are digital output pins, these overlay on top of or in lieu of the actual PIN numbering from the board layout. When writing your code you'll need to indicate which one of the pin numbering systems that you're using. The board pin number or the GPIO. <p>GPIO.setmode(GPIO.BOARD)</p> SIMPLE CODE SNIPPET Here's the basic python code snippet for turning pins on/ off: )</p> motor.py # Import required libraries import sys import time import RPi.GPIO as GPIO # Use BCM GPIO references # instead of physical pin numbers #GPIO.setmode(GPIO.BCM) mode=GPIO.getmode() print " mode ="+str(mode) GPIO.cleanup() # Define GPIO signals to use # Physical pins 11,15,16,18 # GPIO17,GPIO22,GPIO23,GPIO24 StepPinForward=16 StepPinBackward=18 sleeptime=1 GPIO.setmode(GPIO.BOARD) GPIO.setup(StepPinForward, GPIO.OUT) GPIO.setup(StepPinBackward, GPIO.OUT) def forward(x): GPIO.output(StepPinForward, GPIO.HIGH) print "forwarding running motor " time.sleep(x) GPIO.output(StepPinForward, GPIO.LOW) def reverse(x): GPIO.output(StepPinBackward, GPIO.HIGH) print "backwarding running motor" time.sleep(x) GPIO.output(StepPinBackward, GPIO.LOW) print "forward motor " forward(5) print "reverse motor" reverse(5) print "Stopping motor" GPIO.cleanup() Run the motor from the raspberry pi command line like this: sudo python motor.py TROUBLESHOOTING add print statement to gauge progress. <p>GPIO.setup(Motor1E,GPIO.OUT)<br> print "Turning motor on" GPIO.output(Motor1A,GPIO.HIGH)</p> Step 5: Troubleshooting. Tools & Tricks: - VOLT Meter is requisite for troubleshooting. - Code snippets are helpful too. Debugging Items: - Does the motor work if you put power directly to it from the 2nd power supply? - Does the 2nd power supply have a voltage near or about the rated level of the motor when read with meter? - Are the screws properly tighten on the wires? - Did you connect {-} or "ground" between the controller module and Raspberry PI? - Are the Raspberry PI GPIO pins properly connected to the controller board? - Are the Raspberry PI GPIO pins correct? - Is the Code referencing the right pin numbers for HIGH & LOW (GPIO vs actual nomenclature?) - Do you have the controller wires connected to the right set of pins left side are left 2, right side are right 2? (Idiot question I know, but stupider things have failed the space shuttle.) - Did you use the "sudo" command before typing your python motor.py ? - Did you spell any of the the commands with capital letters or mix case? Linux is case specific. - How far in the code does the print "code got this far before error" lines occur to debug your code? 2 People Made This Project! Recommendations 9 Comments Question 2 years ago on Step 2 Very nice tutorial, however To use 2 dc-motors I have a Grove I2C motor driver (Seeed). Do you have instruction how to make this happen on my RPi 3b? Thank you. 4 years ago How do you get this working with variable motor speeds? How do you get the pi sending two PWM signals at different duty cycles? 5 years ago There is a video on Youtube that explains it very nice with a two wheeled robot: but I'm interested in how to control 4 motors. I guess I need 2 H Bridge Dual controller chips. Any tips on how to control that? Now that I think about it, as long as I can connect 2 L298N I guess I should be able to control them both through the GPIO. Reply 4 years ago No, you dont need 2, simply connect 2 motors to each side of the H-Bridge. Just stick the 2 red wires in one and 2 black in the other! Works for me! 5 years ago I can bet my motors to spin on a small robot chassis, but only when the wheels are not touching the ground. Otherwise the wheels turn too slowly. I'm using 6V for the L298, external battery pack for the RPi. Any suggestions? Reply 5 years ago i use 9v they work fine :) 5 years ago Thanks Sir i really had a tough time to find tutorial on this! 6 years ago Hi. nice ible. I'm using them for steppers. Do you know if it's possible to use two boards parallel? I fried a board because my stepper is using to much current. By parallel I mean stacking one on top of another and connect all terminal pins parallel and also the jumper pins. will the current equal divide? The max is 2A amps. I need 4A. Thanks. 6 years ago on Introduction Great info on H-Bridges! Thanks for sharing!
https://www.instructables.com/Raspberry-PI-L298N-Dual-H-Bridge-DC-Motor/
CC-MAIN-2021-39
refinedweb
2,059
72.66
Archives Sometimes it's Better to Check Even Really Obvious Dialog Defaults If you only have a C# project file named e.g. "foobar.csproj" in a directory, opening it in the Visual Studio IDE (VS.Net 2003) will create a solution file for you. Either when saving or exiting you'll be asked for the file name, with the default name being e.g. "foobar.sln". After being asked for solution file names over and over again, I never looked at the default name again - I simply hit Ok as soon as the dialog popped up. Tip: You shouldn't do this if your project filename contains a dot. Because for a project named e.g. "foo.bar.csproj", the default name of the automatically generated solution file is "foo.sln", not "foo.bar.sln" as one would expect. For "X.Y.Z.csproj" the default name is "X.Y.sln", for "A.B.C.D.csproj" the default name is "A.B.C.sln", and so on. In contrast, if you create a new project from scratch, the solution name is the same as the project name, even if the project name contains a dot. Don't Underestimate the Benefits of "Trivial" Unit Tests Take a look at the following example of a trivial unit test for a property. Imagine a class MyClasswith a property Description: public class MyClass { ... private string m_strDescription; public string Description { get { return m_strDescription; } set { m_strDescription=value; } } ... } And here's the accompanying unit test: ... [ Test ] public void DescriptionTest() { MyClass obj=new MyClass(); obj.Description="Hello World"; Assert.AreEqual("Hello World", obj.Description); } ... Question: What's the benefit of such a trivial test? Is it actually worth the time spent for writing it? As I have learned over and over again in the past months, the answer is YES. First of all, knowing that the property is working correctly is better than being "really, really sure". There may not be a huge difference in the moment you write the code, but think again a couple weeks and dozens or hundreds of classes later. Maybe you change the way the property value is stored, e.g. something like this: public string Description { get { return m_objDataStorage.Items["Description"]; } set { m_objDataStorage.Items["Description"]=value; } } Without the unit test, you would test the new implementation once, then forget about it. Now another couple of weeks later some minor change in the storage code breaks this implementation (causing the getter to return a wrong value). If the class and its property is buried under layers and layers of code, you can spend a lot of time searching for this bug. With a unit test, the test simply fails, giving you a pretty good idea of what's wrong. Of course, this is just the success story... don't forget about the many, many properties that stay exactly the way they were written. The tests for these may give the warm, fuzzy feeling of doing the "right thing", but from a sceptic's point of view, they are a waste of time. One could try to save time by not writing a test until a property's code is more complicated than a simple access to a private member variable, but ask yourself how likely it is that this test will be written if either a) you are under pressure, or b) somebody else changes the code (did I just hear someone say "yeah, right"? ;-). My advice is to stop worrying about wasted time and to simply think in terms of statistics. One really nasty bug found easily can save enough time to write a lot of "unnecessary" tests. As already mentioned, my personal experience with "trivial" unit tests for properties has been pretty good. Besides the obvious benefits when refactoring, every couple of weeks a completely unexpected bug gets caught (typos and copy/paste- or search/replace mistakes that somehow get past the compiler). XML Documentation Comment Tooltips (Wouldn't it be cool, part 5) While: - Wouldn't it be cool...? (about editing XML documentation comments - Intellisense (Wouldn't it be cool, part 2) - Find (Wouldn't it be cool, part 3) - XML Documentation Comments (Wouldn't it be cool, part 4) (doc comments e.g. for interface implementations).
http://weblogs.asp.net/rweigelt/archive/2004/4
CC-MAIN-2015-14
refinedweb
709
65.12
tag:blogger.com,1999:blog-8712770457197348465.post6604050668377450608..comments2013-12-04T18:19:22.248-08:00Comments on Javarevisited: How to avoid deadlock in Java ThreadsJavin Paul all, can you explain what is the diff between ...hi all, can you explain what is the diff between race condition & deadlock? & how can deadlock be preventive?<br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-18367608698065453242013-06-22T10:26:05.683-07:002013-06-22T10:26:05.683-07:00Excellent blog - discussions here are very informa...Excellent blog - discussions here are very informative. ThanksAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-10084949537279858002013-05-01T20:33:44.424-07:002013-05-01T20:33:44.424-07:00Just remember, how critical is to write a deadlock...Just remember, how critical is to write a deadlock free concurrent application in Java, because only way to break deadlock is to restart the server. If you could create deadlock detector at runtime, which can also break deadlock as and when they happen, may be by intruppting or killing thread, without losing data invariant than, it would be just fantastic.Steynnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-55171839996841166792013-04-30T00:26:27.404-07:002013-04-30T00:26:27.404-07:00If you talk about How to detect deadlock in Java, ...If you talk about How to detect deadlock in Java, then there is a cleaver way by using ThreadMXBean, which provides a method called findDeadLockthreads() which returns id of all those thread which are in deadlock and waiting for each other to release monitor or acquire lock. here is the code :<br /><br />ThreadMXBean threadMBean = ManagementFactory.getThreadMXBean();<br /> <br />Gauravnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-37548415249988252012013-04-03T20:07:44.635-07:002013-04-03T20:07:44.635-07:00Imposing ordering is an example of avoiding deadlo...Imposing ordering is an example of avoiding deadlock by braking "Circular Wait" condition. As you know, in order for a deadlock to occur, four condition must met :<br /><br />1) Mutual Exclusion<br />2) Hold and Wait<br />3) No Preemption<br />4) Circular Wait<br /><br />Though you can break any of these conditions to avoid deadlock, it's often easy to break circular wait. But, evenRavinoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-24204395975457135412013-03-12T22:54:56.097-07:002013-03-12T22:54:56.097-07:00You can also use TIMED and POLLED locks from Reent...You can also use TIMED and POLLED locks from ReentrantLock to have a probabilistic deadlock avoidation. By using timed and polled lock acquision approach, one thread will eventually back-off, giving another thread to either acquire or release the lock. Though this is not guaranteed, it certainly helps to reduce probability of deadlock in Java program.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-48879549154386887752013-01-07T04:15:12.504-08:002013-01-07T04:15:12.504-08:00Hi Pallavi,not a problem, threads in Java are alwa...Hi Pallavi,not a problem, threads in Java are always confusing. original order of acquring lock in two methods are opposite e..g in method 1 its lock 1-->lock2 while in method 2 its lock2-->lock1 which can result in deadlock because if two thread calls method 1 and method 2 thread 1 may end of lock 1 and thread 2 end of lock 2 and both will wait for other locks, i.e. deadlock.<br /><br />ByJavin @ CyclicBarrier Example Java, Its a gr8 article.. I understood how it result...Hi,<br />Its a gr8 article.. I understood how it results in deadlock but I didnt understand the explanation behind the fix.. I mean u just changed the order. Could u explain the flow please as to how deadlock will not arise ones the order s changed Threads n Java has been my weakest area hence the unclarityPallavi S Arseny Kovalchuk your example fails ie( dead ...Dear Arseny Kovalchuk your example fails ie( dead lock not happens) if first ThreadOne Class completely got executed please check it outbala im arun,the article was very clear for me to kn...hi im arun,the article was very clear for me to know about deadlocks....thanx authorAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-83675286428616379032012-09-17T05:05:01.696-07:002012-09-17T05:05:01.696-07:00Very Right and it has solved my problem. Thanks, ...Very Right and it has solved my problem.<br /><br />Thanks,<br />AnkitAnkit Agrawal m totally impressed with answer...hope you will ...i m totally impressed with answer...hope you will provide as such answers in future...!!!Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-70912546007230346152012-07-01T04:51:06.103-07:002012-07-01T04:51:06.103-07:00Thanks Javin..! It was very helpful.Thanks Javin..!<br />It was very helpful.A.K.Nayak. event of deadlock in java application you can d...On event of deadlock in java application you can do following to investigate it:<br /><br />1) By pressing Ctrl+Break on Windows machines print thread dump in console of Java application. by analyzing this thread dump you will know cause of deadlock.<br /><br />2) On Linux and Solaris machine you can take thread dump by sending SIGQUIT or kill -3 to Java application process id. From Java 6 Fahadnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-83154584016367710212012-05-13T07:10:28.414-07:002012-05-13T07:10:28.414-07:00good one ! thanks .good one ! thanks .Ohad are right Javin, tryLock() false just indicate...You are right Javin, tryLock() false just indicates failure to acquire lock in the specified time. But if we carefully choose the time, based on the context that would only be possible in case of a deadlock (e.g. 10 or 20 minutes) than it would be unlikely that its anything other than a dead lock.<br /><br />Using kill -3 is a good way, but it requires a manual intervention at the particular timeSantosh Jha for your comment Santosh, isn't tryLock...Thanks for your comment Santosh, isn't tryLock() can return even if there is no deadlock I mean it just not been able to acquire the lock within specified period. I guess taking a thread dump by using kill -3 is another way of finding out which thread is waiting for which lock and than detecting deadlock pattern. JConsole also helps you to find deadlock in Java. Anyway tryLock() can also be Javin @ thread interview questions Javin, I came across a scenario of deadlock in ...Hi Javin,<br />I came across a scenario of deadlock in my application and after struggling for a while to detect it, I used the java.util.concurrent.locks.Lock.tryLock(time, unit) method which waits for specified time to acquire a lock and returns a boolean. I used this method for every lock in that scenario and logged the scenario when it could not acquire the lock. This helped me identify and Santosh Jha Arseny ,this discussion make it more clear....Thanks Arseny ,this discussion make it more clear. I am using blogger and seems [code] is not working, any way I will definitely try to formatting more clear.Javin @ abastract class in Java, BTW, if you are using WP, try [pre] instea...@Javin, BTW, if you are using WP, try [pre] instead of [code].<br /><br />Regards, Arseny.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-28548267381949337462011-10-19T06:03:14.956-07:002011-10-19T06:03:14.956-07:00@Javin, now it's clear, sorry :) The formattin...@Javin, now it's clear, sorry :) The formatting confused me. I recognized it as synchronized block in series, but not as nested. So, the sample is the same.<br /><br />Regards, Arseny.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-66591467915864293042011-10-19T05:31:27.654-07:002011-10-19T05:31:27.654-07:00Hi Arseny Kovalchuk, Your Explanation is quite cle...Hi Arseny Kovalchuk, Your Explanation is quite clear and indeed in this condition deadlock will occur, but in my program also synchronized blocks are nested because until its nested probability of deadlock reduces, is it formatting or am I missing something ?Javin @ java interview questions, The main difference in my sample is that t...@Javin, The main difference in my sample is that the synchronized block in the run methods are nested. And the major thing to get a deadlock is<br /><br />1. threadOne acquires the lock on monitorOne<br />2. threadTwo acquires the lock on monitorTwo<br />3. threadOne wants to get lock on monitorTwo, but !mportant! it should not release lock on monitorOne, until it gets the lock on monitorTwo<br /Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8712770457197348465.post-63358188459528514382011-10-19T04:58:46.299-07:002011-10-19T04:58:46.299-07:00Hi Arseny Kovalchuk, Thanks for your example but w...Hi Arseny Kovalchuk, Thanks for your example but will you explain what is difference between earlier example and this for everybody's understanding , what I see run() method of your example is similar to method1, Also to get guaranteed deadlock one can use Sleep to hold one thread to hold the lock and increase deadlock chance , isn't it ?Javin @ thread interview questions this example there is a very bit chance to g...Using this example there is a very bit chance to get deadlock! Just try this example :) You'll be running it for years to get a deadlock.<br /><br />I'd like to suggest following sample that produces deadlock with 100% guarantee.<br /><br /><br />public class DeadLockTest<br />{<br /><br /> static class ThreadOne implements Runnable {<br /> <br /> public void run()<br />Anonymousnoreply@blogger.com
http://javarevisited.blogspot.com/feeds/6604050668377450608/comments/default
CC-MAIN-2013-48
refinedweb
1,620
56.25
I am trying to upgrade my application from aspose.pdf.kit v 2.6.3 to v 3.0.0. However the AddImage and AddText functions of the PdfFileMend class are behaving differently in v 3.0 then they did in v 2.6.3<?xml:namespace prefix = o How the application works is it takes a PDF file and then places a white image over a place on the PDF using the AddImage Function. Next I use the Addtext function to add text ontop of the image that was previously added. This works in v 2.6.3 and the text is fully visible and is placed on top of the image. However when I run my code in v 3.0 the image is placed on top of the text and the text can not be read.
https://forum.aspose.com/t/addimage-and-addtext-in-v-3-0-0/127942
CC-MAIN-2021-10
refinedweb
139
76.82
c++ - some help controlling excel file with C++ - george_y libero.it Jan 03 2005 I need some help to generate with a C++ programm a file in Excel format (or at least in a way that all the information in it can be read by Excel) containing a graph with the data I choose. As an example I include a simple script that generates an output file with two columncontrollings of integers and I` d like to add in it a graph in the same file so that when you open the output file aaa.xls with Excel both the numrical data and graph are present. Thanks, George #include <fstream> using namespace std; int main() { double a[10]= {1,2,3,4,5,6,7,8,9,10}; double b[10]= {11,12,13,14,15,16,17,18,19,20}; ofstream fout; fout.open("aaa.xls"); int n= 0; for(n= 0; n< 10; n++) { fout<< a[n]<<"\t"<<b[n]<<endl; } fout.close(); return 0; } George Jan 03 2005
http://www.digitalmars.com/d/archives/c++/4421.html
CC-MAIN-2015-11
refinedweb
171
76.66
Cost. topic. Do it right, and you can run super lean, drive down the cost to serve and ride the cloud innovation train. But inversely do it wrong, treat public cloud like a datacentre then your costs could be significantly larger than on-premises. If you have found this post, I am going to assume you are a builder and resonate with the developer, architect persona. It is you who I want to talk to, those who are constructing the “Lego” blocks or architecture, you have a great deal of influence in the efficiency of one’s architecture. Just like a car has an economy value, there are often tradeoffs. Have a low liters per (l/100km – high MPG for my non-Australian friends), it often goes hand in hand with low performance. A better analogy is stickers on an appliance for energy efficiency How can we increase the efficiency of their architecture, without compromising other facets such as reliability, performance and operational overhead.This is what the aim of the game is There is a lot to cover, so in this multi-part blog series I am going to cover quite a lot in a hurry, many different domains and the objective here is to give you things to take away, and if you read this series and come away with one or two meaningful cost-saving ideas that you can actually execute in your environment, I’ll personally be exceptionally that you have driven cost out of your environment Yes, you should be spending less In this multi part series I we’re going to cover three main domains - Operational optimization, - Infrastructure optimization (Part 2) - Architectural optimizations (Part 2 – with a demo) With such a broad range of optimisations, hopefully something you read here will resonate with you. The idea here is to show you the money. I want to show you where the opportunities for savings exist Public cloud if full of new toys for us builders which is amazing (but it can be daunting). New levers for all of us and with the hyperscale providers releasing north of 2000 (5.x per day) updates per day, will all need to pay attention and climb this cloud maturity curve. Find those cost savings and invest in them, but the idea is to show you where you can find the money. I want to make this post easy to consume, because math and public cloud can be complex at times, services may have multiple pricing dimensions. My Disclaimer In this post I will using Microsoft Azure as my cloud platform of choice, but the principles apply to many of the major hyper scalers (Amazon Webservices and Google Cloud Platform) I will be using the Microsoft Azure region “Australia-East (Sydney)” because this is the region I most often use, and pricing will be in USD. Apply your own local region pricing but the concepts will be the same and the percentage savings will likely be almost exactly the same, but do remember, public cloud costs are often based on the cost of doing business in a specific geography. Lastly, prices are a moving target, this post is accurate as of April 2022. Cloud – A new dimension When we look through the lens of public cloud if has brought us all a new dimension of flexibility, so many more building blocks. How are you constructing them? When we as builders talk about architecture, we will often architect around a few dimensions, some more important than others, depending on your requirements. Commonly we will architect for availability, for performance, for security for function, but I would like to propose a new a new domain for architecture, and that is economy. When you’re building your systems, you need to look at the economy of your architecture because today in 2022, you have a great deal of control over it. New frameworks, new tools, new technologies, new hosting platform’s, new new new. What I mean by economy of architecture is as simple as this The same or better outcome for a lower cost I will show you how we can actually do this. I’m talking about the ability to trial and change the way a system is built during its own lifetime. As architects, developers, builders we need to move away from this model of heavy upfront design or some finger in the air predictions of what capacities a solution needs. But instead, embrace the idea of radical change during and application lifecycle funded by cost savings. Yes, there’s degrees of which you can do this depending on whether you built the system yourself or you’re using a COTS (Commercial-Off-The-Shelf-Software) but I will walk through options that you can apply to your existing stacks on what is possible. How Are You Keeping Score? Even with COTS, there are options. Have you noticed the appearance of new Lego blocks in the form of updates? Do you have a mechanism in place to be kept aware of updates? If you do, then thats great, but if you don’t, let me share with you how I do and have done this in the past. Here are two mechanisms I have used for both Azure (Into Microsoft Teams) and Amazon Web Services (Into Amazon Chime via a Python Lambda Function) but you could easily integrate to SLACK, email and beyond import boto3 import feedparser import json import decimal import requests import os import time from boto3.dynamodb.conditions import Key, Attr from botocore.exceptions import ClientError # Helper class to convert a DynamoDB item to JSON. class DecimalEncoder(json.JSONEncoder): def default(self, o): if isinstance(o, decimal.Decimal): if o % 1 > 0: return float(o) else: return int(o) return super(DecimalEncoder, self).default(o) def process_entry(RSSEntry, table, webhook_url): try: print 'processing entry: ' + RSSEntry['id'] get_response = table.get_item( Key={ 'entryID': RSSEntry['id'] } ) except ClientError as e: print(e.response['Error']['Message']) else: try: result = get_response['Item'] print 'key found ' + RSSEntry['id'] + ' No further action' except KeyError: print 'no key found, creating entry' write_entry_to_hook(RSSEntry, webhook_url) put_response = table.put_item( Item={ 'entryID': RSSEntry['id'] }) print 'writing entry to dynamo: ' + RSSEntry['id'] + ' ' + RSSEntry['title'] def write_entry_to_hook(RSSEntry, webhook_url): print 'writing to webhook: ' + RSSEntry['id'] hook_data = {'Content': RSSEntry['title'] + ' ' + RSSEntry['link']} print hook_data url_response = requests.post(webhook_url, data=json.dumps(hook_data), headers={'Content-Type': 'application/json'}) print url_response time.sleep(2) def lambda_handler(event, context): webhook_url = os.environ['webhook_url'] rss_url = os.environ['rss_url'] dynamodb = boto3.resource("dynamodb", region_name='ap-southeast-2') table = dynamodb.Table('WhatsNewFeed') d = feedparser.parse(rss_url) for x in d['entries']: process_entry(x,table,webhook_url) #entryID = d['entries'][0]['id'] lambda_handler('test', 'test') The Basics Show me the money I say. I will show you the money, and as a bit of a teaser here are some of the percentage savings you will be able to make. I am not talking about 1 or 2%’ers, these are sizable chunks you can remove from your bill. I don’t want to frame this as a basic post, again I am about the builders. but then again there is the obvious stuff. This is the ‘Do Not Pass Go, Do Not Collect 200 Dollars’ advice I am talking about fundamentals in cloud and also wider to IT and Software Development You can’t improve what you can’t measure. So, I ask you all, do you know what your per transaction cost is? We are talking fundamentals in cloud here. What is your per transaction cost? Do you know what cost to serve is? If you do, well done, but if you don’t then how can you improve? Measuring The Cost To Serve Here is the first piece of wisdom. We need to figure this out. I will give you three different approaches - Beginner - Simply do it by hand sit, down with Cost Analytics/Explorer and figure out your transaction rates and do some rough calculations and either be pleasantly surprised or really shocked depending on what comes back you. - Intermediate - You gather these transaction volumes in real time from your systems. You may have instrumented using an APM (Application Performance Management) such as Azure Application Insights, New Relic, AWS X-Ray, Elastic APM but you still calculate this by hand - Advanced - You monitor in real time, but your calculations are also in real time. Leverage a platform such as Azure Event Hubs, Amazon Kinesis or Apache Kafka and derive this real time. When you have this information, you can ask the question. What’s my average transaction flow versus my average infrastructure cost? Then you can put it up in the corner and say, “hey developers, we need to optimise” This becomes your measure, and you need to make this relevant to your business stakeholders Operational Optimisation How are you paying for public cloud? Using a credit card in a PAYG (Pay As You Go) model might be great to get you started but for both Microsoft Azure and Amazon Web Services it can be an expensive way to pay. I am going to list a few bullet points for you to investigate - Enterprise Agreement (Azure) - Reserved Instances (Azure / AWS) - Savings Plan (AWS) - SPOT Instances (Azure / AWS) You need to move away from paying on demand because this is the most expensive way to leverage public cloud. These savings can range from 15% through to 90% in comparison to on-demand. Typically, discounts apply either for commitment, giving cloud providers certainty or in the case of SPOT, your ability to leverage idle resources. ‘Reserved Instances’ and ‘Savings Plans’, whilst not groundbreaking, they allow you to minimise the cost of traditional architectures. My next piece of wisdom is to have a ‘Reserved Instance / Savings Plan’ % target. Some of the best organisations I have seen in the past have had up to 80% of their IaaS resources covered by ‘Reserved Instances / Savings Plans’. If you don’t have a target, I suggest you look into this. But before you make a purchase understand your workload. Understand the ebbs and flows of what is baseline load. The rule of thumb is to assess a workload for 3 months, during the time right size accordingly. Leverage Azure Monitor / Amazon CloudWatch with a combination of Azure Advisor / AWS Trusted Advisor. Tune your application. Optimise The Humans – High Value vs. Low Value Operational optimization. This is an interesting one because, how much time does one really think about one’s labor cost. You hire people, they do ‘stuff’ for you I pay them they come in. The thing is, cloud practioners cost a lot of money To prove my point, I did a bit of an international investigation. What does it cost for a DBA (Database Administrator) per hour converted to US dollars around the world? This just the median DBA and none of us in here would ever work with just a median DBA, so we have established that people have a cost. But let’s think about what the actual meaning of this cost is. Let’s look through the lens of something DBA’s do so often, a minor database engine upgrade. This is important as we should be upgrading our databases on a regular basis (security, features, performance). But let’s look at the Azure MySQL / Amazon RDS which are both managed services for running relational databases in the cloud vs. running a database engine on IaaS. Managed services, whilst on paper may be more expensive the administrative cost of performing undifferentiated heavy lifting is a far greater cost. I am saving time and i am going to get receive logs and an audit trail that I can attach it my change record for auditability. You may say to me, well Shane we’re going spend that money anyway. I’ve got a high these people are not going away. I would say that’s great, but you could invest that particular chunk of time into something else of greater business value like maybe tuning your database (query plans, index optimization). This is a better use of a DBA time And with that, stay tuned to part 2 where we get real. Summary Public cloud brings a magnitude of opportunities to builders and architects. Are you looking for the pots of gold? They are there. Public cloud provides you a raft of new levers that you can pull, twist, pull to architect for the new world. Climb the cloud maturity curve and achieve the same or better outcome at a lower cost. Architectures can evolve, but it needs to make sense. What is the cost to change? Join me in part 2 as we get deeper into Infrastructure and Architectural optimisations you can make. Shane Baldacchino 2 thoughts on “Cost Optimisation In The Cloud – Practical Design Steps For Architects and Developers – Part 1” This is awesome content Shane. It has Lego, Grafana, MySQL – everything! Thanks for sharing this. Cant wait for follow up on this topic. Thanks Vikas, Apologies for the late reply, it got caught in the many many spam comments in WordPress. Thanks for the message and yes, I better cracking on part 2
https://automation.baldacchino.net/?p=1429
CC-MAIN-2022-33
refinedweb
2,199
61.16
Creating useful, reliable tests requires more than just recording sequences and playing them back. You can fill a test-suite with lots of sequences in a short time, but you are bound to lose track of what you've got sooner or later if you do not organize your tests in some logical structure. QF-Test provides you with a number of structure elements to achieve this. Before you start recording sequences and put them into a structure make sure that you The essential prerequisite of getting the components right has been discussed in chapter 5. Here we are going to concentrate on structuring the actual test-sets, test-cases, test-steps, sequences, events, checks, etc. QF-Test provides structure elements on different levels: The QF-Test files for saving the tests and components in the file directory. These can be bundled in projects. The 'Test-suite' has a set structure starting with the testing section that can hold any number of 'Test-set' nodes, which in turn can have any number of 'Test-case' nodes or more 'Test-sets'. Next comes the 'Procedures' section, where you can place any number of 'Procedure' nodes. QF-Test provides 'Package' nodes as structure element in this section. A package node can hold any number of procedure nodes or more package nodes. After that you will find the 'Extras' where you place any type of node and try out tests before moving them to the testing section. The last section, 'Windows and components', is reserved for the components referenced by the tests. QF-Test provides a number of structure elements for the tests themselves like 'Test-case' and 'Test-set' nodes as well as 'Setup' and 'Cleanup' nodes for setting up the preconditions for the tests, cleaning up after the test and error handling. The 'Test-set' and 'Test-case' nodes provide a small-scale, pragmatic form of test management right inside QF-Test. Their main feature is the smart dependency management described in 'Dependency' nodes that allows 'Test-cases' to be implemented completely independent from each other. With properly written 'Dependencies', cleanup of the SUT for previously executed tests is handled automatically along with the setup for the next test and all error handling. Conceptually a 'Test-case' node represents a single elementary test case. As such it is the main link between test planning, execution and result analysis. With the help of 'Dependencies', 'Test-cases' can be isolated from each other so that they can be run in any arbitrary order. QF-Test automatically takes care of the necessary test setup. Cleanup is also automatic and will be performed only when necessary in order to minimize overhead in the transition from one test to the next. This enables things like running subsets of functional test-suites as build tests or retesting only failed 'Test-cases'. 'Test-sets' basically are bundles of 'Test-cases' that belong together and typically have similar requirements for setup and cleanup. 'Test-sets' can be nested. The whole structure of 'Test-sets' and 'Test-cases' is very similar to 'Package' and 'Procedure' nodes. The 'Test-suite' root node can be considered a special kind of 'Test-set'. 'Test-suite', 'Test-set' and 'Test-case' nodes can be called from other places using a 'Test call' node. That way, tests that run only a subset of other tests can easily be created and managed. 'Test call' nodes are allowed everywhere, but should not be executed from within a 'Test-case' node because that would break the atomicity of a 'Test-case' from the report's point of view. A warning is issued if 'Test-case' execution is nested. As both 'Test-sets' and 'Test-case' can be called via a 'Test call' node they each have a set of default parameters similar to those of a 'Procedure'. These will be bound on the secondary variable stack and can be overridden in the 'Test call' node. A 'Test-case' has an additional set of variable bindings. These are direct bindings for the primary variable stack that will be defined during the execution of the 'Test-case' and cannot be overridden via a 'Test call' node or the command line parameter -variable <name>=<value>. Primary and secondary variable stack are described in section 7.1. The list of 'Characteristic variables' is a set of names of variables that are part of the characteristics of the test for data-driven testing. Each execution of the 'Test-case' with a different set of values for these variables is considered a separate test case. The expanded values of these variables are shown in the run-log and report for improved error analysis. Another useful attribute is the 'Condition' which is similar to the 'Condition' of an 'If' node. If the 'Condition' is not empty, the test will only be executed if the expression evaluates to true. Otherwise the test will be reported as skipped. Sometimes a 'Test-case' is expected to fail for a certain period of time e.g. when it is created prior to the implementation of the respective feature or before a bug-fix is available in the SUT. The 'Expected to fail if...' attribute allows marking such 'Test-cases' so they are counted separately and don't influence the percentage error statistics. The primary building block of a test are the 'Sequence' and 'Test-step' nodes which execute their child nodes one by one in the order as they appear. They are used to structure the child nodes of a 'Test-case'. The difference between 'Sequence' and 'Test-step' nodes is that 'Test-step' nodes will show up in the report whereas 'Sequences' will not. Since it is in the nature of testing that tests may fail from time to time it is crucial to have structure elements that will help you set up a defined initial state for a test. 'Setup' and 'Cleanup' nodes are for simple cases and are inserted as child nodes of 'Test-case' nodes. However, in most cases 'Dependency' nodes, that contain 'Setup' and 'Cleanup' nodes, will prove far more efficient. 'Test-case' nodes with well designed 'Setup' and 'Cleanup' nodes have the following properties important to successful testing: In the simplest case exactly the same initial condition is required by all the 'Test-case' nodes of a 'Test-set'. This can be implemented via the following structure: In the run-log you can see that for each 'Test-case' node first the 'Setup' node and then the 'Cleanup' node is run: In this simple example the cleanup is done in any case, even if the next test could be executed with the state the previous test left the SUT in. QF-Test provides a more comprehensive structure for setting up the SUT and handling cleanup much more efficiently, and even including error handling. This is explained in chapter section 9.6 in detail. In a way, writing good tests is a little like programming. After mastering the initial steps, tests and source code alike tend to proliferate. Things work fine until some building block that was taken for granted changes. Without a proper structure, programs as well as tests tend to collapse back upon themselves at this point as the effort of adapting them to the new situation is greater than the one needed for recreating them from scratch. The key to avoiding this kind of problem is reuse or avoidance of redundancy. Generating redundancy is one of the main dangers of relying too much on recording alone. To give an example, imagine you are recording various sequences to interact with the components in a dialog. To keep these sequences independent of each other, you start each one by opening the dialog and finish it by closing the dialog again. This is good thinking, but it creates redundancy because multiple copies of the events needed to open and close the dialog are contained in these sequences. Imagine what happens if the SUT changes in a way that invalidates these sequences. Let's say a little confirmation window is suddenly shown before the dialog is actually closed. Now you need to go through the whole suite, locate all of the sequences that close the dialog and change them accommodate the confirmation window. Pure horror. To stress the analogy again, this kind of programming style is called Spaghetti Programming and it leads to the same kind of maintenance problems. These can be avoided by collecting the identical pieces in one place and referring to them wherever they are needed. Then the modifications required to adapt to a change like the one described above are restricted to this place only. QF-Test comes with a set of nodes that help to achieve this kind of modularization, namely the 'Procedure', 'Procedure call' and 'Package' nodes. A 'Procedure' is similar to a 'Sequence' except that its 'Name' attribute is a handle by which a 'Procedure call' node can refer to it. When a 'Procedure call' is executed, the 'Procedure' it refers to is looked up and execution continues there. Once the last child node of the 'Procedure' has finished, the 'Procedure call' has completed as well. 'Packages' are just a way to give even more structure to 'Procedures'. A hierarchy of 'Packages' and 'Procedures', rooted at the special 'Procedures' node, is used to group sets of 'Procedures' with a common context together and to separate them from other 'Procedures' used in different areas. A 'Procedure' that always does exactly the same, no matter where it is called from, is only marginally useful. To expand on the above example, let's say we want to extend the 'Procedure' that opens the dialog to also set some initial values in some of its fields. Of course we don't want to have these initial values hard-coded in the 'Procedure' node, but want to specify them when we call the 'Procedure' to get different values in different contexts. To that end, parameters can be defined for the 'Procedure'. When the 'Procedure call' is executed, it specifies the actual values for these parameters during this run. How all of this works is explained in Variables. Also please take a look at the detailed explanation for the 'Procedure' and 'Procedure call' nodes for a better understanding of how these complement each other. A test-suite library with a set of commonly useful 'Procedures' is provided with QF-Test under the name qfs.qft. An entire chapter of the Tutorial is devoted to this library and section 23.1 explains how to include it in your test-suites. If you work with several test-suite libraries you might face a situation, where you define reusable test-steps or sequences, which you only want to use within a dedicated test-suite. If you want to create such local 'Procedures', you can put a '_' as first sign of the procedure's name. This marks a 'Procedure' as test-suite local. A call of a local 'Procedure' can only be inserted within the test-suite, where it is defined. You can use the same concept for local 'Packages'. If you call 'Procedures' from other 'Procedures', it could be convenient not to specify the full procedure name all the time. So called 'relative' procedure calls can only be added to a 'Package', which has the 'Border for relative calls' (see 'Border for relative calls') attribute specified. The structure of that call follows the concept below: As you can see each dot stands for one level. So calling a 'Procedure' two levels higher requires three dots (Current level also requires a dot.) As you should organize your tests in separate test steps, which are ideally the same like QF-Test's procedures, QF-Test offers several ways to insert those 'Procedure call' nodes: This approach is also valid for inserting 'Dependency reference' nodes, except the keyboard shortcut. You can create parameters for a 'Procedure', 'Dependency' or 'Test-case' automatically via the menu »Operations«-»Parameterize node«. The parameter details dialog allows you to define for which actions you want to create parameters, e.g. only text-inputs or check nodes. This transformation is very useful for developing procedures immediately after recording! Under 'Extras' you can convert a recorded 'Sequence' node into a 'Procedure' and move that to the 'Procedures' node. 3.1+ If you transform a 'Sequence' under 'Test-cases' QF-Test automatically creates a 'Procedure' node and inserts a 'Procedure call' to the previous location of the transformed node. Dependencies are a powerful and optimized concept for handling pre- and post-conditions. They are indispensable when running tests in the QF-Test Daemon mode mode. They basically work the following way: Test-cases as well as other dependencies can make use of 'Dependency' nodes placed in the 'Procedures' section via 'Dependency reference' nodes. Therefore, 'Setup' and 'Cleanup' nodes placed in a 'Dependency' node can be used by various test-cases - in contrast to those placed directly in 'Test-case' or 'Test-set' nodes. In order to understand the concept of 'Dependency' nodes it might be helpful to have a look at how a manual tester would proceed: He would do the setup for the first test case and then run it. In case of errors he may want to run special error cleanup routines. After that he would first check the requirements of the second test case. Only then would he do any cleanup. And he would only clean up as much as is necessary. Next he would check that the SUT still meets all preconditions required by the next test case and if not execute the necessary steps. In case the previous test case failed badly he might need to clean up the SUT completely before being able to set up the initial condition for the second test case. This is exactly what you can implement using QF-Test 'Dependencies'. 'Dependencies' give an answer to the disadvantages of the classical 'Setup' and 'Cleanup' nodes where 'Setup' nodes can only be nested by nesting test-sets and where 'Cleanup' nodes will be executed in any case, both of which is not very efficient. Moreover, 'Dependency' nodes provide structure elements for handling errors and exceptions. Quite a number of the provided sample test-suites make use of 'Dependencies', e.g.: doc/tutorialnamed dependencies.qft. You will find a detailed description in the tutorial in chapter 16. demo/carconfignamed carconfig_en.qft, showing a realistic example. swt_addressbook.qft, with an example for SWT users demo/eclipsenamed eclipse.qft, containing nested 'Dependencies'. datadriver.qftin doc/tutorialalso uses 'Dependencies'. Single-stepping through these suites in the debugger, looking at the variable bindings and examining the run-logs should help you to familiarize yourself with this feature. Please take care to store modified test-suites in a project-related folder. You can define 'Dependencies' in two places: One 'Dependency' should deal with one precondition. Then you can reduce the test overhead generated by cleanup activities. In case a 'Dependency' itself relies on preconditions these should be implemented in separate 'Dependency' nodes. 'Dependencies' can either be inherited from a parent node or referred to explicitly via 'Dependency reference' nodes. The implementation of the actual pre- and post-conditions is done in the 'Setup' and 'Cleanup' nodes of the 'Dependency'. In case a 'Test-set' or 'Test-case' node has a 'Dependency' node as well as 'Setup' and 'Cleanup' nodes the 'Dependency' will be executed first. 'Setup' and 'Cleanup' nodes have no influence on the dependency stack. The execution of a 'Dependency' has three phases: The examples used in this chapter all refer to tests with the following preconditions and cleanup activities: Before executing a 'Test-case' node QF-Test checks whether it has a 'Dependency' node of its own and/or inherits one from its parent nodes. In that case QF-Test checks whether the 'Dependency' node itself relies on other dependencies. Based on this analysis QF-Test generates a list of the dependencies required. This is done in step 1 of the example below. Next, QF-Test checks if previous tests have already executed dependencies. If so, QF-Test checks if it has to execute any 'Cleanup' nodes. After that QF-Test goes through all the setup nodes, starting with the most basic ones. The name of each 'Dependency' executed is noted down in a list called dependency stack. See step 2 of below example. Test of application module 1. First test-case to be executed. In the run-log you can see exactly what QF-Test did: After executing the test-case the application remains in the condition the last test-case left it in. Only after analyzing the dependencies of the next test-case 'Cleanup' nodes might be run and the respective 'Dependency' be deleted from the dependency stack. When 'Cleanup' nodes need to be run they are executed in reverse order to the 'Setup' nodes. After maybe clearing up dependencies no longer needed the 'Setup' nodes of all required 'Dependencies' are executed. Just like a manual tester will check that all requirements for the next test-case are fulfilled QF-Test will do the same. A manual tester may not be conscious of checking the basic requirements. However, if he notices that the last test-case left the application in a very bad state like a deadlock, he will probably kill the process if nothing else helped and start it again. To this end QF-Test explicitly runs all 'Setup' nodes. These should be implemented in a way that they first check if the application is already in the required state and just in case not run the whole 'Setup' node. 'Setups' nodes should first check if the required condition already exists before actually executing the node. 'Cleanup' nodes should first check if the requested cleanup action (e.g. closing a dialog) has already been performed. Also they should be programmed in such a way that they are in grade of clearing up error states of the application (e.g. error messages) so that a failed test-case will not affect the following ones. Test a dialog in application module 2 You can see in the run-log that the cleanup was done: Values of certain variables may determine whether a dependency has to be cleared up and the setup re-executed, like the user name for dependency B 'Login'. These variables are called 'Characteristic variables'. The values of the 'Characteristic variables' are always taken into account when comparing dependency stacks. Two 'Dependencies' on the stack are only considered identical if the values of all 'Characteristic variables' from the previous and the current run are equivalent. Consequently it is also possible for a 'Dependency' to directly or indirectly refer to the same base 'Dependency' with different values for its 'Characteristic variables'. In that case the base 'Dependency' will appear multiple times in the linearized dependency stack. Furthermore, QF-Test stores the values of the 'Characteristic variables' during execution of the 'Setup' of a 'Dependency'. When the 'Dependency' is rolled back, i.e. its 'Cleanup' node is executed, QF-Test will ensure that these variables are bound to the same value as during execution of the 'Setup'. This ensures that a completely unrelated 'Test-case' with conflicting variable definitions can be executed without interfering with the execution of the 'Cleanup' nodes during 'Dependency' rollback. Consider for example the commonly used "client" variable for the name of an SUT client. If a set of tests for one SUT has been run and the next test will need a different SUT with a different name, the "client" variable will be changed. However, the 'Cleanup' node for the previous SUT must still refer to the old value of "client", otherwise it wouldn't be able to terminate the SUT client. This is taken care of automatically as long as "client" was added to the list of 'Characteristic variables'. In the run-log you can see the values of the 'Characteristic variables' behind the respective 'Dependency': Other examples for 'Characteristic variables' are JDK versions when the SUT needs to be tested for various JDK versions or the browser name with web applications. In our example these would be specified as 'Characteristic variables' for 'Dependency' A. In some use cases it may be necessary to execute the 'Cleanup' node of a 'Dependency' after each 'Test-case'. Then you should set the attribute 'Forced cleanup'. If 'Forced cleanup' is activated for a 'Dependency' node on the list of dependencies the 'Cleanup' node of this and maybe of subsequent 'Dependencies' will be executed. In this example the test logic requires module 2 to be stopped after test execution. The attribute 'Forced cleanup' is activated for 'Dependency' D. In our example the 'Cleanup' nodes of 'Dependencies' E (close dialog) and D (stop modul) would be executed after each 'Test-case'. QF-Test rolls back 'Dependencies' depending on the needs of the 'Test-cases'. If you want to clear the list of dependencies explicitly there are two ways to do it: When a 'Test-case' does not use 'Dependencies' the list of dependencies remains untouched, i.e. no 'Cleanup' nodes are executed. Another thing that is just grand about 'Dependencies' is the convenient way that errors can be escalated without any additional effort. Let's again consider the example from the previous section after the first dependency stack has been initialized to A-B-C (Application started, user logged in, module one loaded) and the 'Setups' have been run. Now what happens if the SUT has a really bad fault, like going into a deadlock and not reacting to user input any longer? When a 'Cleanup' node fails during rollback of the dependencies stack, QF-Test will roll back an additional 'Dependency' and another one if that fails again and so on until the stack has been cleared. Similarly, if one of the 'Setups' fails, an additional 'Dependency' is rolled back and the execution of the 'Setups' started from scratch. For this to work it is very important to write 'Cleanup' sequences in a way that ensures that either the desired state is reached or that an exception is thrown and that there is a more basic dependency with a more encompassing 'Cleanup'. For example, if the 'Cleanup' node for the SUT 'Dependency' just tries to cleanly shut down the SUT through its File->Exit menu without exception handling and further safeguards, an exception in that sequence will prevent the SUT from being terminated and possibly interfere with all subsequent tests. Instead, the shutdown should be wrapped in a Try/Catch with a Finally node that checks that the SUT is really dead and if not, kills the process as a last resort. With good error handling in place, 'Test-cases' will rarely interfere with each other even in case of really bad errors. This helps avoid losing a whole night's worth of test-runs just because of a single error. Besides supporting automatic escalation of errors a 'Dependency' can also act as an error or exception handler for the tests that depend on it. 'Catch' nodes, which can be placed at the end of a 'Dependency', are used to catch and handle exceptions thrown during a test. Exceptions thus caught will still be reported as exceptions in the run-log and the report, but they will not interfere with subsequent tests or even abort the whole test-run. An 'Error handler' node is another special node that may be added to a 'Dependency' after the 'Cleanup' and before the 'Catch' nodes. It will be executed whenever the result of a 'Test-case' is "error". In case of an exception, the 'Error handler' node is not executed automatically because that might only cause more problems and even interfere with the exception handling, depending on the kind of exception. To do similar things for errors and exception, implement the actual handler as a 'Procedure' and call it from the 'Error handler' and the 'Catch' node. 'Error handlers' are useful for capturing and saving miscellaneous states that are not automatically provided by QF-Test. For example, you may want to create copies of temporary files created during execution of your SUT that may hold information pertaining to the error. Only the topmost 'Error handler' that is found on the dependency stack is executed, i.e. if in a dependency stack of [A,B,C,D] both A and C have 'Error handlers', only C's 'Error handler' is run. Otherwise it would be difficult to modify the error handling of the more basic 'Dependency' A in the more specialized 'Dependency' C. To reuse A's error handling code in C, implement it as a 'Procedure'. Note You might be interested in reading this section in case you want to run several SUTs at the same time where you do not want the 'Dependency' node for a test on one of the SUTs to trigger cleanup actions for another SUT. Otherwise feel free to skip it. A typical use case would be the test of whole process chains over several applications. Consider the following situation: Sales representatives enter data for offers via a web application into a database at headquarters. There, the offers will be completed, printed and posted. A copy of each printed offer will be saved in a document management system (DMS). In above example two sales representatives (UserA and UserB) enter offers and two different persons (UserC and UserD) process the offers at headquarters. Then the offers will be checked in the document management system. Since you do not want the dependencies of the test-cases to interfere with one another you need to add a suitable name in the 'Dependency namespace' attribute of each 'Dependency reference' node. After running the test-set you can see in the run-log that a dependencies stack was set up in the name space 'data entry' for the first test-case: A dependencies stack is set up in the name space 'database' for the second test-case. The dependencies stack in the name space 'data entry' remains unheeded. Looking at the applications, this means the database is started whereas the application for data entry is left as it is. A dependencies stack is set up in the name space 'DMS' for the third test-case. The dependencies stacks in the name spaces 'data entry' and 'database' remain unheeded. Looking at the applications, this means the document management system is started whereas the other two applications are left as they are. In test-case number four the required dependencies are checked against the ones on the dependencies stack in the name space 'data entry' of the first test-case. The dependencies stacks in the other two name spaces remain unheeded. Looking at the applications, this means User A is logged off, User B is logged into the data entry application and the other two applications are left as they are. In test-case number five the required dependencies are checked against the ones on the dependencies stack in the name space 'database' of the second test-case. The dependencies stacks in the other two name spaces remain unheeded. Looking at the applications, this means User C is logged off, User D is logged into the database application and again the other two applications are left as they are. In the last test-case the required dependencies are checked against the ones on the dependencies stack in the name space 'DMS' of the third test-case. The dependencies stacks in the other two name spaces remain unheeded. Looking at the applications, this means no clean up action has to be done on the DMS. The other two applications are left as they are, anyway. Like with any programming-related task it is important for successful test-automation to properly document your efforts. Otherwise there is a good chance (some might say a certainty) that you will lose the overview over what you have done so far and start re-implementing things or miss out tests that should have been automated. Proper documentation will be invaluable when working through a run-log, trying to understand the cause of a failed test. It will also greatly improve the readability of test reports. Based on the 'Comment' attributes of 'Test-set', 'Test-case', 'Package' and 'Procedure' nodes, QF-Test can create a set of comprehensive HTML documents that will make all required information readily available. The various kinds of documents and the methods to create them are explained in detail in chapter 21.
https://www.qfs.de/fr/qf-test-manuel/lc/manual-en-user_organization.html
CC-MAIN-2021-10
refinedweb
4,756
51.89
Learn more about these different git repos. Other Git URLs 555e647 @@ -4227,8 +4227,28 @@ def get_next_release(build_info): - """find the last successful or deleted build of this N-V. If building is - specified, skip also builds in progress""" + """ + Find the next release for a package's version. + + This method searches the latest building, successful, or deleted build and + returns the "next" release value for that version. + Examples: + None becomes "1" + "123" becomes "124" + "123.el8" becomes "124.el8" + "123.snapshot.456" becomes "123.snapshot.457" + All other formats will raise koji.BuildError. + :param dict build_info: a dict with two keys: a package "name" and + "version" of the builds to search. For example, + {"name": "bash", "version": "4.4.19"} + :returns: a release string for this package, for example "15.el8". + :raises: BuildError if the latest build uses a release value that Koji + does not know how to increment. values = { 'name': build_info['name'], 'version': build_info['version'], Update the getNextRelease RPC docstring to describe what this method does and what the parameter and return values are. getNextRelease I would also expand it a bit on expected release formats. Metadata Update from @tkopecek: - Pull-request tagged with: doc issue #2708 Possible addition: rebased onto 555e647 Thanks! Those regexes are difficult for me to read at a glance. I updated this PR with your human-readable example values. This should make maintenance easier because we won't have to copy-and-paste regexes between the code and the doc, and interested users can read the hub's source code to find out more about the implementation for now. :thumbsup: Commit 808d3ee fixes this pull-request Pull-Request has been merged by tkopecek Update the getNextReleaseRPC docstring to describe what this method does and what the parameter and return values are.
https://pagure.io/koji/pull-request/2706
CC-MAIN-2022-21
refinedweb
295
65.52
Persistent Homology (Part 2) The time has come for us to finally start coding. Generally my posts are very practical and involve coding right away, but topological data analysis can't be simplified very much, one really must understand the underlying mathematics to make any progress. We're going to learn how to build a VR complex from simulated data that we sample from a circle (naturally) embedded in $\mathbb R^2$. So we're going to randomly sample points from this shape and pretend it's our raw point cloud data. Many real data are generated by cyclical processes, so it's not an unrealistic exercise. Using our point cloud data, we will build a Vietoris-Rips simplicial complex as described (in math terms) above. Then we'll have to develop some more mathematics to determine the homology groups of the complex. Recall the parametric form of generating the point set for a circle is as follows: $x=a+r\cos(\theta),$ $y=b+r\sin(\theta)$ where $(a,b)$ is the center point of the circle, $\theta$ is a parameter from $0 \text{ to } 2\pi$, and $r$ is the radius. The following code will generate the discrete points of sampled circle and graph it.() Okay, let's stochastically sample from this (somewhat) perfect circle, basically add some jitteriness. x2 = np.random.uniform(-0.75,0.75,n) + x #add some "jitteriness" to the points y2 = np.random.uniform(-0.75,0.75,n) + y fig, ax = plt.subplots() ax.scatter(x2,y2) plt.show() As you can tell, the generated points look "circular" as in there is a clear loop with a hole, so we want our simplicial complex to capture that property. Let's break down the construction of the VR complex into digestable steps: - Define a distance function $d(a,b) = \sqrt{(a_1-b_1)^2+(a_2-b_2)^2}$ (Euclidian distance metric) - Establish the $\epsilon$ parameter for constructing a VR complex - Create a collection (python list, closest thing to a mathematical set) of the point cloud data, which will be the 0-simplices of the complex. - Scan through each pair of points, calculate the distance between the points. If the pairwise distance between points is $< \epsilon$, we add an edge between those points. This will generate a 1-complex (a graph). - Once we've calculated all pairwise distances and have a (undirected) graph, we can iterate through each vertex, identify its neighbors (points to which it has an egde) and attempt to build higher-dimensional simplices incrementally (e.g. from our 1-complex (graph), add all 2-simplices, then add all 3-simplices, etc) There are many algorithms for creating a simplicial complex from data (and there are many other types of simplicial complexes besides the vietoris-rips complex). Unfortunately, to my knowledge, there are no polynomial-time algorithms for creating a full (not downsampled) simplicial complex from point data. So no matter what, once we start dealing with really big data sets, building the complex will become computationally expensive (even prohibitive). A lot more work needs to be done in this area. We will be using the algorithm as described in "Fast Construction of the Vietoris-Rips Complex" by Afra Zomorodian. This algorithm operates in two major steps. Construct the neighborhood graph of point set data. The neighborhood graph is an undirected weighted graph $(G,w)$ where $G = (V,E), V$ is the node/vertex set and $E$ is the edge set, and $w : E \rightarrow \mathbb R$ ($w$ is a function mapping each edge in $E$ to a real number, it's weight). Recall our edges are created by connecting points that are within some defined distance of each other (given by a parameter $\epsilon$). Specifically, $$E_{\epsilon} = \{\{u,v\} \mid d(u,v) \leq \epsilon, u \neq v \in V\}$$ where $d(u,v)$ is the metric/distance function for two points $u,v \in V$. And the weight function simply assigns each edge a weight which is equal to the distance between the pair of points in the edge. That is, $w(\{u,v\}) = d(u,v), \forall\{u,v\} \in E_{\epsilon}(V)$ Peform a Vietoris-Rips expansion on the neighborhood graph from step 1. Given a neighborhood graph $(G,w)$, the weight-filtered (will explain this soon) Vietoris-Rips complex $(R(G), w)$ (where $R$ is VR complex) is given by: $$R(G) = V \cup E \cup \{ \sigma \mid \left ({\sigma}\above 0pt {2} \right ) \subseteq E \} , $$ For $\sigma \in R(G) \\$, $$ w(\sigma) = \left\{ \begin{array}{ll} 0, & \sigma = \{v\},v \in V, \\ w(\{u,v\}), & \sigma = \{u,v\} \in E \\ \displaystyle \operatorname*{max}_{\rm \tau \ \subset \ \sigma} w(\tau), & otherwise. \end{array} \right\} $$ Okay what does that mean? Well, in this simple example, we want to get from our neighborhood graph (left) to our Vietoris-Rips complex (right): So the math above is saying that our Vietoris-Rips complex is the set that is the union of all the vertices and edges in our neighborhood graph (which takes us to a 1-complex), and the union of all simplices $\sigma$ (remember $\sigma$ is just a set of vertices) where each possible combination of 2 vertices in $\sigma$ is in $E$ (hence the $\left ({\sigma}\above 0pt {2} \right ) \subseteq E$ part). The next part defines the weight function for each simplex in our VR complex, from individual 0-simplices (vertices) to the highest dimensional simplex. If the simplex is a 0-simplex (just a vertex), then the weight of that simplex is 0. If the simplex is a 1-simplex (an edge), then the weight is the distance (defined by our distance function) between those two vertices in teh edge. If the simplex is a higher-dimensional simplex, like a 2-simplex (triangle), then the weight is the weight of the longest edge in that simplex. Before we get to computing the VR complex for our "circle" data from earlier, let's just do a sanity check with the simple simplex shown above. We'll embed the vertices in $\mathbb R^2$ and then attempt to build the neighborhood graph first. raw_data = np.array([[0,2],[2,2],[1,0],[1.5,-3.0]]) #embedded 3 vertices in R^2 plt.axis([-1,3,-4,3]) plt.scatter(raw_data[:,0],raw_data[:,1]) #plotting just for clarity for i, txt in enumerate(raw_data): plt.annotate(i, (raw_data[i][0]+0.05, raw_data[i][1])) #add labels plt.show() We'll be representing each vertex in our simplicial complex by the index number in the original data array. For example, the point [0,2] shows up first in our data array, so we reference it in our simplicial complex as simply point [0]. #Build neighorbood graph nodes = [x for x in range(raw_data.shape[0])] #initialize node set, reference indices from original data array edges = [] #initialize empty edge array weights = [] #initialize weight array, stores the weight (which in this case is the distance) for each edge eps = 3.1 #epsilon distance parameter = np.linalg.norm(a - b) #euclidian distance metric if dist <= eps: edges.append({i,j+i}) #add edge weights.append([len(edges)-1,dist]) #store index and weight print("Nodes: " , nodes) print("Edges: " , edges) print("Weights: ", weights) Nodes: [0, 1, 2, 3] Edges: [{0, 1}, {0, 2}, {1, 2}, {2, 3}] Weights: [[0, 2.0], [1, 2.2360679774997898], [2, 2.2360679774997898], [3, 3.0413812651491097]] Perfect. Now we have a node set, edge set, and a weights set that all constitute our neighborhood graph (G,$w$). Our next task is to use the neighborhood graph to start building up the higher-dimensional simplices. In this case we'll only have one additional 2-simplex (triangle). We'll need to setup a some basic functions. def lower_nbrs(nodeSet, edgeSet, node): return {x for x in nodeSet if {x,node} in edgeSet and node > x} def rips(nodes, edges, k): VRcomplex = [{n} for n in nodes] for e in edges: #add 1-simplices (edges) VRcomplex.append(e) for i in range(k): for simplex in [x for x in VRcomplex if len(x)==i+2]: #skip 0-simplices #for each u in simplex nbrs = set.intersection(*[lower_nbrs(nodes, edges, z) for z in simplex]) for nbr in nbrs: VRcomplex.append(set.union(simplex,{nbr})) return VRcomplex Great, let's try it out and see if it works. We're explicitly telling it to find all simplicies up to 3-dimensions. theComplex = rips(nodes, edges, 3) theComplex [{0}, {1}, {2}, {3}, {0, 1}, {0, 2}, {1, 2}, {2, 3}, {0, 1, 2}] Awesome, looks perfect. Now we want to see what it looks like. I've produced some code that will graph the simplicial complex based on the output from our Vietoris-Rips algorithm from above. This is not crucial to understanding TDA (most of the time we don't try to visualize simplicial complexes as they are too high-dimensional) so I will not attempt to explain the code for graphing. plt.clf() plt.axis([-1,3,-4,3]) plt.scatter(raw_data[:,0],raw_data[:,1]) #plotting just for clarity for i, txt in enumerate(raw_data): plt.annotate(i, (raw_data[i][0]+0.05, raw_data[i][1])) #add labels #add lines for edges for edge in [e for e in theComplex if len(e)==2]: pt1,pt2 = [raw_data[pt] for pt in [n for n in edge]] print(pt1,pt2) line = plt.Polygon([pt1,pt2], closed=None, fill=None, edgecolor='r') plt.gca().add_line(line) #add triangles for triangle in [t for t in theComplex if len(t)==3]: pt1,pt2,pt3 = [raw_data[pt] for pt in [n for n in triangle]] line = plt.Polygon([pt1,pt2,pt3], closed=False, color="blue",alpha=0.3, fill=True, edgecolor=None) plt.gca().add_line(line) plt.show() [ 0. 2.] [ 2. 2.] [ 0. 2.] [ 1. 0.] [ 2. 2.] [ 1. 0.] [ 1. 0.] [ 1.5 -3. ] Now we have a nice little depiction of our very simple VR complex. Now that we know what to do. We need to learn about simplicial homology, which is the study of topological invariants between simplicial complexes. In particular, we're interested in being able to mathematically identify n-dimensional connected components, holes and loops. To aid in this effort, I've repackage the code we've used above as a separate file so we can just import it and use the functions conveniently on our data. You can download the latest code here: < > Here I will zip our $x$ and $y$ coordinates from the (jittered) points we sampled from a circle so we can use it to build a more complicated simplicial complex. newData = np.array(list(zip(x2,y2))) import SimplicialComplex graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=3.0) ripsComplex = SimplicialComplex.rips(nodes=graph[0], edges=graph[1], k=3) SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex) That's neat! Clearly we have reproduced the circular space from which the points were sampled. Notice that there are 1-simplices and higher-dimensional simplices (the darker blue sections) but it forms a single connected component with a single 1-dimensional hole. #This is what it looks like if we decrease the Epsilon parameter too much: Homology Groups¶ Now that we know what simplicial complexes are and how to generate them on raw point data, we need to get to the next step of actually calculating the interesting topological features of these simplicial complexes. Topologicla data analysis in the form of computational homology gives us a way of identifying the number of components and the number of n-dimensional "holes" (e.g. the hole in the middle of a circle) in some topological space (generally a simplicial complex) that we create based on a data set. Before we proceed, I want to describe an extra property we can impose on the simplicial complexes we've been using thus far. We can give them an orientation property. An oriented simplex $\sigma = {u_1, u_2, u_3, ... u_n}$ is defined by the order of its vertices. Thus the oriented simplex {a,b,c} is not the same as the oriented simplex {b,a,c}. We can depict this by making our edges into arrows when drawing low-dimensional simplicial complexes. Now, strictly speaking a mathematical set (designated with curcly braces ${}$) is by definition an unordered collection of objects, so in order to impose an orientation on our simplex, we would need add some additional mathematical structure e.g. via making the set of vertices an ordered set by adding a binary $\leq$ relation on the elements. This isn't particularly worth delving into, we'll just henceforth presume that the vertex sets are ordered without explicitly declaring the additional structure necessary to precisely define that order. Looking back at the above two oriented simplices, we can see that the directionality of the arrows is exactly reverse for each simplex. If we call the left simplex $\sigma_1$ and the right $\sigma_2$ then we would say that $\sigma_1 = -\sigma_2$. The reason for bringing in orientation will be made clear later. n-Chains¶ Remember that a simplicial complex contains all faces of each highest-dimensional simplex in the complex. That is to say, if we have a 2-complex (a simplicial complex with the highest dimensional simplex being a 2-simplex (triangle)), then the complex also contains all of its lower dimensional faces (e.g. edges and vertices). Let $\mathcal C = \text{{{0}, {1}, {2}, {3}, {0, 1}, {0, 2}, {1, 2}, {2, 3}, {0, 1, 2}}}$ be the simplicial complex constructed from a point cloud (e.g. data set), $X = \{0,1,2,3\}$. $\mathcal C$ is a 2-complex since its highest-dimensional simplex is a 2-simplex (triangle). We can break this complex up into groups of subsets of this complex where each group is composed of the set of all $k$-simplices. In simplicial homology theory, these groups are called chain groups, and any particular group is the k-th chain group, $C_k(X)$. For example, the 1st-chain group of $\mathcal C$ is $\mathcal C_1(X) = \text{ {{0,1},{0,2},{1,2},{2,3}} }$ Basic Abstract Algebra¶ The "group" in "chain group" actually has a specific mathematical meaning that warrants covering. The concept of a group is a notion from abstract algebra, the field of mathematics that generalizes some of the familiar topics from your high school algebra classes. Needless to say, it is fairly abstract, but I will do my best to start with concrete examples that are easy to conceptualize, then gently abstract away until we get to the most general notions. I'm going to be covering groups, rings, fields, modules, and vector spaces and various other minor topics as they arise. Once we get this stuff down, we'll return to our discussion of chain groups. Basically my only requirement of you, the reader, is that you already have an understanding of basic set theory. So if you've been lying to me this whole time and some how understood what's going on so far, then stop and learn some set theory because you're going to need it. Groups¶ The mathematical structure known as a group can be thought of as generalizing a notion of symmetry. There's a rich body of mathematics that study groups known as (unsurprisingly) group theory. We won't go very far in our brief study of groups here, as we only need to know what we need to know. For our purposes, a group is a mathematical object that has some symmetrical properties to it. It might be easiest to think in terms of geometry, but as we will see, groups are so general that many different mathematical structures can benefit from a group theory perspective. Just by visual inspection, we can see a few of the possible operations we can perform on this triangle that will not alter its structure. I've drawn lines of symmetry showing that you can reflect across these 3 lines and still end up with the same triangle structure. More trivially, you can translate the triangle on the plane and still have the same structure. You can also rotate the triangle by 120 degrees and it still preserves the structure of the triangle. Group theory offers precise tools for managing these types of operations and their results. Here's the mathematical definition of a group. A group is a set, $G$, together with a binary operation $\star$ (or whatever symbol you like) that maps any two elements $a,b \in G$ to another element $c \in G$, notated as $a\star b = c, \text{for all } a,b,c \in G$. The set and its operation are notated as the ordered pair $(G, \star)$. Additionally, to be a valid group, the set and its operation must satisfy the following axioms (rules): - Associativity For all $\text{a, b and c in G}, (a \star b) \star c = a \star (b \star c)$. - Identity element There exists an element $e \in G$ such that, for every element $a \in G$, the equation $e \star a = a \star e = a$ holds. Such an element is unique and is called the identity element. - Inverse element For each $a \in G$, there exists an element $b \in G$, commonly denoted $a^{-1}$ (or $−a$, if the operation is denoted "$+$"), such that $a \star b = b \star a = e$, where $e$ is the identity element. (Adapted from wikipedia) NOTE: Notice that the operation $\star$ is not necessarily commutative, that is, $a \star b {?\above 0pt =} b \star a$. The order of operation may matter. If it does not matter, it is called a commutative or abelian group. The set $\mathbb Z$ (the integers) is an abelian group since e.g. $1+2 = 2+1$. This "group" concept seems arbitrary and begs the question of what its use is, but hopefully that will become clear. Keep in mind all mathematical objects are simply sets with some (seemingly arbitrary) axioms (basically rules the sets must obey that define a structure on those sets). You can define whatever structure you want on sets (as long as they're logically consistent and coherent rules) and you'll have some mathematical object/structure. Some structures are more interesting than others. Some are sets have a lot of structure (i.e. a lot of rules) and others will have very few. Typically the structures with a lot of rules are merely specializations of more general/abstract structures. Groups are just mathematical structures (sets with rules that someone made up) that have interesting properties and turn out to be useful in a lot of areas. But since they are so general, it is a bit difficult to reason about them concretely. Let's see if we can "group-ify" our triangle example from above. We can consider the triangle to be a set of labeled vertices, as if it were a 2-simplex. Since we've labeled the vertices of the triangle, we can easily describe it as the set $$t = \{a, b, c\}$$ But how do we define a binary operation on $t$? I'm not sure, let's just try things out. We'll build a table that shows us what happens when we "operate" on two elements in $t$. I'm seriously just going to make up a binary operation (a map from $(a,b) \mid a,b \in t$ ) and see if it turns out to be a valid group. Here it is. So to figure out what $a \star b$ is, you start from the top row, find $a$, then locate $b$ in the vertical left column, and where they meet up gives you the result. In my made up example, $a \star b = a$. Note that I've defined this operation to be NON-commutative, thus $a \star b \neq b \star a$. You have to start from the top row and then go to the left side row (in that order). Now you should be able to quickly tell that this is in fact not a valid group as it violates the axioms of groups. For example, check the element $b \in G$, you'll notice there is no identity element, $e$, for which $b + e = b$. So let's try again. This time I've actually tried to make a valid group. You should check for yourself that this is in fact a valid group, and this time this group is commutative, therefore we call it an abelian group. The identity element is $a$ since $a$ added to any other element $b$ or $c$ just gives $b$ or $c$ back unchanged. Notice that the table itself looks like it has some symmetry just by visual inspection. It turns out that finite groups, just like finite topological spaces, can be represented as directed graphs, which aids in visualization (aren't the patterns in math beautiful?). These graphs of groups have a special name: Cayley graphs. It's a little more complicated to construct a Cayley graph than it was to make digraphs for topological spaces. We have to add another property to Cayley graphs besides just having directed arrows (edges), we also assign an operation to each arrow. Thus if an arrow is drawn from $a \rightarrow b$ then that arrow represents the group operation on $a$ that produces $b$. And not all arrows are going to be the same operation, so to aid in visualization, we typically make each type of operation associated with an arrow a different color. Before we construct a Cayley graph, we need to understand what a generating set of a group is. Remember, a group is a set $G$ with a binary operation $\star$ (or whatever symbol you want to use), $(G, \star)$. A generating set is a subset $S \subseteq G$ such that $G = \{a \star b \mid a,b \in S\}$. In words, it means that the generating set $S$ is a subset of $G$ but if we apply our binary operation $\star$ on the elements in $S$, possibly repeatedly, it will produce the full set $G$. It's almost like $S$ compresses $G$. There may be many possible generators. So what is/are the generator(s) for our set $t = \{a,b,c\}$ with $\star$ defined in the table above? Well, look at the subsection of the operation table I've highlited red. You'll notice I've highlighted the subset $\{b,c\}$ because these two elements can generate the full set $\{a,b,c\}$. But actually just $\{b\}$ and $\{c\}$ individually can generate the full set. For example, $b\star b=c$ and $b \star b \star b = a$ (we can also write $b^2 = c$ and $b^3 = a$). Similarly, $c \star c = b$ and $c \star c \star = a$. So by repeatedly applying the $\star$ operation on just $b$ or $c$ we can generate all 3 elements of the full set. Since $a$ is the identity element of the set, it is not a generator as $a^n = a, n \in \mathbb N$ ($a$ to any positive power is still $a$). Since there are two possible generators, $b$ and $c$, there will be two different "types" of arrows, representing two different operations. Namely, we'll have a "$b$" arrow and a "$c$" arrow (representing the $\star b \text{ and } \star c$ operations). To build the edge set $E$ for a Cayley graph of a group $(G, \star)$ and generator set $S \subseteq G$, is the edge set $$E = \{(a,c) \mid c = a\star b \land a,c \in G \land b \in S\}$$ where each edge is colored/labeled by $b \in S$. The resulting Cayley graph is: In this Cayley graph we've drawn two types of arrows for the generators {b} and {c}, however, we really only need to choose one since only one element is necessary to generate the full group. So in general we choose the smallest generator set to draw the Cayley graph, in this case then we'd only have the red arrow. So this group is the group of rotational symmetries of the equilateral triangle because we can rotate the triangle 120 degrees without changing it and our group codifies that by saying each turn of 120s is like the group operation of "adding" ($\star$) the generator element $b$. We can also add the identity element, which is like deciding not to rotate it at all. Here we can see how "adding" {b} to each element in the original set {a,b,c} looks like rotating counter-clockwise by 120 degrees: This is also called the cyclic group of order 3 which is isomorphic to $\mathbb Z_3$. Woah, isomorphic? $\mathbb Z_3$? What's all of that you ask? Well isomorphic basically means there exists a one-to-one (bijective) mapping between two mathematical structures that maintains the structure. It's like they're the same structure but with different labelings. The rotational symmetry group of the triangle we just studied is isomorphic to the integers modulo 3 ( $\mathbb Z_3$ ). Modular arithmetic means that at some point the operation loops back to the beginning. Unlike the full integers $\mathbb Z$ where if you keep adding 1 you'll keep getting a bigger number, in modular arithmetic, eventually you add 1 and you'll loop back to the starting element (the identity element 0). Consider the hour hand on a clock, it is basically the integers modulo 12 ($\mathbb Z_{12}$) since if you keep adding one hour it eventually just loops back around. Here's the addition table for the integers modulo 3: Hence $1+1 = 2$ but $2+2 = 1$ and $1+2=0$ in $\mathbb Z_3$. The integers modulo $x$ forms a cyclic group (with a single generator) with $x$ elements and $0$ being the identity element. Okay so that's the basics of groups, let's move on to rings and fields. Rings and Fields¶ So now we move on to learning a bit about rings and then fields. To preface, fields and rings are essentially specializations of groups, i.e. they are sets with the rules of groups plus additional rules. Every ring is a group, and every field is a ring. Definition (Ring) A ring is a set $R$ equipped with two binary operations $\star$ and $\bullet$ (or whatever symbols you want to use) satisfying the following three sets of axioms, called the ring axioms: - $R$ is an abelian (commutative) group over the $\star$ operation. Meaning that $(R, \star)$ satisfies the axioms for being a group. - $(R, \bullet)$ forms a mathematical structure called a monoid when the $\bullet$ operation is associative < i.e. $a\bullet (b\bullet c) = (a \bullet b) \bullet c$ and $(R, \bullet)$ has an identity element (i.e. $\exists e \in R$ such that $e \bullet b = b \bullet e = e$ ) - $\star$ is distributive with respect to $\bullet$, i.e. $a \bullet (b \star c) = (a \bullet b) \star (a \bullet c)$ for all $a, b, c \in R$ (left distributivity). $(b \star c) \bullet a = (b \bullet a) \star (c \bullet a)$ for all $a, b, c \in R$ (right distributivity). (Adapted from Wikipedia) The most familiar ring is the integers, $\mathbb Z$, with the familiar operations $+$ (addition) and $\times$ (multiplication). Since a ring is also a group, we can speak of generators for the group of integers. Since the integers span from $\{-n...-3, -2, -1, 0, 1, 2, 3...n\}$ there are only two generators for the integers, namely $\{-1,1\}$ under the addition operation ($+$), since we can repeatedly do $1+1+1+...+n$ to get all the positive integers and $-1+-1+-1+...-n$ to get all the negative integers and $-1+1=0$ to get 0. And here is the definition of a field. Definition (Field) A field is a set $F$ with two binary operations $\star$ and $\bullet$, denoted $F(\star, \bullet)$, that satisfy the following axioms. ...for all $a,b,c \in F$, where $0$ is the symbol for the identity element under the operation $\star$ and $1$ is the symbol for the identity element under the operation for $\bullet$. Clearly, a field has a lot more requirements than just a group. And just to note, I know I've been using the symbols $\star$ and $\bullet$ for the binary operations of a group, ring and field, but these are more commonly denoted as $+$ and $\times$, called addition and multiplication, respectively. The only reason why I didn't initially use those symbols was because I wanted to emphasize the point that these do not just apply to numbers like you're familiar with, but are abstract operations that can function over any mathematical structures that meet the requirements. But now that you understand that, we can just use the more familiar symbols. So $\star = +$ (addition) and $\bullet = \times$ (multiplication) and $a \div b = a \times b^{-1}$ is division. Remember the integers $\mathbb Z$ is the most familiar ring with the operations additon and multiplication? Well the integers do not form a field because there is not an inverse for each element in $\mathbb Z$ with respect to $\times$ operation. For example, if $\mathbb Z$ was a field then $5 \times 5^{-1} = 1$, however, $5^{-1}$ is not defined in the integers. If we consider the real numbers $\mathbb R$, then of course $5^{-1} = 1/5$. Thus a field, while defined just in terms of addition ($+$) and multiplication ($\times$), implicitly defines the inverses of those operations, namely substraction ($-$) and division ($/$). So for a set to be a field, the division operation (inverse of multiplication) must be defined for every element in the set except for the identity element under the addition operation ($0$ in the case of $\mathbb Z$); as you know from elementary arithmetic that one cannot divide by 0 (since there is no inverse of $0$). And it all has to do with symmetry. The inverse of $1$ is $-1$ under addition, and $-2$ is the inverse of $2$ and so on. Notice the symmetry of inverses? Each inverse is equidistant from the "center" of the set, that being $0$. But since $0$ is the center, there is no symmetrical opposite of it, thus $0$ has no inverse and cannot be defined with respect to division. ... So stepping back a bit, group theory is all about studying symmetry. Any mathematical objects that have symmetrical features can be codified as groups and then studied algebraically to determine what operations can be done on those groups that preserve the symmetries. If we don't care about symmetry and we just want to study sets with a binary operation and associativity, then we're working with monoids. Why are we learning about groups, rings, and fields?¶ Ok, so we've learned the basics of groups, rings and fields, but why? Well I've already alluded that we'll need to understand groups to understand Chain groups which are needed to calculate the homology of simplicial complexes. But more generally, groups, rings and fields allow us to use the familiar tools of high school algebra on ANY mathematical objects that meet the relatively relaxed requirements of groups/rings/fields (not just numbers). So we can add, substract (groups), multiply (rings) and divide (fields) with mathematical objects like (gasp) simplicial complexes. Moreover, we can solve equations with unknown variables involving abstract mathematical objects that are not numbers. Modules and Vector Spaces¶ Okay so there's a couple other mathematical structures from abstract algebra we need to study in order to be prepared for the rest of persistent homology, namely modules and vector spaces, which are very similar. Let's start with vector spaces since you should already be familiar with vectors. You should be familiar with vectors because generally we represent data as vectors, i.e., if we have an excel file with rows and columns, each row can be represented as an n-dimensional vector (n being the number of columns). Intuitively then, vectors are n-dimensional lists of numbers, such as $[1.2,4.3,5.5,4.1]$. Importantly, I'm sure you're aware of the basic rules of adding vectors together and multiplying them by scalars. For example, $$[1.2,4.3,5.5,4.1] + [1,3,2,1] = [1.2 + 1, 4.3 + 3, 5.5 + 2, 4.1 + 1] = [2.2,7.3,7.5,5.1]$$ ...in words, when adding vectors, they have to be the same length, and you add each corresponding element. That is, the first element in each vector get added together, and so on. And for scaling... $$ 2 \times [1.2,4.3,5.5,4.1] = [2.2, 8.6, 11.0, 8.2]$$ ...each element in the vector gets multiplied by the scalar. But wait! The way vectors are defined does not mention anything about the elements being NUMBERS or lists. A vector can be a set of ANY valid mathematical structure that meets the criteria of being a field. As long as the elements of a vector space can be scaled up or down by elements from a field (usually the real numbers or integers) and added together producing a new element still in the vector space. Here's the formal definiton of a vector space, the mathematical structure whose elements are vectors. Definition (Vector Space) A vector space $V$ over a field $F$ is a set of objects called vectors, which can be added, subtracted and multiplied by scalars (members of the underlying field). Thus $V$ is an abelian group under addition, and for each $f \in F$ and $v \in V$ we have an element $fv \in V$ (the product of $f\times v$ is itself in $V$.) Scalar multiplication is distributive and associative, and the multiplicative identity of the field acts as an identity on vectors. For example, the familiar vectors of numbers is from a vector space over the field $\mathbb R$. Ok, so a module is the same as a vector space, except that it is defined over a ring rather than a field. And remember, every field is a ring, so a module is a more relaxed (more general) mathematical structure than a vector space. (Adapted from < >) We should also talk about a basis of a vector space (or module). Say we have a finite set $S = \{a,b,c\}$ and we want to use this to build a module (or vector space). Well we can use this set as a basis to build module over some ring $R$. In this case, our module would be mathematically defined as:$$M = \{(x* a, y* b, z* c) \mid x,y,z \in R\}$$ or equivalently: $$M = \{(x*g, y*g, z*g) \mid x,y,z \in R, g \in S\}$$ Where $*$ is the binary "multiplication" operation of our module. But since $R$ is a ring, it also must have a second binary operation that we might call "addition" and denote with $+$. Notice I use parenthesis because the order matters, i.e. $(a,b,c) \neq (b,a,c)$. Now, every element in $M$ is of the form $\{xa,yb,zc\}$ (omitted the explicit $*$ operation for convenience) hence that forms a basis of this module. And we can add and scale each element of $M$ using elements from its underlying ring $R$. If we take the ring to be the integers, $\mathbb Z$ then we can add and scale in the following ways: $$m_1, m_2 {\in M}\\ m_1 = (3a, b, 5c) \\ m_2 = (a, 2b, c) \\ m_1 + m_2 = (3a+a, b+2b, 5c+c) = (4a, 3b, 6c) \\ 5*m_1 = 5 * (3a, b, 5c) = (5*3a, 5*b, 5*5c) = (15a, 5b, 25c)$$ This module is also a group (since every module and vector space is a group) if we only pay attention to the addition operation, but even though our generating set is a finite set like $\{a,b,c\}$, once we apply it over an infinite ring like the integers, we've constructed an infinite module or vector space. In general, we can come up with multiple bases for a vector space, however, there is a mathematical theorem that tells us that all possible bases are of the same size. This leads us to the notion of dimension. The dimension of a vector space (or module) is taken to be the size of its base. So for the example given above, the size of the base was 3 (the base has three elements) and thus that module has a dimension of 3. As another example, take for example the vector space formed by $\mathbb R^2$ where $\mathbb R$ is the set of real numbers. This is defined as: $$\mathbb R^2 = \{(x,y) \mid x,y \in \mathbb R\}$$ Basically we have an infinite set of all possible pairs of real numbers. One basis for this vector space is simply $(x,y) \mid x,y \in \mathbb R$, which feels the most natural as it is the simplest, but there's nothing forbidding us from making the basis $(2x+1.55,3y-0.718) \mid x,y \in \mathbb R$ since we end up with the same vector space. But no matter how we define our basis, it will always have 2 elements and thus its dimension is 2. When we have a vector space, say of dimension 2, like $\mathbb R^2$, we can separate out its components like so: $$ \mathbb R_x = \{(x, 0) \mid x \in \mathbb R\} \\ \mathbb R_y = \{(0, y) \mid y \in \mathbb R\} \\ \mathbb R^2 = \mathbb R_x \oplus \mathbb R_y $$ We can introduce new notation called a direct sum $\oplus$, to signify this process of building out the dimensions of a vector space by a process like $(x,0)+(0,y)=(x+0,0+y)=(x,y) \mid x,y \in \mathbb R$. Thus we can more simply say $\mathbb R^2 = \mathbb R \oplus \mathbb R$. We can also say that the base of $\mathbb R^2$ is the span of the set $\{(1,0), (0,1)\}$, denoted $span\{(1,0), (0,1)\}$ or sometimes even more simply denoted using angle brackets $\langle\ (1,0), (0,1)\ \rangle$ $span\{(1,0), (0,1)\}$ is shorthand for saying "the set composed of all linear combinations of the bases $(1,0)$ and $(0,1)$". What is a linear combination? Well, in general, a linear combination of $x$ and $y$ is any expression of the form $ax + by$ where $a,b$ are constants in some field $F$. So a single possible linear combination of $(1,0)$ and $(0,1)$ would be: $5(1,0) + 2(0,1) = (5*1,5*0) + (2*0,2*1) = (5,0) + (0,2) = (5+0, 0+2) = (5, 2)$. But all the linear combinations of $(1,0)$ and $(0,1)$ would be the expression: $\{a(1,0) + b(0,1) \mid a,b \in \mathbb R\}$ and this is the same as saying $span\{(1,0), (0,1)\}$ or $\langle\ (1,0), (0,1)\ \rangle$. And this set of all ordered pairs of real numbers is denoted by $\mathbb R^2$. What's important about bases of a vector space is that they must be linearly independent, this means that one element cannot be expressed as a linear combination of the other. For example, the base element $(1,0)$ cannot be expressed in terms of $(0,1)$. There is no expression $\not\exists a,b \in \mathbb R \land a,b\neq 0 \text{ such that }a(0,1) + b(1,0) = (1,0)$. So in summary, a basis of a vector space $V$ consists of a set of elements $B$ such that each element $b \in B$ is linearly independent and the span of $B$ produces the whole vector space $V$. Thus the dimension of the vector space dim$(V)$ is the number of elements in $B$. (Reference: The Napkin Project by Evan Chen < >) Back to Chain Groups¶ Sigh. Ok, that was a lot of stuff we had to get through, but now we're back to the real problem we care about: figuring out the homology groups of a simplicial complex. As you may recall, we had left off discussing chain groups of a simplicial complex. I don't want to have to repeat everything, so just scroll up and re-read that part if you forget. I'll wait... Let $\mathcal S = \text{{{a}, {b}, {c}, {d}, {a, b}, {b, c}, {c, a}, {c, d}, {d, b}, {a, b, c}}}$ be an oriented abstract simplicial complex (depicted below) constructed from some point cloud (e.g. data set). The n-chain, denoted $C_n(S)$ is the subset of $S$ of $n$-dimensional simplicies. For example, $C_1(S) = \text{ {{a, b}, {b, c}, {c, a}, {c, d}, {d, b}}}$ and $C_2(S) = \text{{a, b, c}}$. Now, an n-chain can become a chain group if we give it a binary operation called addition that satisfies the group axioms. With this structure, we can add together $n$-simplicies in $C_n(S)$. More precisely, an $n$-chain group is the sum of $n$-chains with coefficients from a group, ring or field $F$. I'm going to use the same $C_n$ notation for a chain group as I did for an n-chain. $$C_n(S) = \sum a_i \sigma_i$$ where $\sigma_i$ refers to the $i$-th simplex in the n-chain $C_n$, $a_i$ is the corresponding coefficient from a field, ring or group, and $S$ is the original simplicial complex. Technically, any field/group/ring could be used to provide the coefficients for the chain group, however, for our purposes, the easiest group to work with is the cyclic group $\mathbb Z_2$, i.e. the integers modulo 2. $\mathbb Z_2$ only contains $\{0,1\}$ such that $1+1=0$ and is a field because we can define an addition and multiplication operation that meet the axioms of a field. This is useful because we really just want to be able to either say a simplex exists in our n-chain (i.e. it has coefficient of $1$) or it does not (coefficient of $0$) and if we have a duplicate simplex, when we add them together they will cancel out. It turns out this is exactly the property we want. You might object that $\mathbb Z_2$ is not a group because it doesn't have an inverse, e.g. $-1$, but in fact it does, the inverse of $a$, for example, is $a$. Wait what? Yes, $a = -a$ under $\mathbb Z_2$ because $a + a = 0$. That's all that's required for an inverse to exist, you just need some element in your group such that $a+b=0; \forall a,b \in G$ ($G$ being a group). If we use $\mathbb Z_2$ as our coefficient group, then we can essentially ignore simplex orientation. That makes it a bit more convenient. But for completeness sake, I wanted to incorporate orientations because I've most often seen people use the full set of integers $\mathbb Z$ as coefficients in academic papers and commercially. If we use a field with negative numbers like $\mathbb Z$ then our simplices need to be oriented, such that $[a,b] \neq [b,a]$. This is because, if we use $\mathbb Z$, then $[a,b] = -[b,a]$, hence $[a,b] + [b,a] = 0$. Our ultimate goal, remember, is to mathematically find connected components and $n$-dimensional loops in a simplicial complex. Our simplicial complex $S$ from above, by visual inspection, has one connected component and one 2-dimensional loop or hole. Keep in mind that the simplex $\{a,b,c\} \in S$ is "filled in", there is no hole in the middle, it is a solid object. We now move to defining boundary maps. Intuitively, a boundary map (or just a boundary for short) of an un-oriented $n$-simplex $X$ is the set of ${ {X} \choose {n-1}}$ subsets of $X$. That is, the boundary is the set of all $(n-1)$-subsets of $X$. For example, the boundary of $\{a,b,c\}$ is $\text{ {{a,b},{b,c},{c,a}} }$. Let's give a more precise definition that applies to oriented simplices, and offer some notation. Definition (Boundary) The boundary of an $n$-simplex $X$ with vertex set $[v_0, v_1, v_2, ... v_n]$, denoted $\partial(X)$, is: $$\partial(X) = \sum^{n}_{i=0}(-1)^{i}[v_0, v_1, v_2, \hat{v_i} ... v_n], \text{ where the $i$-th vertex is removed from the sequence}$$ The boundary of a single vertex is 0, $\partial([v_i]) = 0$. For example, if $X$ is the 2-simplex $[a,b,c]$, then $\partial(X) = [b,c] + (-1)[a,c] + [a,b] = [b,c] + [c,a] + [a,b]$ Let's see how the idea of a boundary can find us a simple loop in the 2-complex example from above. We see that $[b,c] + [c,d] + [d,b]$ are the 1-simplices that form a cycle or loop. If we take the boundary of this set with the coefficient field $\mathbb Z$ then, $$\partial([b,c] + [c,d] + [d,b]) = \partial([b,c]) + \partial([c,d]) + \partial([d,b])$$ $$\partial([b,c]) + \partial([c,d]) + \partial([d,b]) = [b] + (-1)[c] + [c] + (-1)[d] + [d] + (-1)[b]$$ $$\require{cancel} \cancel{[b]} + \cancel{(-1)[b]} + \cancel{(-1)[c]} + \cancel{[c]} + \cancel{(-1)[d]} + \cancel{[d]} = 0$$ This leads us to a more general principle, a $p$-cycle is an $n$-chain in $C_n$ whose boundary, $\partial(C_n) = 0$. That is, in order to find the p-cycles in a chain group $C_n$ we need to solve the algebraic equation $\partial(C_n) = 0$ and the solutions will be the p-cycles. Don't worry, this will all make sense when we run through some examples shortly. An important result to point out is that the boundary of a boundary is always 0, i.e. $\partial_n \partial_{n-1} = 0$ Chain Complexes¶ We just saw how the boundary operation is distributive, e.g. for two simplices $\sigma_1, \sigma_2 \in S$ $$ \partial(\sigma_1 + \sigma_2) = \partial(\sigma_1) + \partial(\sigma_2)$$ Definition (Chain Complex) Let $S$ be a simplicial $p$-complex. Let $C_n(S)$ be the $n$-chain of $S$, $n \leq p$. The chain complex, $\mathscr C(S)$ is: $$\mathscr C(S) = \sum^{p}_{n=0}\partial(C_n(S)) \text{ , or in other words...}$$ $$\mathscr C(S) = \partial(C_0(S)) + \partial(C_1(S)) \ + \ ... \ + \ \partial(C_p(S))$$ Now we can define how to describe find the $p$-cycles in a simplicial complex. Definition (Kernel) The kernel of $\partial(C_n)$, denoted $\text{Ker}(\partial(C_n))$ is the group of $n$-chains $Z_n \subseteq C_n$ such that $\partial(Z_n) = 0$ We're almost there, we need a couple more definitions and we can finally do some simplicial homology. Definition (Image of Boundary) The image of a boundary $\partial_n$ (boundary of some $n$-chain), $\text{Im }(\partial_n)$, is the set of boundaries. For example, if a 1-chain is $C_1 = \{[v_0, v_1], [v_1, v_2], [v_2, v_0]\}$, then $\partial_1 = [v_0] + (-1)[v_1] + [v_1] + (-1)[v_2] + [v_2] + (-1)[v_0]$ $\text{Im }\partial_1 = \{[v_0-v_1],[v_1-v_2],[v_2-v_0]\}$ So the only difference between $\partial_n$ and Im $\partial_n$ is that the image of the boundary is in set form, whereas the boundary is in a polynomial-like form. Definition ($n^{th}$ Homology Group) The $n^{th}$ Homology Group $H_n$ is defined as $H_n$ = Ker $\partial_n \ / \ \text{Im } \partial_{n+1}$. Definition (Betti Numbers) The $n^{th}$ Betti Number $b_n$ is defined as the dimension of $H_n$. $b_n = dim(H_n)$ More group theory¶ We've reached an impasse again requiring some exposition. I casually used the notion $/$ in defining a homology group to be Ker $\partial_n \ / \ \text{Im } \partial_{n+1}$. The mathematical use of this notation is to say that for some group $G$ and $H$, a subgroup of $G$, then $G / H$ is the quotient group. Ok, so what is a quotient group? Alright, we need to learn more group theory. And unfortunately it's kind of hard, but I'll do my best to make it intuitive. Definition (Quotient Group) For a group $G$ and a normal subgroup $N$ of $G$, denoted $N \leq G$, the quotient group of $N$ in $G$, written $G/N$ and read "$G$ modulo $N$", is the set of cosets of $N$ in $G$. (Source: Weisstein, Eric W. "Quotient Group." From MathWorld--A Wolfram Web Resource.) For now you can ignore what a normal subgroup means because all the groups we will deal with in TDA are abelian groups, and all subgroups of abelian groups are normal. But this definition just defines something in terms of something else called cosets. Annoying. Ok what is a coset? Definition (Cosets) For a group $(G, \star)$, consider a subgroup $(H, \star)$ with elements $h_i$ and an element $x$ in $G$, then $x\star{h_i}$ for $i=1, 2, ...$ constitute the left coset of the subgroup $H$ with respect to $x$. (Adapted from: Weisstein, Eric W. "Left Coset." From MathWorld--A Wolfram Web Resource.) So we can ask what the left (or right) coset is of a subgroup $H \leq G$ with respect to some element $x \in G$ and that gives us a single coset, but if we get the set of all left cosets (i.e. the cosets with respect to every element $x \in G$) then we have our quotient group $G\ /\ H$. For our purposes, we only need to concern ourselves with left cosets, because TDA only involves abelian groups, and for abelian groups, left cosets and right cosets are the same. (We will see an example of a non-abelian group). We'll reconsider the equilateral triangle and its symmetries to get a better sense of subgroups, quotient groups and cosets. Remember, by simple visualization we identified the types of operations we could perform on the equilateral triangle that preserve its structure: we can rotate it by 0, 120, or 240 degrees and we can reflect it across 3 lines of symmetry. Any other operations, like rotating by 1 degree, would produce a different structure when embedded in, for example, two-dimensional Euclidian space. We can build a set of these 6 group operations: $$S = \text{{$rot_0$, $rot_{120}$, $rot_{240}$, $ref_a$, $ref_b$, $ref_c$}}$$ ...where $rot_0$ and so on means to rotate the triangle about its center 0 degrees (an identity operation), and $ref_a$ means to reflect across the line labeled $a$ in the picture above. For example, we can take the triangle and apply two operations from $S$, such as $rot_{120}, ref_a$ (note I'm being a bit confusing by labeling the vertices of the triangle $a,b,c$ but also labeling the lines of reflection $a,b,c$, but it should be obvious by context what I'm referring to.) So does $S$ form a valid group? Well it does it we define a binary operation for each pair of elements it contains. And the operation $a \star b$ for any two elements in $S$ will simply mean "do $a$, then do $b$". The elements of $S$ are actions that we take on the triangle. We can build a multiplication (or Cayley) table that shows the result of applying the operation for every pair of elements. Here's the Cayley table: Notice that this defines a non-commutative (non-abelian) group, since in general $a \star b \neq b \star a$. Now we can use the Cayley table to build a Cayley diagram and visualize the group $S$. Let's recall how to build a Cayley diagram. We will first start with our vertices (aka nodes), one for each of the 6 actions in our group $S$. Then we need to figure out the minimum generator for this group, that is, the minimal subset of $S$ that with various combinations and repeated applications of the group operation $\star$ will generate the full 6 element set $S$. It turns out that you just need $\{rot_{120}, ref_a\}$ to generate the full set, hence that subset of 2 elements is the minimal generating set. Now, each element in the generating set is assigned a different colored arrow, and thus starting from a node $a$ and following a particular arrow to another element $b$, means that $a \star g = b$ where $g$ is an element from the generating set. Thus for $S$, we will have a graph with two different types of arrows, and I will color the $rot_{120}$ arrow as blue and the $ref_a$ arrow as red. Then we use our Cayley table from above to connect the nodes with the two types of arrows. Here's the resulting Cayley diagram: For the curious, it turns out this group is the smallest non-abelian finite group, it's called the "Dihedral group of order 6", and can be used to represent a number of other things besides the symmetry actions on an equilateral triangle. We will refer to both this Cayley table and the Cayley diagram to get an intuition for the definitions we gave earlier for subgroups, cosets and quotient groups. Let's start by revisiting the notion of a subgroup. A subgroup $(H,\star)$ of a group $(G,\star)$ (often denoted $H < G$) is merely a subset of $G$ with the same binary operation $\star$ that satisfies the group axioms. For example, every group has a trivial subgroup that just includes the identity element (any valid subgroup will need to include the identity element to meet the group axioms). Consider the subgroup $W \leq S = \{rot_0, rot_{120}, rot_{240}\}$. Is this a valid subgroup? Well yes because it is a subset of $S$, contains the identity element, is associative, and each element has an inverse. For this example, the subgroup $W < S$ forms the outer circuit in the Cayley diagram (nodes highlighted green): Okay, so a subgroup is fairly straightforward. What about a coset? Well referring back to the definition given previously, a coset is in reference to a particular subgroup. So let's consider our subgroup $W\leq S$ and ask what the left cosets of this subgroup are. Now, I said earlier that we only need to worry about left cosets because in TDA the groups are all abelian, well that's true, but the group of symmetries of the equilateral triangle is not not an abelian group thus the left and right cosets will, in general, not be the same. We're just using the triangle to learn about group theory, once we get back to the chain groups of persistent homology, we'll be back to abelian groups. Recall that the left cosets of some subgroup $H\leq G$ are denoted $xH = \{x\star{h} \mid \forall h \in H; \forall x \in G$} And for completeness, the right cosets are $Hx = \{{h}\star{x} \mid \forall h \in H; \forall x \in G$} Back to our triangle symmetries, group $S$ and its subgroup $W$. Recall, $W \leq S = \{rot_0, rot_{120}, rot_{240}\}$. To figure out the left cosets then, we'll start by choosing an $x\in S$ where $x$ is not in our subgroup $W$. Then we will multiply $x$ by each element in $W$. Let's start with $x = ref_a$. So $ref_a \star \{rot_0, rot_{120}, rot_{240}\} = \{ref_a \star rot_0, ref_a \star rot_{120}, ref_a \star rot_{240}\} = \{ref_a, ref_b, ref_c\}$. So the left coset with respect to $ref_a$ is the set $\{ref_a, ref_b, ref_c\}$. Now, we're supposed to do the same with another $x \in S, x \not\in W$ but if we do, we just get the same set: $\{ref_a, ref_b, ref_c\}$. So we just have one left coset. It turns our for this subgroup, the right and left coset are the same, the right being: $\{rot_0\star ref_a, rot_{120}\star ref_a, rot_{240}\star ref_a \} = \{ref_a, ref_b, ref_c\}$. (Reference: < >) Interestingly, since all Cayley diagrams have symmetry themselves, in general the left cosets of a subgroup will appear like copies of the subgroup in the Cayley diagram. If you consider our subgroup $W \leq S = \{rot_0, rot_{120}, rot_{240}\}$, it forms this outer "ring" in the Cayley diagram, and the left coset is the set of vertices that forms the inner "ring" of the diagram. So it's like they're copies of each other. Here's another example with the subgroup being $\{rot_0, ref_a\}$: So we begin to see how the left cosets of a subgroup of a group appear to evenly partition the group into pieces of the same form as the subgroup. With the subgroup being $W \leq S = \{rot_0, rot_{120}, rot_{240}\}$ we could partition the group $S$ into two pieces that both have the form of $W$, whereas if the subgroup is $\{rot_0, ref_a\}$ then we can partition the group $S$ into 3 pieces that have the same form as the subgroup. This leads us directly to the idea of a quotient group. Recall the definition given earlier: For a group $G$ and a normal subgroup $N$ of $G$, denoted $N \leq G$, the quotient group of $N$ in $G$, written $G/N$ and read "$G$ modulo $N$", is the set of cosets of $N$ in $G$. A normal subgroup is just a subgroup in which the left and right cosets are the same. Hence, our subgroup $W \leq S = \{rot_0, rot_{120}, rot_{240}\}$ is a normal subgroup as we discovered. We can use it to construct the quotient group, $S / W$. Now that we know what cosets are, it's easy to find what $S / W$, it's just the set of (left or right, they're the same) cosets with respect to $W$, and we already figured that out, the cosets are just: $$ S\ /\ W = \{\{rot_0, rot_{120}, rot_{240}\}, \{ref_a, ref_b, ref_c\}\}$$ (we include the subgroup itself in the set since the cosets of a subgroup technically includes itself). Okay so this is really interesting for two reasons, we've taken $S\ /\ W$ and it resulted in a set with 2 elements (the elements themselves being sets), so in a sense, we took an original set (the whole group) with 6 elements and "divided" it by a set with 3 elemenets, and got a set with 2 elements. Seem familiar? Yeah, seems like the simple arithmetic $6\ /\ 3=2$. And that's because division over the real numbers is defined in exactly this way, using cosets and quotient groups. The second reason it's interesting, is that the two elements in our quotient group are the basic two kinds of operations on our triangle, namely rotation operations and reflection operations. I also just want to put out that our resulting quotient group $ S\ /\ W $ is in fact itself a group, that is, it meets all the group axioms, and in this example, is isomorphic to the integers modulo 2 ($\mathbb Z_2$). So intuitively, whenever you want some quotient group $A\ /\ B$ where $B \leq A$ (B is a subgroup of A), just ask yourself, "how can I partition $A$ into $B$-like pieces?" And the partitions do NOT need to be non-overlapping. In this case our partition was non-overlapping, i.e. each coset in the quotient group had no elements in common, but that is not always the case. Consider the cyclic group $\mathbb Z_4$ with the single generator $1$: We could partition this group into pieces of 2, but there are in fact two ways to do this. We could make a subgroup $N \leq \mathbb Z_4 = \{0,2\}$, which would partition the space into only two pieces (there are 2 left cosets, hence our quotient group is of size 2). We've depicted this below, where each "piece" is the pair of elements "across from each other" in the Cayley diagram. $$ N = \{0,2\} \\ N \leq \mathbb Z_4 \\ \mathbb Z_4\ /\ N = \{\{0,2\},\{1,3\}\}$$ But we could also choose a subgroup $N \leq \mathbb Z_4 = \{0,1\}$ where each pair of elements is right next to each other. In this case, we can partition the group into 4 pieces (i.e. the set of left cosets or the quotient group has 4 elements). $$ N = \{0,1\} \\ N \leq \mathbb Z_4 \\ \mathbb Z_4\ /\ N = \{\{0,1\},\{1,2\},\{2,3\},\{3,0\}\}$$ The last thing I want to mention is the idea of an algebraically closed group versus non-closed groups. Basically, a group that is closed is one in which the solution to any equation with the group is also contained in the group. For example, if we consider the cyclic group $\mathbb Z_2$ which consists of $\{0,1,2\}$, then the solution to the equation $x^2 = 1$ is $1$ which is in our group $\{0,1,2\}$. However, if we can come up with an equation whose solution is not in $\mathbb Z_2$ but say only found in the reals $\mathbb R$, then our group is not closed. In fact, it's quite easy, just ask the solution to $x/3=1$ and we realized the solution, $3$, is not in $\mathbb Z_2$. (Ref: < >).
http://outlace.com/TDApart2.html
CC-MAIN-2018-17
refinedweb
10,191
59.23
#include <unparametric.h> Inheritance diagram for UnparametricField: The field's element type. Type K must provide a default constructor, a copy constructor, a destructor, and an assignment operator. Reimplemented in GivaroRational, Local2_32, and PIR_ntl_ZZ_p. Type of random field element generators. 0 1 [inline] Builds this field to have characteristic q and cardinality qe. This constructor must be defined in a specialization. construct this field as copy of F. XML constructor Takes in an XML reader and attempts to convert the XML representation over to a valid field. As this class is mostly a wrapper for a particular field type, the XML does little more than encode the cardinality of this field, and perhaps the characteristic Default constructor. Constructor from field object. x := y. Caution: it is via cast to long. Good candidate for specialization. Reimplemented in GivaroRational. x := y. Caution: it is via cast to long. Good candidate for specialization. --dpritcha c := cardinality of this field (-1 if infinite). c := characteristic of this field (zero or prime). x == y x == 0 x == 1 x := y + z x := y - z x := y*z x := y/z Reimplemented in NTL_PID_zz_p. x := -y x := 1/y z := a*x + y x := x + y x := x - y x := x*y x := x/y x := -x x := 1/x y := a*x + y Print field. Read field. Print field element. Read field element. Constant access operator. Access operator. [protected]
http://www.linalg.org/linbox-html/classLinBox_1_1UnparametricField.html
crawl-001
refinedweb
234
61.12
In Java 11, the java launcher has been enhanced to run single-file source code programs directly, without having to compile them first. For example, consider the following class that simply adds its arguments: import java.util.*; public class Add { public static void main(String[] args) { System.out.println(Arrays.stream(args) .mapToInt(Integer::parseInt) .sum()); } } In previous versions of Java, you would first have to compile the source file and then run it as follows: $ javac Add.java $ java Add 1 2 3 6 In Java 11, there is no need to compile the file! You can run it directly as follows: $ java Add.java 1 2 3 6 It’s not even necessary to have the “.java” extension on your file. You can call the file whatever you like but, if the file does not have the “.java” extension, you need to specify the --source option in order to tell the java launcher to use source-file mode. In the example below, I have renamed my file to MyJava.code and run it with --source 11: $ java --source 11 MyJava.code 1 2 3 6 It gets even better! It is also possible to run a Java program directly on Unix-based systems using the shebang ( #!) mechanism. For example, you can take the code from Add.java and put it in a file called add, with the shebang at the start of the file, as shown below: #!/path/to/java --source 11 import java.util.*; public class Add { public static void main(String[] args) { System.out.println(Arrays.stream(args) .mapToInt(Integer::parseInt) .sum()); } } Mark the file as executable using chmod and run it as follows: $ chmod +x add $ ./add 1 2 3 6 Hey thanks for sharing the valuable information. it is really helpful for us, I need some help for my Website
https://www.javacodegeeks.com/2019/01/running-single-programs-shebang-scripts.html
CC-MAIN-2019-09
refinedweb
306
66.54
Our goal in this tutorial is to show a minimal example of an Enaml user interface and introduce a few basic concepts. It sets up a minimal GUI to display a simple message. Let’s get started with a minimalist “hello world” example. Enaml interfaces are described in a file with the ”.enaml” extension. While the code has some similarities to Python, Enaml is a separate language. Here is our minimalist .enaml file describing a message-displaying GUI (download here): #------------------------------------------------------------------------------ # Copyright (c) 2012, Enthought, Inc. # All rights reserved. #------------------------------------------------------------------------------ from enaml.widgets.api import Window, Container, Label enamldef Main(Window): attr message = "Hello, world!" Container: Label: text = message Use the enaml-run utility to run it from the command line with $ enaml-run hello_world_view.enaml The resulting GUI looks like this (on Mac OSX): Let’s take a closer look at the Enaml file. An Enaml view is made up of a series of component definitions that look a lot like Python classes. In the first line of code, we are defining a new component, Main, which derives from Window, a builtin widget in the Enaml library. enamldef Main(Window): With this line of code, we have defined the start of a definition block. In general, we could call this almost anything we want, as long as it is a Python-valid name. In this case, however, by giving it the special name Main we get to run it from the command line with the enaml-run tool. enaml-run looks for a component named Main or a function named main in an .enaml file and runs it as a standalone application. Inside a definition block, the view is defined in a hierarchical tree of widgets. As in Python , indentation is used to specify code block structure. That is, statements beginning at a certain indentation level refer to the header line at the next lower indentation level. So in our simple example, the Container belongs to Main and the Label belongs to the Container: enamldef Main(Window): attr message = "Hello, world!" Container: Label: text = message The view is made up of a Window containing a Container which in turn contains a Label, whose text attribute is set equal to the message attribute of Main, which has a default value of "Hello, world!". This default value can be changed by the code which creates an instance of Main. (We’ll discuss this in more detail in the next tutorial.) Just like regular Python objects, the widgets used in an Enaml UI must be defined and/or imported before they can be used. The widgets used in this tutorial are imported from enaml.widgets.api. Now we’ll take a look at how to use the view in Python code. First, we import Enaml: import enaml Then we use enaml.imports() as a context manager for importing the Enaml view. with enaml.imports(): from hello_world_view import Main Enaml is an inherently asynchronous toolkit, with a server running an application which offers UI sessions that a client may view. For this simple example, we’ll be working with the client and server both running locally and in the same process. Enaml has some utility functions to help with these common situations. The only thing we need to do is to pass the view to the show_simple_view function, which can be imported from the Enaml standard library: from enaml.stdlib.sessions import show_simple_view main_view = Main() show_simple_view(main_view) This function will take care of creating the server side session factory and application, spawning a client side session, and starting the applications main event loop. In the next example we will see how we can get to the lower level and directly control the session and how we interact with the application.
http://docs.enthought.com/enaml/instructional/tut_hello_world.html
CC-MAIN-2015-40
refinedweb
625
62.98
Contents Abstract This PEP describes an interpretation of multiline string constants for Python. It suggests stripping spaces after newlines and stripping a newline if it is first character after an opening quotation. Rationale This PEP proposes an interpretation of multiline string constants in Python. Currently, the value of string constant is all the text between quotations, maybe with escape sequences substituted, e.g.: def f(): """ la-la-la limona, banana """ def g(): return "This is \ string" print repr(f.__doc__) print repr(g()) prints: '\n\tla-la-la\n\tlimona, banana\n\t' 'This is \tstring' This PEP suggest two things: - ignore the first character after opening quotation, if it is newline - ignore in string constants all spaces and tabs up to first non-whitespace character, but no more than current indentation. After applying this, previous program will print: 'la-la-la\nlimona, banana\n' 'This is string' To get this result, previous programs could be rewritten for current Python as (note, this gives the same result with new strings meaning): def f(): """\ la-la-la limona, banana """ def g(): "This is \ string" Or stripping can be done with library routines at runtime (as pydoc does), but this decreases program readability. Implementation I'll say nothing about CPython, Jython or Python.NET. In original Python, there is no info about the current indentation (in spaces) at compile time, so space and tab stripping should be done at parse time. Currently no flags can be passed to the parser in program text (like from __future__ import xxx). I suggest enabling or disabling of this feature at Python compile time depending of CPP flag Py_PARSE_MULTILINE_STRINGS. Alternatives New interpretation of string constants can be implemented with flags 'i' and 'o' to string constants, like: i""" SELECT * FROM car WHERE model = 'i525' """ is in new style, o"""SELECT * FROM employee WHERE birth < 1982 """ is in old style, and """ SELECT employee.name, car.name, car.price FROM employee, car WHERE employee.salary * 36 > car.price """ is in new style after Python-x.y.z and in old style otherwise. Also this feature can be disabled if string is raw, i.e. if flag 'r' specified.
http://docs.activestate.com/activepython/3.6/peps/pep-0295.html
CC-MAIN-2018-09
refinedweb
360
61.46
hi, folks I am using these plugins in my cordova project. com.ibm.mobile.cordova.ibmbluemix version com.ibm.mobile.cordova.ibmpush version 1.0.0-20141113-1411. I can handle push notifications without any problem. So, this is great, I can build my powerful app based on that. But there are side effects, the underscore is polluted by IBMBluemixHybrid.js. I am using [underscorejs][1] version1.7.0 in my project. But it only support the 1.6.0 API, the cause is IBMBluemixHybrid.js also exports "_". var umodule = (function (require, exports, module) {var umodule = (function (require, exports, module) { define('ibm/mobile/lib/IBMUnderscore', ['require','exports','module'],function (require, exports, module) { Hello Hain, Can you please provide details on the specific errors/issues you think are caused by IBMUnderscore? Jag hi, Jayg The js file of com.ibm.mobile.cordova.ibmbluemix exports some varibles such as IBMBluemix, IBMPush. But it also exports __ . If I also include the understorejs into my index.html, the __ is not avaible. Please just have a look at the IBMUnderscore definition in IBMBluemixHybrid.js. I think it is stupid to export underscore into global namespace. Hain Answer by Dave Cariello (2901) | Dec 01, 2014 at 03:41 PM Hain, There is no difference between IBMUnderscore and [underscore][2]. I can work to update our Plugin to export the latest underscore if you desire. Would that fix your issue? I just found this thread after experiencing issues with the globally exported _ variable. I am using lodash instead of underscore, and in this case IBMBluemix completely screws up lodash's export. So no, updating IBMunderscore will not resolve the issue. IBMBluemix should not export _ at all. 33 people are following this question. What causes this error when trying to initialize IBM push 2 Answers Hybrid mobile push with cordova and bluemix 1 Answer 401 not authorized when accessing REST service in Mobile Data 3 Answers Customize payload in Mobile Push service 1 Answer Push Bluelist Application : Google Play Services not available 1 Answer
https://developer.ibm.com/answers/questions/166120/ibmbluemixhybridjs-should-not-exports-into-global.html
CC-MAIN-2019-43
refinedweb
341
60.41
This. Over the past few months i have seen quite a few questions about converting SVG files to pure Java2D painting code. Since no such converter (to the best of my knowledge) exists (at least in the open-source world), this has been implemented in the latest drop ofFlamingo project (release candidate of version 1.1 code-named Briana is scheduled for October 30). How do you run it? Very simple - click on the WebStart link below, grant all permissions (it needs read access to read the SVG files and write access to create the Java2D-based classes), use the breadcrumb bar to navigate to a folder that contains SVG files, wait for them to appear (they'll be loaded asynchronously) and just start clicking on the icons. Clicking on an icon will create a Java class under the same folder with the same name (spaces and hyphens are replaced by the underscores). The class will have a single static paint method that gets a Graphics2Dobject and paints the icon. Note that before you call this method, you can set any AffineTransform on the Graphics2D that you pass to the method in order to scale, shear or rotate the painting.uses this only on 2 out of 203 icons. If you want to chip in and provide the support - you're welcome. Here is how you use the generated code: public class Test extends JFrame { public static class TestPanel extends JPanel { @Override public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D) g.create(); g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR); g2.translate(10, 10); address_book_new.paint(g2); g2.translate(50, 0); g2.transform(AffineTransform.getScaleInstance(2.0, 2.0)); internet_web_browser.paint(g2); g2.dispose(); } } public Test() { super("SVG samples"); this.setLayout(new BorderLayout()); this.add(new TestPanel(), BorderLayout.CENTER); this.setSize(180, 140); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); this.setLocationRelativeTo(null); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { new Test().setVisible(true); } }); } } The address_book_new and internet_web_browser are the transcoded classes. Note that before the second icon is painted, i apply the scaling transformation (to illustrate how it is done). The result is: Note how the second icon is scaled (relative to the first one). The size of the generated code is comparable to the size of the original SVG file. The first icon in the above example takes 20KB in SVG format and 22KB in Java2D code. The second icon is 50KB in SVG format and 55KB in Java2D code. The generated code itself contains a few comments to help in mapping the Java2D sections to the corresponding SVG sections, but in general you don't need to look at it at all (as you wouldn't look at the SVG contents). The implementation is quite straightforward. It uses the Apache Batiklibrary (that's why the WebStart is so big) to load the SVG file and create a renderer tree (called GVT renderer tree). The nodes in this tree can be mapped directly to Java2D code. The only tricky part is in chaining and restoring transformations on nested nodes. In addition, some Batik classes do not provide getters for the relevant properties - i had to use reflection to obtain the field values. As already mentioned, this tool has been successfully tested on the Tango iconset. Apart from two known issues (TextNode support and non-strictly increasing fractions), all the icons have been converted and displayed properly. If you're trying it on other SVG files and see UnsupportedOperationException, feel free to send me the relevant SVG file. Happy converting.
https://community.oracle.com/blogs/kirillcool/2006/10/20/svg-and-java-uis-part-6-transcoding-svg-pure-java2d-code
CC-MAIN-2017-43
refinedweb
598
55.24
In the previous two parts (Part 1 and Part 2), I introduced the ImpostorHttpModule as a way to test intranet applications that use role-based security without having to modify your group memberships. (I’ll assume that you know what I’m talking about. If not, go back and re-read the first two parts.) In the final part, let’s look at what exactly is going on behind the scenes with ImpostorHttpModule… The ImpostorHttpModule requires surprisingly little code to work its magic. Let’s think about exactly what we want to do. We want to intercept every HTTP request and substitute the list of roles defined for the incoming user in the ~/App_Data/Impostors.xml file instead of the user’s actual roles. (In an intranet scenario, a user’s roles are often just the local and domain groups to which the user belongs.) To do this, we need to implement a HttpModule. We’ll start with the simplest HttpModule, which we’ll call NopHttpModule for “No operation”. using System.Web; namespace JamesKovacs.Web.HttpModules { public class NopHttpModule : IHttpModule { public void Init(HttpApplication context) { } public void Dispose() { } } } To be a HttpModule, we simply need to implement IHttpModule and provide implementations for the two methods, Init() and Dispose(). We now have to register ourselves with the ASP.NET pipeline. We do this using the <httpModules> section of Web.config. <?xml version=”1.0″?> <configuration> <system.web> <httpModules> <add name=”NopHttpModule” type=”JamesKovacs.Web.HttpModules.NopHttpModule, JamesKovacs.Web.HttpModules”/> </httpModules> </system.web> </configuration> That’s it. Not terribly interesting because it does absolutely nothing. So let’s move on and implement the HelloWorldHttpModule, which simply returns “Hello, world!” no matter what you browse to, whether it exists or not! using System; using System.Web; namespace JamesKovacs.Web.HttpModules { public class HelloWorldHttpModule : IHttpModule { public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(context_BeginRequest); } void context_BeginRequest(object sender, EventArgs e) { HttpContext.Current.Response.Write(“<html><body><h1>Hello, World!</h1></body></html>”); HttpContext.Current.Response.End(); } public void Dispose() { } } } Try browsing to /Default.aspx, /Reports/Default.aspx, /ThisDoesNotExist.aspx, or even /ThisDoesNotExistEither.jpg. They all return “Hello, World!” (N.B. ASP.NET 1.X will return a 404 for the JPEG. ASP.NET 2.0 will return “Hello, World!” In 1.X, static files were served up directly by IIS without ASP.NET getting involved. Although this gives excellent performance for images, CSS, JavaScript files, etc., it also meant that those files were not protected by ASP.NET security. With ASP.NET 2.0, all unknown files types are handled by the System.Web.DefaultHttpHandler, which allows non-ASP.NET resources to be protected by ASP.NET security as well. See here for more information.) Now back to our regularly scheduled explanation… In our Init() method, we tell the HttpApplication which events we would like to be informed of. In this case, we grab the BeginRequest event, which is the first event of the ASP.NET pipeline. It occurs even before we determine if the URL is valid, hence our ability to serve up “missing content”. ASP.NET provides many hooks into its processing pipeline. Here is an excerpt from MSDN2 on the sequence of events that HttpApplication fires during processing: - BeginRequest - AuthenticateRequest - AuthorizeRequest - ResolveRequestCache - PostResolveRequestCache After the PostResolveRequestCache event and before the PostMapRequestHandler event, an IHttpHandler (a page or other handler corresponding to the request URL) is created. - PostMapRequestHandler - AcquireRequestState - PreRequestHandlerExecute The IHttpHandler is executed. - PostRequestHandlerExecute - ReleaseRequestState - PostReleaseRequestState After the PostReleaseRequestState event, response filters, if any, filter the output. - UpdateRequestCache - PostUpdateRequestCache - EndRequest The pipeline in ASP.NET 1.X had many, but not all, of these events. ASP.NET 2.0 definitely gives you much more flexibility in plugging into the execution pipeline. I’ll leave it as an exercise to the reader to investigate why you might want to capture each of the events. Armed with this information, you can probably figure out which event we want to hook in the ImpostorHttpModule. Let’s walk through the thought process anyway… We are trying to substitute the actual user’s roles/groups for one that we’ve defined in the ~/App_Data/Impostors.xml file. To do this we need to know the user. So we need to execute after the user has been authenticated. We need to execute (and substitute the groups/roles) before any authorization decisions are made otherwise you might get inconsistent behaviour. For instance, authorization may take place against your real groups/roles and succeed, but then a PrincipalPermission demand for the same group/roles might fail because the new groups/roles have been substituted. So which event fits the bill? PostAuthenticateRequest is the one we’re after. In this event, we know the user, which was determined in AuthenticateRequest, but authorization has not been performed yet as it occurs in AuthorizeRequest. public void Init(HttpApplication context) { context.PostAuthenticateRequest += new EventHandler(context_PostAuthenticateRequest); } We know which event we want to hook. Now what to do once we hook it. In .NET, we have Identities and Principals. An Identity object specifies who has been authenticated, but does not indicate membership in groups/roles. A Principal object encapsulates the groups/roles and the identity. So what we want to do is construct a new Principal based on the authenticated Identity and populate with the groups/roles that we read in from ~/App_Data/Impostors.xml. As it so happens, the built-in GenericPrinicpal fits the bill quite nicely. It takes a IIdentity object and a list of roles (in the form of an array of strings). N.B. It doesn’t matter if the Identity is a WindowsIdentity, a FormsIdentity, a GenericIdentity, or any other. All that matters is that the Identity implements the IIdentity interface. This makes the group/role substitution code work equally well regardless of authentication technology. IIdentity identity = HttpContext.Current.User.Identity; string[] roles = lookUpRoleListFromXmlFile(identity); // pseudo-code IPrincipal userWithRoles = new GenericPrincipal(identity, roles); Armed with userWithRoles, we just need to patch it into the appropriate places: HttpContext.Current.User = userWithRoles; Thread.CurrentPrincipal = userWithRoles; We have discarded the original principal (but kept the original identity) and patched in our custom one. That’s about it. Any authorization requests are evaluated against the new GenericPrincipal and hence the group/role list that we substituted. An additional feature I would like to point out is caching of the users/roles as you probably don’t want to parse a XML file on every request. The users/roles list will auto-refresh if the underlying ~/App_Data/Impostors.xml file changes. Let’s see how this works. We store a Dictionary<string, string[]> in the ASP.NET Cache, which contains users versus roles as parsed from the ~/App_Data/Impostors.xml file. If it doesn’t exist in the Cache, we parse the XML file and insert it into the Cache along with a CacheDependency like this: HttpContext.Current.Cache.Insert(“ImpostorCache”, impostors, new CacheDependency(pathToImpostorsFile)); When the underlying file changes, the entry is flushed from the cache. The next time the code runs, the cache is re-populated with the contents of the updated ~/App_Data/Impostors.xml. One last point… The ImpostorHttpModule is meant for development/testing purposes, which means that I haven’t optimized it for performance, but for ease of implementation and comprehension. So there you have it – the ImpostorHttpModule. Hopefully you have a better appreciation for the power and extensibility built into ASP.NET as well as some cool ideas of what else you can implement using HttpModules. Full source code can be found here.
http://jameskovacs.com/2006/05/20/pulling-back-the-covers-on-impostorhttpmodule/
CC-MAIN-2017-13
refinedweb
1,255
50.43
#include <KNewPasswordWidget> Detailed Description A password input widget. This widget allows the user to enter a new password. The password has to be entered twice to check if the passwords match. A hint about the strength of the entered password is also shown. In order to embed this widget in your custom password dialog, you may want to connect to the passwordStatus() signal. This way you can e.g. disable the OK button if the passwords don't match, warn the user if the password is too weak, and so on. Usage Example Setup Update your custom dialog Accept your custom dialog - Since - 5.16 Definition at line 73 of file knewpasswordwidget.h. Member Enumeration Documentation Status of the password being typed in the widget. Definition at line 95 of file knewpasswordwidget.h. Property Documentation - Since - 5.31 Definition at line 87 of file knewpasswordwidget.h. Constructor & Destructor Documentation Constructs a password widget. - Parameters - Definition at line 197 of file knewpasswordwidget.cpp. Destructs the password widget. Definition at line 203 of file knewpasswordwidget.cpp. Member Function Documentation Allow empty passwords? - Returns - true if minimumPasswordLength() == 0 The color used as warning for the verification password field's background. Whether the password strength meter is visible. Definition at line 243 of file knewpasswordwidget.cpp. Whether the visibility trailing action in the line edit is visible. - Since - 5.31 Definition at line 248 of file knewpasswordwidget.cpp. Maximum acceptable password length. Minimum acceptable password length. Returns the password entered. - Note - Only returns meaningful data when passwordStatus is either WeakPassword or StrongPassword. Definition at line 253 of file knewpasswordwidget.cpp. The current status of the password in the widget. Notify about the current status of the password being typed. Password strength level below which a warning is given. Password length that is expected to be reasonably safe. Allow empty passwords? - Default: true. same as setMinimumPasswordLength( allowed ? 0 : 1 ) Definition at line 258 of file knewpasswordwidget.cpp. When the verification password does not match, the background color of the verification field is set to color. As soon as the passwords match, the original color of the verification field is restored. Definition at line 292 of file knewpasswordwidget.cpp. Maximum acceptable password length. - Parameters - Definition at line 270 of file knewpasswordwidget.cpp. Minimum acceptable password length. Default: 0 - Parameters - Definition at line 264 of file knewpasswordwidget.cpp. Whether to show the password strength meter (label and progress bar). Default is true. Definition at line 298 of file knewpasswordwidget.cpp. Set the password strength level below which a warning is given The value is guaranteed to be in the range from 0 to 99. Empty passwords score 0; non-empty passwords score up to 100, depending on their length and whether they contain numbers, mixed case letters and punctuation. Default: 1 - warn if the password has no discernable strength whatsoever - Parameters - Definition at line 287 of file knewpasswordwidget.cpp. Password length that is expected to be reasonably safe. The value is guaranteed to be in the range from 1 to maximumPasswordLength(). Used to compute the strength level Default: 8 - the standard UNIX password length - Parameters - Definition at line 282 of file knewpasswordwidget.cpp. Whether to show the visibility trailing action in the line edit. Default is true. This can be used to honor the lineedit_reveal_password kiosk key, for example: - Since - 5.31 Definition at line 304 of file knewpasswordwidget.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2019 The KDE developers. Generated on Thu Apr 18 2019 02:40:46 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kwidgetsaddons/html/classKNewPasswordWidget.html
CC-MAIN-2019-30
refinedweb
609
52.87
Forum:How Did You Get to Uncyclopedia? From Uncyclopedia, the content-free encyclopedia This page has been given a permanent lease on life. Add new entries to Uncyclopedia:Customs Station. We all got here, one way or another. But telling HOW we got here could add some small window into how to better get more pathetic losers users to our site. It might also let us know how we would go about promoting Uncyclopedia, if that's at all necessary, which it probably isn't. Ok, well, since this is my horrible idea, I'll go first.... flyingfeline Google search for AAAAAAAAAAAA. Instant boredom cure. Spammed quotes, got banned, whined on IRC, got unbanned, made some stupid mistakes, had a tantrum and finally solved the problem through hypnotism. Hey, it works for giving up cigarettes. -- 16:07, 28 November 2006 (UTC) The anarch I found this site while checking, which is a science blog dealing with genetics/biology/evolution et cetera. A lot of this stuff is crap but ive found some gems as well, and I plan to write (more) articles myself when I have enough time. --The anarch 11:38, 28 November 2006 (UTC) El PACO I got here from none other than Google Image. When I'm REALLY bored I'll search for funny pictures. I can across a particular picture and clicked on it. The picture directed me to here, Crap Cyclopedia. --Paco 05:18, 27 November 2006 (UTC) MoosePie Well, I stumbled onto Uncyclopedia by accident really (Like two years ago I think). I was bored one day, and thanks to the power the the Internet, I found this site, and I've been using it as a tool to escape from boredom ever since. It wasn't until recently that I decided to sign up for it, so I could add some of my infinite wisdom (yeah right) to this fine place. --MoosePie 23:07, 31 January 2007 (UTC) Bradaphraser I got here from a link on Blue's News to the John Carmack article. It wasn't that great, but I started searching and found War on Terra, among other hilarious things, which kinda blew me away. Circa early Summer of 2005.--<< >> 18:08, 24 August 2006 (UTC) Shandon I was contributing to (generally) the humanities desk at wikipedia, and somebody noted uncyclopedia in a response. I checked it out, thought it was kind of funny, and wrote a couple of articles anonymously that were immediately huffed. I began learning definitions (what's a noob? what's burninated mean?), and had a conversation with some users and admins (notably Sbluen and Tompkins) as I/we created a worthwhile article...Vincent Van Gogh's Things To Do on a Rainy Afternoon.--Shandon 18:37, 24 August 2006 (UTC) Composure1 I've heard of Uncyclopedia in some news articles early this year I think, but didn't really visit it much at the time. Then one day (in March I think) fark.com linked to the UnNews article about Microsoft copyrighting the letter "e". I found that hilarious and checked back for other UnNews stories frequently, finally starting to write my own in April. --Composure1 19:42, 24 August 2006 (UTC) Mhaille I honestly have no idea anymore. It was May of last year, and I remember laughing myself sick at Kitten Huffing (which was a much shorter, funnier article at the time), but I can't actually remember how I got here. I can't seem to find my way out either.... -- Sir Mhaille (talk to me) Sir Cornbread Funny story, actually. I made a bunch of hilarious bullshit articles on Wikipedia, and the admin who banned me suggested I come here, where articles like that would be accepted. The rest is history. -- Sir C Holla | CUN 20:00, 24 August 2006 (UTC) Rcmurphy I found Uncyclopedia around early March (2005) via a link on the Straight Dope message board, which I only read sporadically these days. My first "major" project was working on the original Zork with Algorithm. Back in those days you could check every single edit on Recent Changes easily - sometimes there were only a couple an hour. The front page wasn't protected and even after we instituted a "featured article" system anyone could change the highlighted article at will. Good times... —rc (t) 20:07, 24 August 2006 (UTC) Rataube December, 2005. I was starting to get interested in Wikipedia, about to make my first edits when an asshole a good friend of mine told me "hey, you would probably like Uncyclopedia better". I got so strongly seduced by its hypnotizing powers that I started a "Vote me for Noob of the Month Campaign". When I realized I had no chance I quit the campaign and gave my support to user:Suresh. But user:MoneySign beat us both. Damn you Moneysign! If you want to led me away you'll have to try:17, 24 August 2006 (UTC) Codeine I found a link in May last year on a blog. I can't remember which blog, nor for that matter why on earth I was wasting my valuable time reading a blog, but it was love at first sight. With Uncyclopedia, I mean. Not the blog. Ugh. Horrid things. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 23:57, 24 August 2006 (UTC) Tompkins A friend of mine's little brother had stumbled upon it somehow, I'm not exactly sure how he found it, although I assume it was on another humor site of some sort. The first article I ever read was "500 foot Jesus," I think the second was "Voice Phones." Good:03, 25 August 2006 (UTC) Rangeley Last June I searched for some random thing on Google and a result from a mysterious "Uncyclopedia" showed up. I investigated the site, and found the idea of it simply stunning. I started reading a bunch of history related articles, and decided to help improve them a bit and I have been around here since:09, 25 August 2006 (UTC) Imrealized My shrink suggested I come here to work out some issues. I was also sleeping with her at the time, so it was doubly imperative that I listen. She told me it would be a good way to get past my "delusions of grandeur", having strangers mercilessly lambast my writing and tell me that I suck. Unfortunately that never happened, as I was immediately embraced with the wide open arms of an inordinate amount of users and several sockpuppets. Of course, I was hooked. Actually, it was a brief stint on wikipedia which brought me here, but that one was already taken. -- Imrealized ...hmm? 00:15, 25 August 2006 (UTC) Savethemooses Somebody posted about it on, of all things, a Kansas City Chiefs (American Football team) Message Board. This was March of 2005. I joined and immediately became the best thing ever to happen to Uncyclopedia, and remain as thus to this day. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 00:35, 25 August 2006 (UTC) Insertwackynamehere Sometime in early 2005, probably January or so when I first heard, but March when I first really got into it, I heard about Uncyclopedia. It was a fun change from Kingdom of Loathing, the other website I wasted my time in computer class with. Then I got hooked on Uncyc more and more and yeah. Now here I 01:20, 25 August 2006 (UTC) Procopius I stumbled upon Wikipedia's Bad Jokes and Other Deleted Nonsense last June. One day I noticed that someone pasted a very funny Calvin and Hobbes vandalism to Wikipedia's entry on the comic strip. Someone immediately said was from Uncyclopedia. Intrigued, I clicked over, saw Ernest Hemingway's Cookery Corner, and got hooked. First thing I ever did (as an IP) was work on American Civil War.--Procopius 03:06, 25 August 2006 (UTC) Witt E, Sitting during free period in school, searching wikipedia looking for anything funny to relieve the monotony of the standard schoolday. I went from joke, to humor, to pun and eventually made my way to parody. After begrudgingly reading through parody (because I had not read it and I like parody) I spotted a sentence at the very bottom about Wikipedia has it's own humorous parody, Uncyclopedia. After reading about half the page and stifling my laughter I found it, the link to Uncycs main page. Now here I am, not standing out in my work but doing my part with the time I have.--Wit (tawk) 03:52, 25 August 2006 (UTC) sbluen I was looking at the Bad Jokes and Other Deleted Nonsense on March and I wanted more. So I looked at the external links and found a whole wiki for it. --Sbluen 04:51, 25 August 2006 (UTC) Dawg I don't recall exactly, but it involved a group of ubergeek hackers I used to hang out with on SILC. It also had something to do with Wikipedia. A few things came together at the right time, coupled with Splarka's hospitality and training that made me into the humourless admin you see today. I think they wanted me to stick around and keep up my cruft crushing activities. It's a wonderful site. » Brig Sir Dawg | t | v | c » 05:10, 25 August 2006 (UTC) Hinoa4 At the Kingdom of Loathing's forums, there's a subforum called "Random Ramblings" which I was a semiregular of of which I was a semiregular. It's roughly equivalent to an off-topic forum, and was home to the funnier of the people there. Someone posted the link to Uncyc's article on us, which was at the time a rather pathetic stub, and a bunch of us from the forum decided to improve it. Operation Degrassi Knoll was successful, and most left. I, however, stayed, and started to look around the rest of the site. I liked what I saw, and stuck around. — Major Sir Hinoa (Plead) (KUN) 05:50, 25 August 2006 (UTC) - You play KoL? I just found out about/got hooked on that a few days ago... --User:Nintendorulez 13:33, 4 November 2006 (UTC) Mowgli I heard about Uncyclopedia in a little thread in a south asian forum where i used to participate. this was about 3/4 months ago. i checked out uncyc. & found it funny: i cut&pasted&linked from uncyc. to that forum but no one laughed. my posts were ignored. i felt guilty and i ignored uncyc. too. but that forum was prudish and it was only a matter of time before i'd collide headlong with the admins. one dark and stormy night one day i cursed and i spammed and i promptly got banned. i settled down here. -- mowgli 06:03, 25 August 2006 (UTC) Ghelæ I was on Wikipedia, middle November 2005, looking at the "list of religions" (for whatever reason, I'm not sure). Then I came across the "parody religion" article, from which I found "parody", from which I found "Uncyclopedia", to which I came to the Uncyc Main Page. Then later I found BJAODON and got to Uncyc from there too. ~ 08:04, 25 August 2006 (UTC) FreeMorpheme I was contributing to Wikipedia, but I was rapidly getting sick of their pious attitude. They were having a debate about whether to keep their article on the perpetual motion cat/toast device, and somebody said it had been sporked from Uncyclopedia. I had a little glimpse over here, and was immediately hooked like a trout. 09:11, 25 August 2006 (UTC) Hardwick Fundlebuggy A regular reader of my blog [[1]] told me that I was made for this place. They also knew me from the H2G2 [[2]] site where I used to contribute under the name of "Dr. Deckchair Funderlik" (not my real name). I was actually really nervous about contributing anything here, and as a consequence, I didn't. Until about 10 months later, when the first "Poo Lit" competition was run, so I entered with Bouncy Castle and lost. That made me laugh and laugh. I waited a week, gripping my monitor hard and licking my keyboard, and then I self-nommed the Castle and it was a big hit. The rest is history, with a dash of sociology and a huge stumpy lump of psychology thrown in. Next stop is writing a book full of large articles and trying to shift it on the black market. And that's what it was. That ... and the ceaseless voices of the dark ones. --Sir Hardwick Fundlebuggy (Bleat) 10:17, 25 August 2006 (UTC) Hindleyite I was just killing some time browsing through Wikipedia, which I had just been using to do some research for an essay on something, I think it was about teapots. Probably got side tracked. Anyway, I was looking through some of the Bad Jokes and Deleted stuff for a laugh and noticed the Uncyclopedia link on there. When I clicked it, I actually thought it was an internal link because Uncyclopedia looked like Wikipedia so much. At first, I didn't do that much, just making the odd addition or rewrite here and there until the first Poo Lit Surprise which actually got me writing some proper stuff for Uncyc. -- Hindleyite 10:44, 25 August 2006 (UTC) Todd Lyons I think I was googling (back when Google still loved us... snif) for something funny to read. I ended up here. I'm not sure why. Generally I've gotten good results with Google. ~ T. (talk) 11:00, 25 August 2006 (UTC) Ap0ll0 Decided to click a link on slashdot about an external combustion engine. I guess I made a good choice, as the article was funny and I'm still here. --Ap0ll0 23:42, 25 August 2006 (UTC) Jocke Pirat One day I was browsing off topic at a QBasic/FreeBASIC forum when I stumbled upon a topic noting Uncyclopedia. I've been here ever since. --- Jaques Pirat IS NOT FRENCH! TP, F@H 23:55, 25 August 2006 (UTC) Kakun It's Hebrew Wikipedia's bureaucrat Mr. Shay fault that I'm here. If he acted a bit more like his name he would be too ashamed to let the wp'ns know about Uncyc. I was just goofing around through the he.wp talk pages and saw this link he posted. I thought the I burning your dog article was genius and started to Hebrew spam the recent changes until they let me have an uncyc of my own. This place is too funny. -- VFP Eincyc (talk) 00:05, 26 August 2006 (UTC) emc I pressed the "random page" button on Wikipedia in search of a page in need of touch-ups (at this time I was a Wikipedian) and the page happened to be Wikipedia's Uncyclopedia article. I thought, "This looks interesting". The rest is history. Now I spend more time on Uncyclopedia, and pretty much none of my time on my Wikipedia account of the same name. --Hotadmin4u69 [TALK] 00:10, 26 August 2006 (UTC) One-eyed Jack Was on B3ta's forum back in July '05 and somebody mentioned Uncyc; it seemed like a good place for someone with compulsive writing disease. I had "corrected" a few articles on Wikipedia but was/am really not interested in contributing much (Christ knows I have to write enough serious, clear, informative piggymuffins to earn my bi-weekly crust.)----OEJ 00:37, 26 August 2006 (UTC) carlb I've been on Wikipedia (en, fr) for a little under two years now, playing mostly a bit part at best, but was a long-time fan of BJAODN back in the days when we didn't have much of anywhere else to put our nonsense. (Mind you, it's gone downhill since then - but that just means it has served its purpose and we now have other outlets to handle any sudden temptations to lease an entire server in the biggest datacentre in Vancouver and use it to dump brazillions of pages of patented nonsense™ on every nation from Taiwan to Finland to Brazil). It's in some ways unfortunate that en.wp has become so political these days, with the endless agendas and revert wars. Wikipédia isn't like that in other languages, where the community is smaller - perhaps some things scale poorly on the non-technical side. Still, they have gathered an impressive amount of info over the years and I do still turn up there occasionally, even if it's just long enough to get a couple more pieces of useless talk-page spam from some robot with an attitude. --Carlb 00:50, 26 August 2006 (UTC) Famine Link in either a sig or an article comment on slashdot.org sometime mid-Feb 05. Before we got slashdotted the first few times. Looks like I finally took the time to log in and make stupid edits around the end of March, 05. Sir Famine, Gun ♣ Petition » 01:09, 26 August 2006 (UTC) Uncyclon That osirisX dude told me about it. Modusoperandi In October 2005 I was looking for dirty pictures and stumbled somehow on Uncyc, so while here I typed "Canada" into the search box, because hosers make the best naughty photos. Finding none, I wasted a stupid amount of time adding various non-pron text to that page, eventually spilling over into sub-pages. It wasn't until much later that I made Canadians and finally added the porn reference that was sorely missing. Now you guys won't let me leave. I can't even check my email. Please, just let me go!--Sir Modusoperandi Boinc! 21:13, 27 August 2006 (UTC) Or, more likely, I followed a link from Wired or Slashdot. More likely, but my original statement is true too.--Sir Modusoperandi Boinc! 03:17, 2 September 2006 (UTC) Colonel Swordman I was doing a Google search on a serious subject last year and all of the sudden I noticed there was something wrong in one of those results. "Hmmm, ooook... What? Wait a minute..." So, who said the Internet was good for homework assignments, huh? (I was a 3rd year University student back then by the way.) -- The Colonel (talk) 04:23, 26 August 2006 (UTC) Keitei Google. March 2005. I don't really remember much... but it was at mrpalmguru then, and hmm. I remember I liked Sauron, Lord of the Dance, and it being an encyclopedia of misinformation and lies, sort of like Congress or Parliament. I think Star Trek was a cooking magazine at that point. --KATIE!! 05:21, 26 August 2006 (UTC) Zombiebaron My Dad acctaully told me about it. He found a link to it somewhere, and since I am a humor buff he figured I would enjoy it. The first article I read was "Richard Nixon", and I've been making crap articles since around September 2005. --Brigadier General Sir Zombiebaron 15:23, 26 August 2006 (UTC) DiZ I got here from my mother's vagina. No, seriously, I know it was following a link from Wikipedia's page on Uncyclopedia, but how I got there is a mystery. It's quite possible that my guardian angel led me to Uncyclopedia, or just as likely that I followed a link from Wikipedia's own page on itself. So, basically, I just confirmed what nearly everyone I have met here believes: my coming to uncyclopedia was a horrible mistake :-( <DiZ goes to sulk in a corner> -:55, 26 August 2006 (UTC) Nerd42 I got here only after I figured out that there is a difference between Wikipedia and Wiccapedia. --Nerd42Talk 18:10, 26 August 2006 (UTC) Crazyswordsman So, one day I was taking a stroll through the park, and discovered that Wikis were too serious. So I decided to go on a search for some REAL humor, and I discovered the holy Uncyclope... Wait, you're not buying it? Alright, let's start over: A giant meteor was heading towards earth. The only way to stop it was to find and help with an encyclopedia so funny the Meteor would cru... Wait, that doesn't work either? All right, here's the real scoop: I was searching for a subject in Wikia, and found it in Uncyclopedia. Crazyswordsman 02:08, 27 August 2006 (UTC) Smiddle I was depressed and screamed "AAAAAAAAAAAAAAAAAAA!!!11" all the day long,. I searched for that in Google, and found Uncyclopedia. Also aaaaaaaaaaa %D0%85m%D1%96ddl%D0%B5 / T - C 13:27, 27 August 2006 (UTC) Insineratehymn I can't really remember how I got here. Hell, I can't even remember what I had for breakfast! The only part I can remember is that I was searching for funny stuff on Google and ended up here. -- 16:50, 27 August 2006 (UTC) Edit: Sorry, forgot to sign. Mr Mega The word of you guys got around to a forum/site I work at for Mutlimedia and Information Writing. I would have completely ignored Uncyclopedia if it wasn't for the HTBFANJS guide. I found that thing to be really cool.--Mr Mega 20:56, 27 August 2006 (UTC) Flammable I remember the day I got here. Chron, then a mere human by our standards decided to do stuff, and then i was here, and there were lasers and explosions and zoom! Kapow. Suddenly, it was Spain! I called my local representative and shazam! I left the dog at the curb and went into the lake. Then I was here. And then I wasn't! All at once, peaches, then fish. Where was the toaster? Tot he left! Zam! Pow! Whoah! And then we were all "Aaaaah!" and then i was all Whoah! And we were here! Chron's a friend of mine. I found this place on his page and started editing. He said there was a vacancy for curmudgeonly admins. I happily obliged.-- 21:40, 27 August 2006 (UTC) Kaizer the Bjorn If I recall correctly, one of my friends who frequents slashdot linked me here about 1 1/2 years ago. Since then I had some good ideas for articles and some better ideas for images. So I made an account. Ever since then, my outlook on the world has changed. If you too are a pessimistic basement-dweller, and you feel like your life needs a change for the better, call now. You have nothing to lose. --:24, 28 August 2006 (UTC) BarryC It was a cold, wet wintery night in december 2005 that I had the (at least what seemed at the time) great idea of mixing my ritalin supppliment with some weird mushrooms I found in the back of my garden the weekend before. Maniacally, I concocted this potion and decided to have a slurp. Alas, instead of having some kind of rush, it merely created a sort of Jekyll and Hyde moment. As I started metamorphosing into a giant cockroach and the floor turned to cheese - I noticed something. Something was writing on the wall - the letters were twisted and contorted, but I could make out a word. One word: Uncyclopedia. I awoke, sweating, on my bed. It was just a dream - no cream of ritalin and mushroom soup at all. That was, until I noticed the potion spilt on the floor. Fortunately for the rest of the people at Uncyclopedia, there have been no dangerous side effects; apart from my inability to stop pathologically lieing and spell to correctly. -. 14:04, 28 August 2006 (UTC) Hawthorn Peebles I can barely admit it, but they say its the first step to being cured. I was a Wikipedian Bastard Operator From Hell. You know all the urban legends about BOFHs that you hear about on the net. Well, I'm here to tell ya its all true. I did it all, conned passwords from the n00bs, stole candy from babies, and tripped little ole ladies. And then one day, they closed the doors to Wikipedia forever. Yeah, I deserved it, but it still hurt just the same. It hurt, not because I liked any of those bastards, but because I was addicted to that substance called Wiki. Shit, I even free based it a few times. And then it was all gone. Days later, right in the middle of a dream, a figure like Chronarion, or Todd Lyons came to me, alot like Jesus during a heat wave, beckoning me forth and hither. I remember in my dream there was this little thing in the corner laughing, and they called it a Splaka. The only thing I can remember saying in my dream was, "What the FUCK?!?" And then swish, it was all over. The next morning, I made about 10 pounds of mashed potatoes for breakfast and created a mashed potato mountain that looked exactly like Castle Rock. I have no idea why. How did I get to Uncyclopedia? SHIT! I thought this was all a nightmare... nooooooooooooooooooooooooooooooooooooooo -- 22:44, 28 August 2006 (UTC) - P.S. Actually, all that was a total lie,... yeah I know pretty believable. The one and only real reason I found my way to here was from an insult posted in a Wikipedia discussion page. Yeah, I'm one of those people that clicks every damn link on a page. Some know-it-all-uber-wikipedian was insulting somebody's whatever and it should be moved to the crap infested Wiki called Uncyclopedia. So, click I followed...-- 03:23, 14 September 2006 (UTC) Rev_zim I was "researching" Flying Spagetti Monsterism, and was somehow linked to the Uncyc entry.:56, 29 August 2006 (UTC) Mandaliet Hi, I'm Mandaliet. Like other people, I was looking at Wikipedia's Bad Jokes and Other Deleted Nonsense about a year ago, and I thought it was really funny. I also thought that it would be neat if there were a wiki that existed solely for this kind of stuff. Then a little while later I saw a link to Uncyclopedia in a Wikipedian's user page. I clicked on the link and liked what I saw, especially after thinking that something like this should exist. (I think I may have even thought of the name Uncyclopedia as something to call that kind of thing. Ah, originality.) In fact, I thought most of it was hilarious. This wore off after a few weeks. Then I started writing a few articles, and one was featured. At this point I had started talking to the admins in IRC, which gave me a different perspective on the whole thing. This caused me to stop writing articles for a while, because my audience had changed; instead of just being concerned about what the average reader would think, I added the admins (many of whom now knew me) to that list, so the way I thought about writing articles was different. I no longer felt I could write just anything, but I had to write something good, especially since one of my articles had been featured and I had to live up to that again, although I haven't had any more features. After a while I sort of lost interest, but then I came back, and now I'm back, and I even write things sometimes. The rest, as they say, is edit. Orion Blastar I created Orion Blastar to be one of my alter-ego Internet characters. A Space Pirate from 4096 AD in the Traveller RPG who travelled back in time to Earth. I made a blog and I was posting funny stuff on various forums etc. I saw an Uncyclopedia article on Slashdot and decided to join after posting as an anonymous IP for a while. I am still trying to be funny elsewhere, but Uncyclopedia seems to be one of the few places that actual get my jokes, and don't find me annoying or a troll or whatever. I had been on Wikipedia before that and Wikibooks and some other Wikis. I really got serious with Wiki in Uncyclopedia though. I even did an alt.suicide.holiday type cyber death in November 2004 when a lot of people online harassed me about my jokes etc on various forums and Internet sites. I found that I can let loose my humor on Uncyclopedia, and the worse that can happen is that my articles get deleted. I learned how to get over that, and not take it too personally. :) --2nd_Lt Orion Blastar (talk) 15:48, 1 September 2006 (UTC) Olipro Slashdot. PantsMacKenzie Uhh, Chron told me about this crazy site he made. So, I stopped by. I read all 15 articles and started adding more. Loogie Taxi. David Gerard Someone mentioned it on IRC #wikipedia, I think. I was gainlessly unemployed at the time. It was a REVELATION, sir, a REVELATION. I think this was way back when it was on mrpalmguru. There were only 4000 or so articles, and they SUCKED. The feature that day was Air guitar and I laughed and laughed. I've been hooked ever since. And, you know, it's a wonderful respite from fucking suicidally braindamaged Wikimedia politics to come to a wonderfully well-run fascist adhocracy like Uncyclopedia. There was something I was going to add here but I've forgotten it - David Gerard 14:48, 4 September 2006 (UTC) Severian I somehow was google searcing for something in April of 2006 and I found Kitten Huffing. I was getting bored with school and decided to make a stupid article about my school. I think I need to huff a cat...I havn't done it in so long, and I miss it....the addiction still burns.--)}" > 10:50, 6 September 2006 (UTC) gwax A friend pointed me here at a time in my life when I had a lot of free time. --Sir gwax (talk) 14:16, 6 September 2006 (UTC) Hrodulf, a short retrospective (abridg'd) I started as a registered user on April 1, 2006, under a different username. Prior to that, I had made a small number of changes and articles as an ip. This is the earliest edit I could find by me as an ip that still survives on the site. I simply don't remember how I learned about Uncyclopedia. I was interested in Wikipedia first, so it's possible I learned about it through Wikipedia. Early on as a raw n00b (I still consider myself somewhat n00b-ish) I learned the Uncyc gospel of "write a short piece of crap here that isn't funny to anyone but you, and it will get huffed within a minute." In response to this, I ended up getting into a minor argument about the use of NRV, which eventually (due to more discussions of this nature) led to Uncyclopedia:How To Get Started Editing, arguably my first contribution to the site that was actually worth something to anybody (although it's much improved now from the original version, by both myself and others). That's about it. I created an advice column that's sort of floundering now, need to get back into running that the right way (work has been riding my ass lately big time). I am pleased with the unnews material I've managed to come up with; I always liked the onion, and it's fun that here I get the opportunity to write that kind of satirical news-type material that reflects my opinions on the current lousy world situation in a humorous way. All and all, I'm glad I ended up here. --Hrodulf 02:19, 20 September 2006 (UTC) Armando Perentie Like a Queensland cop, it beats the shit out of me. It wasn't all that long ago, but I'll be buggered if I can remember how I stumbled here. Glad I did, though. -- 12:41, 2 November 2006 (UTC) DWIII Purely by chance 1.5 years ago through a hyperlink to You have two cows, from a linkboard that specializes in off-the-wall stuff. Since I am a great admirer (though not a contributor yet) of Wikipedia, and had previously written short parody/satire pieces on occasion, I instantly fell in lust with the concept(!). I consider meself extremely fortunate to have gotten in this close to the ground floor, and hope to continue contributing to this whopping edifice of literary mayhem. --DW III 06:06, 4 November 2006 (UTC) SonicChao Through the Wikipedia article on parody, then after clicking a random link Uncyclopedia, and eventually turning up:17, 4 November 2006 (UTC) Nintendorulez I honestly don't remember. BJAODN, maybe? I dunno... --User:Nintendorulez 13:37, 4 November 2006 (UTC) Mordillo I saw an article in [3] about the Hebrew version of Uncync, checked it, found it to be horrible, and came to check the original, got my first article huffed by Famine, you know,the, 4 November 2006 (UTC) Sikon Saw a link on Wikipedia, somewhere around BJAODN. - User:Guest/sig 18:42, 4 November 2006 (UTC) alchemist I was searching wikipedia yesterday for "toot flute". No articles! As I was slightly away with the faeries at the time I thought I'd see if "ring stinger" (a very hot curry) and "chocolate starfish" (Anat: The bit that stings.) were included. Ha! They call it an encyclopaedia (Brit: we just like our a's - probably due to the Public School System.) Anyway, just started hitting the StumbleUpon button in firefox [4] and found you! PS Do try the link esp. if you have no life. PPS Constructive critisism is always welcome - just not by me (I spell it my way and you, well, are wrong :-).--Alchemist 21:57, 4 November 2006 (UTC) Weatherman1126I was cleaning up some templates on the vandalism templates, and I happened to come across this now deleted template, and decided to check this place out. It's gneomI I started with Wikipedia in July this year. Joined some projects, edits, talked bullshit constructive stuff, yadda yadda yadda. But then I got bored because it is a boring and too serious thing. And the jokes are lame too. So I migrated here. It's gneomI 06:44, 24 November 2006 (UTC) KWild Someone posted a link to Uncyclopedia from the Net Trash thread on the DAF. I think it was an article on the French... Tuck99 Through the wikipedia page Gustav I was Googlin Thomas Hobbes and found this. Screwed around on here for a little bit and then just decided to make an account. I still have no idea what I'm doing here. love, gustav talk at menope 20:18, 30 November 2006 (UTC) Tooltroll Um. . . April or May-ish 2006. I just sort of wandered in one day, looking for a public washroom, and nobody ever bothered to kick me out after I pissed on a bunch of things.User:Tooltroll/sig 11:41, 1 December 2006 (UTC) Braydie What's Uncyclopedia? --—Braydie 12:33, 1 December 2006 (UTC) kjhf I seem to be way behind... Nov 2006, I used Wikipedia so much, I thought I'd give something back. I told my friends and they said "Wikipedia Sucks, Uncyclopedia is much better" So I went there. I searched "chav" becuase they were the most annoying thing in my life then and I noticed Uncyc. was on my side. I laughed for about 10minutes and have been here since. User:Kjhf/sig 13:41, 21 January 2007 (UTC) Jack Mort I checked my edit history to see when i arrived, turns out i've been here for a whole year. i think a friend sent me a link to an uncyc article, but i can't for the life of me remember which one -:37, 21 January 2007 (UTC) User:Can of Worms I forgot how I got to here. I only remember that I knew of Wikipedia before going to Uncyclopedia.Can of Worms 20:55, 21 January 2007 (UTC) Premier Tom Mayfair I was googlin' Camera and came across Camera. Bet you're sorry 'bout keepin' that article. 21:04, 21 January 2007 (UTC) User:DerangedDingo I was using stumble a few months ago and I got to this page that said that, in order to divert vandalism in other articles, everyone should just vandalize the Wikipedia article on chickens. A few days later I wanted to visit it again and instead of just going through my history I googled Funny Chickens Wikipedia. Brought me to uncyclopedia. Been a devoted visitor since, and just a week ago got a membership. DerangedDingo 04:28, 28 January 2007 (UTC) Do not post here. Post here. General Comments / Criticisms / Bribes / Prayers / Praises / etc (place your thing about how you got here ABOVE this line, comments BELOW) So many of our good contributors came from wikipedia. This contradicts the general belief that wikipedia only send us stupid vandals who think this is everythingoespedia. So I think we should conduct subtle campaigns to drive more suitable wikipedians to our site. Don't get me wrong, I'm not talking about an organized campaign, but rather droping coments on wikipedian talk pages like: "what's wrong with you, do you think you are at uncyclopedia? That kind of content is not accepted here", sporadically, when you come to an appropiate uncyclopedian candidate. Today I was passing by Wikipedia:en:Talk:Tlön, Uqbar, Orbis Tertius, a talk page about an article on a tale by Borges. I saw a comment by an annonymous I.P demanding the article to start with lies in the style of Borges, I droped a comment of the kind, and then I realized we should do this more:51, 25 August 2006 (UTC) - How subtle are we talking, because lacking enough subtlety could get someone banned. I think it's fine to drop a link on a talk page every once in awhile, but it's a bad idea to do much more than that. After all, we don't want a bad rep. Actually, I'd recommend more uncyc users that use IRC to hang out in #wikipedia or one of the associated channels. Their are alot of funny people in there, and I'm sure it wouldn't take much to convince them to take a look. Don't show them any of the bad articles:20, 25 August 2006 (UTC) - Yeh, only every once in a while. But we have to aware so to spot the right occasions. --:33, 26 August 2006 (UTC) Maybe this isn't the most subtle approach. Uncyclopedia has its place, as a repository for text which is amusing but a little too silly for the tastes of some Wikipedians. Pages like UnSource:List of acronyms and fr:Lapin rose are Wikipedia text which have been tagged GFDL and given a home after Wikipedia didn't want them in their original catégorie:article insolite forms. There are links there that say that the silly version has been moved here, and attribution back - fair enough as the stuff originated there, not here. They have their place for us (as a source of background info, as one needs to be familiar with a topic before being able to joke about it) and we have our place for them (as somewhere to move silliness that didn't quite look pedantic, serious or encyclopædia-like in tone). As much as I believe the two must retain some level of independence (so am in favour of creating a project for a Canadian Wikimedia Chapter but am at the same time opposed to the idea of persons outside Uncyclopedia controlling domains or other identifiers for this project). I don't see the two as natural enemies, just as different enough in origin and purpose that eah must follow and evolve along its own independent path. --Carlb 00:50, 26 August 2006 (UTC) - Notice that it's their users (some specific type of them) I'm saying we need to bring, not their rejected content (we don't want to be their dumping ground). Wikipedia and Uncyc have many things in common. Same skills needed to contribute through the same software and both atract intelligent highly educated people who like to write (both too atract other kind of people not always desired in both sites, but that's a different story). The difference is what kind of stuff we like to write. So when we find a wikipedian who wants to write the same stuff that we do, then we have the ideal uncyclopedian (if he writes good, of course). About the independece and the domain thing I'll anwer in your talkpage--:16, 26 August 2006 (UTC) - If you want to encourage more visitors (presumbably as a snub to the site that shall not be named) to the site, and want them to see the good articles, then surely would this be not a good time to start ordering printed Uncyclopedias? - Keitei tried to get Uncyclopedia in print going, but as far as I know, nobody cared. I throughly recommend Uncyclopedia Americana. -. 13:26, 29 August 2006 (UTC) The thing that really depresses me is that there is no EverythingGoesPedia. I made a proposal for a site called "Vandalpedia" which would supposedly be totally dedicated to wiki vandalism with absolutely no redeeming value whatsoever, so that vandalous "contributions" to all other wikis could be directed there instead, but the Wikia staff rejected it. --Nerd42Talk 18:13, 26 August 2006 (UTC) - If you want it, create it. Wikia isn't the only game in town... --66.102.74.165 16:20, 27 August 2006 (UTC) - The site that goes unnamed is roughly for that purpose, but they have nazi dictator admins that remove stuff. Their admins are just vandals with special deletion abilities... » Brig Sir Dawg | t | v | c » 21:21, 27 August 2006 (UTC) - Oh, but compared to that site that goes unnamed, a Vandalpedia would represent an improvement. Then again, a landfill site would represent an improvement compared to that mess. --Carlb 15:11, 28 August 2006 (UTC) - We should have an Uncyclopedia Dump where we put all of our deleted crap, like the "1000 Typing... Hamlet... thing" except make it a real wiki and devise some sort of way to move everything that has been deleted to there. The more I think about it the more I approve of it... And then, if an article like... got improved for some reason, it could be brought back:11, 30 August 2006 (UTC) - Would you really want to comb through that, on the off chance that a wiki for vandals of crap would generate an improved page? Don't we already have ED? <rimshot> Thank you I'm here all week...<exit stage right>.--Sir Modusoperandi Boinc! 02:24, 30 August 2006 (UTC) - That sounds like a good idea. We could just have a Vandal: namespace or something. --User:Nintendorulez 21:04, 4 November 2006 (UTC) Oh yeah sure I could install mediawiki on a USB stick and call it "vandalpedia" but nobody would care. --Nerd42Talk 20:24, 30 August 2006 (UTC) - ...or you could install poop on a stick and call it ED, but still nobody would care because it's been done... --Carlb 14:56, 31 August 2006 (UTC) The main thing to do in recruiting Wikipedians is to find ones who can write well and have a sense of humour. (Humor will do in a pinch.) A particularly scurvy trick is to tell them the wiki politics suck less here. I am particularly proud to have the same article in both wikis be a front-page feature in each - David Gerard 14:54, 4 September 2006 (UTC) - I'm guessing it wasn't Cancer porn. --KATIE!! 08:31, 5 September 2006 (UTC) - Perhaps it will be Kevin J. Anderson - David Gerard 10:15, 5 September 2006 (UTC) - I really suck then, I can't get a main page feature in either one...-- 03:27, 14 September 2006 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:How_Did_You_Get_to_Uncyclopedia%3F
CC-MAIN-2016-07
refinedweb
7,357
71.55
Opened 4 years ago Closed 4 years ago #26854 closed enhancement (fixed) Test for certificates in finite posets Change History (14) comment:1 Changed 4 years ago by - Cc chapoton added comment:2 follow-up: ↓ 3 Changed 4 years ago by sage: from sage.misc.sageinspect import sage_getargspec sage: sage_getargspec(LatticePoset().is_subdirectly_reducible).args ['self', 'certificate'] comment:3 in reply to: ↑ 2 Changed 4 years ago by Thanks, that was fast. So the code can be something like from sage.misc.sageinspect import sage_getargspec props = ['is_distributive', 'is_self_dual', 'is_modular'] L = posets.RandomLattice(10, 0.99) for p in props: f = attrcall(p) if 'certificate' in sage_getargspec(getattr(L, p)).args: if attrcall(p)(L) != attrcall(p, certificate=True)(L)[0]: print("Oh no!") comment:4 Changed 4 years ago by comment:5 Changed 4 years ago by - Branch set to u/jmantysalo/test_for_certificates_in_finite_posets comment:6 Changed 4 years ago by - Commit set to cfb20e029808fdb76371f5ad8b26ad7933c4667a Untested code for lattices. More to be done for general posets. New commits: comment:7 Changed 4 years ago by - Commit changed from cfb20e029808fdb76371f5ad8b26ad7933c4667a to 9c5fe515a52924b77a18b5ea6dc218f0af9a0037 Branch pushed to git repo; I updated commit sha1. New commits: comment:8 Changed 4 years ago by - Dependencies set to #26861, #26857, #26847 - Status changed from new to needs_review comment:9 Changed 4 years ago by This should now be ready to review, as all dependencies have been solved. comment:10 Changed 4 years ago by - Milestone changed from sage-8.5 to sage-8.6 - Reviewers set to Travis Scrimshaw - Status changed from needs_review to positive_review LGTM. comment:11 Changed 4 years ago by - Status changed from positive_review to needs_work I think pyflakes spotted an error: +src/sage/tests/finite_poset.py:16: dictionary key 'doubling_interval' repeated with different values +src/sage/tests/finite_poset.py:17: dictionary key 'doubling_interval' repeated with different values comment:12 Changed 4 years ago by - Status changed from needs_work to positive_review That is both not in these changes and not a reason to block a positive review. comment:13 Changed 4 years ago by sorry. comment:14 Changed 4 years ago by - Branch changed from u/jmantysalo/test_for_certificates_in_finite_posets to 9c5fe515a52924b77a18b5ea6dc218f0af9a0037 - Resolution set to fixed - Status changed from positive_review to closed Note: See TracTickets for help on using tickets. How to do this automatically and be py3-compatible? With I can see that there is a certificate-option, but "Python 3.5+ recommends inspect.signature()."
https://trac.sagemath.org/ticket/26854
CC-MAIN-2022-33
refinedweb
398
56.45
Generics Video Tutorial To better understand the advantages and disadvantages of generics, it is better you read the below 2 articles first. 1. Click here to read about Advantages and disadvantages of Arrays 2. Click here to read about Advantages and disadvantages of System.Collections In Microsoft.NET version 1 a. It is always good to use generics rather than using ArrayList,Hashtable etc, found in System.Collections namespace. The only reason why you may want to use System.Collections is for backward compatibility. I cannot think of any disadvantages of using generics at the moment. Please feel free to comment if you are aware of any disadvantages. The screen shot below shows, the generics collection classes and their respective non generic counterparts. Generics has some incompatibility with Nunits. Nice Blog:-) but when we declare generics as list=new list () we are using two types. What is the class? How to create the object of the class? & How to save that object in a file? Limitation of generics: In generic function we can't use relation operators. Hi Can anyone explain what is the use of IList recently i have seen people are using IList than List, I have searched on google but no good explanation Thank u My sincere appreciation on your work and thankful for your presentations on SqlServer. Venkat, All your videos are just Awesome Man :) Tonnes of THANKSSS to youuu
http://venkatcsharpinterview.blogspot.com/2011/05/advantages-and-disadvantages-of-using.html
CC-MAIN-2018-26
refinedweb
234
67.86
Is HashMap.KeySet() thread safe ? Hi Friends , In this code public class Test{ private final Map map= new HashMap(); public synchronized Set getAllData() { return map.keySet(); } } is keyset thread safe? I believe this is more prone to concurrent modification exception as the keyset will hold the snapshot of the key and values in the memory and in the mean while if data is changed underlying by other thread then we will get concurrent modification exception. here i have used synchronized on the method but which takes a lock on the Test Object . Though i have synchronized the method I believe right way here to synchronize the whole map hence I feel this code is a serious problem. Please correct me if i am wrong. The Set returned is not thread safe. Set does not produce a ConcurrentModificiationException but an Iterator can. Note: you can produce a CME with just one thread so its not just a thread safety issue. In this case you can only make the Set therad safe by; (map.keySet()); - taking a copy of it return new HashSet - or use a ConcurrentHashMap map= new ConcurrentHashMap (); getAllData() { private final Map public Set return map.keySet(); }
https://www.java.net/node/688811
CC-MAIN-2015-40
refinedweb
197
63.59
RUN .sql file from command line syntax error [duplicate] - postgresql I'm running this command from PostgreSQL 9.4 on Windows 8.1: psql -d dbname -f filenameincurrentdirectory.sql The sql file has, for example, these commands: INSERT INTO general_lookups ("name", "old_id") VALUES ('Open', 1); INSERT INTO general_lookups ("name", "old_id") VALUES ('Closed', 2);` When I run the psql command, I get this error message: psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "ÿ_I0811a2h1" LINE 1: ÿ_I0811a2h1 ru How do I import a file of SQL commands using psql? I have no problems utilizing pgAdmin in executing these sql files. If your issue is BOM, Byte Order Marker, another option is sed. Also kind of nice because if BOM is not your issue it is non-destructive to you data. Download and install sed for windows: The package called "Complete package, except sources" contains additional required libraries that the "Binaries" package doesn't. Once sed is installed run this command to remove the BOM from your file: sed -i '1 s/^\xef\xbb\xbf//' filenameincurrentdirectory.sql Particularly useful if you file is too large for Notepad++ Okay, the problem does have to do with BOM, byte order marker. The file was generated by Microsoft Access. I opened the file in Notepad and saved it as UTF-8 instead of Unicode since Windows saves UTF-16 by default. That got this error message: psql:filenameincurrentdirectory.sql:1: ERROR: syntax error at or near "INSERT" LINE 1: INSERT INTO general_lookups ("name", "old_id" ) VAL... I then learned from another website that Postgres doesn't utilize the BOM and that Notepad doesn't allow users to save without a BOM. So I had to download Notepad++, set the encoding to UTF-8 without BOM, save the file, and then import it. Voila! An alternative to using Notepad++ is this little python script I wrote. Simply pass in the file name to convert. import sys if len(sys.argv) == 2: with open(sys.argv[1], 'rb') as source_file: contents = source_file.read() with open(sys.argv[1], 'wb') as dest_file: dest_file.write(contents.decode('utf-16').encode('utf-8')) else: print "Please pass in a single file name to convert." Related How to pass string via STDIN into terminal command being executed within python script? I need to generate postgres schema from a dataframe. I found csvkit library to come closet to matching datatypes. I can run csvkit and generate postgres schema over a csv on my desktop via terminal through this command found in docs: csvsql -i postgresql myFile.csv csvkit docs - And I can run the terminal command in my script via this code: import os a=os.popen("csvsql -i postgresql Desktop/myFile.csv").read() However I have a dataframe, that I have converted to a csv string and need to generate schema from the string like so: csvstr = df.to_csv() In the docs it says that under positional arguments: The CSV file(s) to operate on. If omitted, will accept input on STDIN How do I pass my variable csvstr into the line of code a=os.popen("csvsql -i postgresql csvstr").read() as a variable? I tried to do the below line of code but got an error OSError: [Errno 7] Argument list too long: '/bin/sh': a=os.popen("csvsql -i postgresql {}".format(csvstr)).read() Thank you in advance You can't pass such a big string via commandline! You have to save the data to a file and pass its path to csvsql. import csv csvstr = df.to_csv() with open('my_cool_df.csv', 'w', newline='') as csvfile: csvwriter= csv.writer(csvfile) csvwriter.writerows(csvstr) And later: a=os.popen("csvsql -i postgresql my_cool_df.csv") How to gzip file with unicode encoding using linux cmd prompt? I have large tsv format file(30GB). I have to transform all those data to google bigquery. So I split the files into smaller chunks and gzip all those chunk files and moved to google cloud storage. After that I have calling google bigquery api to load data from GCS. But I have facing following encoding error. file_data.part_0022.gz: Error detected while parsing row starting at position: 0. Error: Bad character (ASCII 0) encountered. (error code: invalid) I am using following unix commands in my python code for splitting and gzip tasks. cmd = [ "split", "-l", "300000", "-d", "-a", "4", "%s%s" % (<my-dir>, file_name), "%s/%s.part_" % (<my temp dir>, file_prefix) ] code = subprocess.check_call(cmd) cmd = 'gzip %s%s/%s.part*' % (<my temp dir>,file_prefix,file_prefix) logging.info("Running shell command: %s" % cmd) code = subprocess.Popen(cmd, shell=True) code.communicate() Files are successfully splitted and gziped (file_data.part_0001.gz, file_data.part_0002.gz, etc..) but when I try to load these files to bigquery it throws above error. I understand that was encoding issue. Is there any way to encoding files while split and gzip operation? or we need to use python file object to read line by line and do unicode encoding and write it to new gzip file?(pythonic way) Reason: Error: Bad character (ASCII 0) encountered Clearly states you have a unicode (UTF-16) tab character there which cannot be decoded. BigQuery service only supports UTF-8 and latin1 text encodings. So, the file is supposed to be UTF-8 encoded. Solution: I haven't tested it. Use the -a or --ascii flag with gzip command. It'll be decoded ok by bigquery. How to import a text file with '|' delimited data to PostgreSQL database? [closed] I have a text file with | delimited data that I want to import to a table in PostgreSQL database. PgAdminIII only exports CSV files. I converted the file to a CSV file but still was unsuccessful importing data to PostgreSQL database. It says an error has occurred: Extradata after last expected column. CONTEXT: COPY <file1>, line1: What I am doing wrong here? Using the standard psql shell you can do this: \copy table_name from 'filename' delimiter '|' In the shell you can do \h copy to see more options and the complete syntax. Of course the manual about COPY is also worthwhile reading. How to import csv data into postgres table I tried to import csv file data into postgres table. Running the following line as pgscript in pgAdmin \copy users_page_rank FROM E'C:\\Users\\GamulinN\\Desktop\\users page rank.csv' DELIMITER ';' CSV it returned an error: [ERROR ] 1.0: syntax error, unexpected character Does anyone know what could be wrong here? I checked this post but couldn't figure out what's the problem. To import file into postgres with COPY you need one of the following: 1) Connect with psql to the DB and run your comand: \copy users_page_rank FROM E'C:\\Users\\GamulinN\\Desktop\\users page rank.csv' DELIMITER ';' CSV It will copy the file from current computer to the table. Details here. 2) Connect with any tool to the DB and run this SQL script: COPY users_page_rank FROM E'C:\\Users\\GamulinN\\Desktop\\users page rank.csv' DELIMITER ';' CSV It will copy the file from the server with postgres to the table. Details here. (With this command you can only COPY from files in postgresql data directory. So you will need to transfer files there first.) PyMySQL UnicodeEncodeError; python shell successes but cmd fails I'm new to pymysql module and trying to discover it, I have a simple code: import pymysql conn=pymysql.connect(host="127.0.0.1", port=8080,user="root", passwd="mysql", db="world", charset="utf8", use_unicode=True) cur=conn.cursor() cur.execute("SELECT * FROM world.city") for line in cur: print(line) cur.close() conn.close() I'm using Python Tools for Visual Studio. When i execute the code, it fails with this error: Traceback (most recent call last): File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensio ns\Microsoft\Python Tools for Visual Studio\1.5\visualstudio_py_debugger.py", li ne 1788, in write self.old_out.write(value) File "C:\Python32\lib\encodings\cp437.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_map)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 6-7: cha racter maps to <undefined> Failing line contains city name : ´s-Hertogenbosch I thought that maybe it's a related problem with cmd output so I've switched to python shell, and my script runs without any error. So what is the problem I'm facing? How can I solve it? I really want to use Python Tools for Visual Studio, so answers that enable me to use PTVS will be most welcomed. The problem probably is that the output encoding of the environment is set to cp437 and the unicode character cannot be converted to that encoding when doing print(line) that probably translates to the self.old_out.write(value). Try to replace the print() inside the loop by writing to the file, like: with open('myoutput.txt', 'w', encoding='utf-8') as f: for line in cur: f.write(line) Well, but the cursor does not return a string line. It return a row (I guess tuple) of elements. Because of that you probably have to do something like that: with open('myoutput.txt', 'w', encoding='utf-8') as f: for row in cur: f.write(repr(row)) This may be enough for a diagnostic purpose. If you need some nicer string, you have to format it in some specific way. Also, you wrote: charset="utf8", use_unicode=True) If the charset is used, then use_unicode=True can be left out (it is implied by using the charset. If I recall correctly, the charset='utf8' is not any recognized encoding for Python. You have to use charset='utf-8' -- i.e. with dash or underscore between utf and 8. Correction: The utf8 probably works as it is one of the aliases. UPDATE based on comments... As the output to a file seems to be OK, the problem is related to the capabilities of the window used for the output of the print command. As the cmd knows only cp437, you have to use or another window (like a Unicode capable window of some GUI), or you have to tell the cmd to use another encoding. See the experience of others. Basically, you have to tell the console: chcp 65001 to change accepted output encoding to UTF-8, or you can use another (non-Unicode) encoding that supports the wanted characters. Also, the console font should be capable to display the characters (i.e. to contain the glyphs, the images of the characters). My guess is the data you're receiving is not in unicode despite the fact that your python script is trying to encode it in Unicode. I would check for database and table spesific charset & collation settings. utf8 & utf8_general_ci are your friends.
https://jquery.developreference.com/article/10000710/RUN+.sql+file+from+command+line+syntax+error+%5Bduplicate%5D
CC-MAIN-2020-40
refinedweb
1,804
67.25
The mbrtoc32() function is defined in <cuchar> header file. size_t mbrtoc32( char32_t* pc32, const char* s, size_t n, mbstate_t* ps); The mbrtoc32() function converts at most n multibyte character represented by s to its equivalent utf-32 character and stores it in the memory location pointed to by pc32. If s represents a null pointer, the values of n and pc32 are ignored and the call to is equivalent to mbrtoc32(NULL, "", 1, ps). If the resulting character produced is null, the conversion state *ps represents the initial shift state. The mbrtoc32() function returns the first of the following value that matches the cases below: char32_tfrom a multi-char32_t character (e.g. a surrogate pair) has now been written to *pc32. No bytes are processed from the input in this case. #include <cstdio> #include <cstdlib> #include <cuchar> #include <iostream> using namespace std; int main(void) { char32_t pc32; char s[] = "x" ; mbstate_t ps; int length; length = mbrtoc32(&pc32, s, MB_CUR_MAX, &ps); if (length < 0) { perror("mbrtoc32() fails to convert"); exit(-1); } cout << "Multibyte string = " << s << endl; cout << "Length = " << length << endl; printf ("32-bit character = 0x%08hx\n", pc32); return 0; } When you run the program, the output will be: Multibyte string = x Length = 1 32-bit character = 0x00000078
https://cdn.programiz.com/cpp-programming/library-function/cuchar/mbrtoc32
CC-MAIN-2019-51
refinedweb
207
55.17
In 2003, Herb Sutter exposed the industry's biggest "dirty little secret" with his "The Free Lunch Is Over" article, demonstrating clearly that the era of ever-faster processors was at an end, to be replaced by a new era of parallelism via "cores" (virtual CPUs) on a single chip. The revelation sent shockwaves through the programming community because getting thread-safe code correct has always remained, in theory if not in practice, the province of high-powered software developers way too expensive for your company to hire. A privileged few, it seemed, understood enough about Java's threading model and concurrency APIs and "synchronized" keyword to write code that both provided safety and throughput ... and most of those had learned it the hard way. It is presumed that the rest of the industry was left to fend for itself, clearly not a desirable conclusion, at least not to the IT departments paying for that software being developed. Like Scala's sister language in the .NET space, F#, Scala stands as one of several purported solutions to the "concurrency problem." In this column, I have touched on several of Scala's properties that make it more amenable to writing thread-safe code such as immutable objects by default and a design preference for returning copies of objects rather than modifying their contents. Scala's support for concurrency reaches far deeper than just this though; it's high time to start poking around in the Scala libraries to see what lives there. Back-to-concurrency basics Before we can get too deep into Scala's concurrency support, it's a good idea to make sure that you have a good understanding of Java's basic concurrency model because Scala's support for concurrency builds, at some level, on top of the features and functionality provided by the JVM and supporting libraries. Toward that end, the code in Listing 1 contains a basic concurrency problem known as the Producer/Consumer problem (as described in the "Guarded Blocks" section of the Sun Java Tutorial). Note that the Java Tutorial version doesn't use the java.util.concurrent classes in its solution, preferring instead to use the old wait()/ notifyAll() methods from java.lang.Object: Listing 1. Producer/Consumer (pre-Java5) package com.tedneward.scalaexamples.notj5; class Producer implements Runnable { private Drop drop; private String importantInfo[] = { "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" }; public Producer(Drop drop) { this.drop = drop; } public void run() { for (int i = 0; i < importantInfo.length; i++) { drop.put(importantInfo[i]); } drop.put("DONE"); } } class Consumer implements Runnable { private Drop drop; public Consumer(Drop drop) { this.drop = drop; } public void run() { for (String message = drop.take(); !message.equals("DONE"); message = drop.take()) { System.out.format("MESSAGE RECEIVED: %s%n", message); } } } class Drop { //Message sent from producer to consumer. private String message; //True if consumer should wait for producer to send message, //false if producer should wait for consumer to retrieve message. private boolean empty = true; //Object to use to synchronize against so as to not "leak" the //"this" monitor private Object lock = new Object(); public String take() { synchronized(lock) { //Wait until message is available. while (empty) { try { lock.wait(); } catch (InterruptedException e) {} } //Toggle status. empty = true; //Notify producer that status has changed. lock.notifyAll(); return message; } } public void put(String message) { synchronized(lock) { //Wait until message has been retrieved. while (!empty) { try { lock.wait(); } catch (InterruptedException e) {} } //Toggle status. empty = false; //Store message. this.message = message; //Notify consumer that status has changed. lock.notifyAll(); } } } public class ProdConSample { public static void main(String[] args) { Drop drop = new Drop(); (new Thread(new Producer(drop))).start(); (new Thread(new Consumer(drop))).start(); } } Note: The code I present here is slightly modified from the Sun tutorial solution; there's a small design flaw in the code they present (see The Java Tutorial "bug"). The core of the Producer/Consumer problem is a simple one to understand: one (or more) producer entities want to provide data for one (or more) consumer entities to consume and do something with (in this case it consists of printing the data to the console). The Producer and Consumer classes are pretty straightforward Runnable-implementing classes: The Producer takes Strings from an array and puts them into a buffer for the Consumer to take as desired. The hard part of the problem is that if the Producer runs too fast, data will be potentially lost as it is overwritten; if the Consumer runs too fast, data will be potentially double-processed as the Consumer reads the same data twice. The buffer (called the Drop in the Java Tutorial code) must ensure that neither condition occurs. Not to mention that there is no potential for data corruption (hard in the case of String references, but still a concern) as messages are put in and taken out of the buffer. A full discussion of the subject is best left to Brian Goetz's Java Concurrency in Practice or Doug Lea's earlier Concurrent Programming in Java (see Resources), but a quick rundown of how this code works is necessary before you apply Scala to it. When the Java compiler sees the synchronized keyword, it generates a try/ finally block in place of the synchronized block with a monitorenter opcode at the top of the block and a monitorexit opcode in the finally block to ensure that the monitor (the Java basis for atomicity) is released regardless of how the code exits. Thus, the put code in Drop gets rewritten to look like Listing 2: Listing 2. Drop.put after compiler helpfulness // This is pseudocode public void put(String message) { try { monitorenter(lock) //Wait until message has been retrieved. while (!empty) { try { lock.wait(); } catch (InterruptedException e) {} } //Toggle status. empty = false; //Store message. this.message = message; //Notify consumer that status has changed. lock.notifyAll(); } finally { monitorexit(lock) } } The wait() method tells the current thread to go into an inactive state and wait for another thread to call notifyAll() on that object. The thread just notified must then attempt to acquire the monitor again after which point it is free to continue execution. In essence, wait() and notify()/ notifyAll() act as a simple signaling mechanism, allowing the Drop to coordinate between the Producer and the Consumer threads, one take to each put. The code download that accompanies this article uses the Java5 concurrency enhancements (the Lock and Condition interfaces and the ReentrantLock lock implementation) to provide timeout-based versions of Listing 2, but the basic code pattern remains the same. That is the problem: Developers who write code like in Listing 2 have to focus too exclusively on the details, the low-level implementation code, of the threading and locking required to make it all work correctly. What's more, developers have to reason about each and every line in the code, looking to see if it needs to be protected because too much synchronization is just as bad as too little. Now let's look at Scala alternatives. Good old Scala concurrency (v1) One way to start working with concurrency in Scala is to simply translate the Java code directly over to Scala, taking advantage of Scala's syntax in places to simplify the code, a least a little: Listing 3. ProdConSample (Scala) { var message : String = "" var empty : Boolean = true var lock : AnyRef = new Object() def put(x: String) : Unit = lock.synchronized { // Wait until message has been retrieved await (empty == true) // Toggle status empty = false // Store message message = x // Notify consumer that status has changed lock.notifyAll() } def take() : String = lock.synchronized { // Wait until message is available. await (empty == false) // Toggle status empty=true // Notify producer that staus has changed lock.notifyAll() // Return the message message } private def await(cond: => Boolean) = while (!cond) { lock.wait() } } def main(args : Array[String]) : Unit = { // Create Drop val drop = new Drop(); // Spawn Producer new Thread(new Producer(drop)).start(); // Spawn Consumer new Thread(new Consumer(drop)).start(); } } The Producer and Consumer classes are almost identical to their Java cousins, again extending (implementing) the Runnable interface and overriding the run() method, and — in Producer's case — using the built-in iteration method for each to walk the contents of the importantInfo array. (Actually, to make it more like Scala, importantInfo should probably be a List instead of an Array, but in this first pass, I want to keep things as close to the original Java code as possible.) The Drop class also looks similar to the Java version except that in Scala, "synchronized" isn't a keyword, it's a method defined on the class AnyRef, the Scala "root of all reference types." This means that to synchronize on a particular object, you simply call the synchronize method on that object; in this case, on the object held in the lock field on Drop. Note that we also make use of a Scala-ism in the Drop class in the definition of the await() method: The cond parameter is a block of code waiting to be evaluated rather than evaluated prior to being passed in to the method. Formally in Scala, this is known as "call-by-name"; here it serves as a useful way of capturing the conditional-waiting logic that had to be repeated twice (once in put, once in take) in the Java version. Finally, in main(), you create the Drop instance, instantiate two threads, kick them off with start(), and then simply fall off of the end of main(), trusting that the JVM will have started those two threads before you finish with main(). (In production code, this probably shouldn't be taken for granted, but for a simple example like this, it's going to be OK 99.99 percent of the time. Caveat emptor.) However, having said all that, the same basic problem remains: Programmers still have to worry way too much about the issues of signaling and coordinating the two threads. While some of the Scala-isms might make the syntax easier to live with, it's not really an incredibly compelling win so far. Scala concurrency, v2 A quick look at the Scala Library Reference reveals an interesting package: scala.concurrency. This package contains a number of different concurrency constructs, including the first one we're going to make use of, the MailBox class. As its name implies, MailBox is essentially the Drop by itself, a single-slot buffer that holds a piece of data until it has been retrieved. However, the big advantage of MailBox is that it completely encapsulates the details of the sending and receiving behind a combination of pattern-matching and case classes, making it more flexible than the simple Drop (or the Drop's big multi-slot data-holding brother, java.util.concurrent.BoundedBuffer). Listing 4. ProdConSample, v2 (Scala) package com.tedneward.scalaexamples.scala.V2 { import concurrent.{MailBox, ops} new Thread(new Producer(drop)).start(); // Spawn Consumer new Thread(new Consumer(drop)).start(); } } } The only difference here between v2 and v1 is in the implementation of Drop, which now makes use of the MailBox class to handle the blocking and signaling of messages coming in and being removed from the Drop. (We could have rewritten Producer and Consumer to use the MailBox directly, but for simplicity's sake, I assume that we want to keep the Drop API consistent across all the examples.) Using a MailBox is a bit different from the classic BoundedBuffer ( Drop) that we've been using, so let's walk through that code in detail. MailBox has two basic operations: send and receive. The receiveWithin method is simply a timeout-based version of receive. MailBox takes messages that can be of any type whatsoever. The send() method essentially drops the message into the mailbox, notifying any pending receivers immediately if it's of a type they care about, and appending it to a linked list of messages for later retrieval. The receive() method blocks until a message appropriate to the function block that's passed in to it is received. Therefore, in this situation, we create two case classes, one containing nothing ( Empty) that indicates the MailBox is empty and one containing the data ( Full) with the message data in it. - The putmethod, because it is putting data into the Drop, calls receive()on the MailBoxlooking for an Emptyinstance, thus blocking until Emptyhas been sent. At this point, it sends a Fullinstance to the MailBoxcontaining the new data. - The takemethod, because it is removing data from the Drop, calls receive()on the MailBoxlooking for a Fullinstance, extracts the message (again thanks to pattern-matching's ability to extract values from inside the case class and bind them to local variables), and sends an Emptyinstance to the MailBox. No explicit locking required, and no thinking about monitors. Scala concurrency, v3 In fact, we can shorten the code up considerably if it turns out that Producer and Consumer don't really have to be full-fledged classes at all (which is the case here) — both are essentially thin wrappers around the Runnable.run() method, which Scala can do away with entirely by using the scala.concurrent.ops object's spawn method, like in Listing 5: Listing 5. ProdConSample, v3 (Scala) package com.tedneward.scalaexamples.scala.V3 { import concurrent.MailBox import concurrent.ops._ object ProdConSample { spawn { val importantInfo : Array[String] = Array( "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" ); importantInfo.foreach((msg) => drop.put(msg)) drop.put("DONE") } // Spawn Consumer spawn { var message = drop.take() while (message != "DONE") { System.out.format("MESSAGE RECEIVED: %s%n", message) message = drop.take() } } } } } The spawn method (imported via the ops object just at the top of the package block) takes a block of code (another by-name parameter example) and wraps it inside the run() method of an anonymously-constructed thread object. In fact, it's not too difficult to understand what spawn's definition looks like inside of the ops class: Listing 6. scala.concurrent.ops.spawn() def spawn(p: => Unit) = { val t = new Thread() { override def run() = p } t.start() } ... which once again highlights the power of by-name parameters. One drawback to the ops.spawn method is the basic fact that it was written in 2003 before the Java 5 concurrency classes had taken effect. In particular, the java.util.concurrent.Executor and its ilk were created to make things easier for developers to spawn threads without having to actually handle the details of creating thread objects directly. Fortunately, spawn's definition is simple enough to recreate in a custom library of your own, making use of Executor (or ExecutorService or ScheduledExecutorService) to do the actual launching of the thread. In fact, Scala's support for concurrency goes well beyond the MailBox and ops classes; Scala also supports a similar concept called "Actors," which uses a similar kind of message-passing approach that the MailBox uses, but to a much greater degree and with much more flexibility. But that's for next time. Conclusion Scala provides two levels of support for concurrency, much as it does for other Java-related topics: - The first, full access to the underlying libraries (such as java.util.concurrent) and support for the "traditional" Java concurrency semantics (such as monitors and wait()/ notifyAll()). - The second, a layer of abstraction on top of those basic mechanics, as exemplified by the MailBoxclass discussed in this article and the Actors library that we'll discuss in the next article in the series. The goal is the same in both cases: to make it easier for developers to focus on the meat of the problem rather than having to think about the low-level details of concurrent programming (obviously the second approach achieves that better than the first, at least to those who aren't too deeply invested in thinking at the low-level primitives level). One clear deficiency to the current Scala libraries, however, is the obvious lack of Java 5 support; the scala.concurrent.ops class should have operations like spawn that make use of the new Executor interfaces. It should also support versions of synchronized that make use of the new Lock interfaces. Fortunately, these are all library improvements that can be done at any point during Scala's life cycle without breaking existing code; they can even be done by Scala developers themselves without having to wait for Scala's core development team to provide it to them (all it takes is a little time). Download Resources Learn - The busy Java developer's guide to Scala (Ted Neward, developerWorks): Read the complete series. - Java Concurrency in Practice (Brian Goetz, Addison-Wesley Professional, 2006) covers from the basics to advanced-level topics on concurrency. - Concurrent Programming in Java (Doug Lea, Prentice Hall PTR, 1999) surveys a wide field of research in parallelism and concurrency and shows how to do more with multithreading in Java programming with dozens of patterns and design tips. - .
https://www.ibm.com/developerworks/java/library/j-scala02049/
CC-MAIN-2015-35
refinedweb
2,829
50.57
Note: Since this was first posted, foreach has been deprecated in favor of for, but the syntax and behavior are the same. I have modified this example and the post to reflect this. I knew I couldn't fool you twice! Just as if/else can be a statement or an expression, for can be either as well. Compile and run the following example to experience this for yourself: import java.lang.System; class CompiledSequenceExample { attribute ordinals:String[] = ["zero", "one", "two", "three", "four", "five", "six"]; function printSomeOrdinals():Void { for (lmnt in ordinals where lmnt.length() > 3) { System.out.print("{lmnt} "); } System.out.println(); } function getSomeOrdinals():String[] { var elements = for (lmnt in ordinals where lmnt.length() > 3) lmnt; return elements; } } var example = CompiledSequenceExample { }; example.printSomeOrdinals(); System.out.println(example.getSomeOrdinals()); The foreach and for Keywords Have Been Simplified into Just for The first occurrence of for in this example is a statement, very similar to the way for is used in interpreted JavaFX Script. The second occurrence of for in this example is an expression, known as a sequence comprehension, which has a value that is assigned to the elements variable. This form of for has three parts: - One or more input sequences (the ordinalssequence in this example). - An optional filter ( where lmnt.length() > 3in this example). - An expression (lmntin this example) This value of a for sequence comprehension is a sequence consisting of zero or more elements. The type of sequence is determined by the type of the expression ( String in this example, as lmnt is implicitly typed as a String). Consider the following modification to the getSomeOrdinals() function in this example: function getSomeOrdinals():Integer[] { var elements = for (lmnt in ordinals where lmnt.length() > 3) lmnt.length(); return elements; } In this case the value of the for expression is a sequence of type Integer, whose elements consist of the length of each of the strings in the ordinals sequence that are longer than 3 characters. The final two lines of the program call the functions that contain the for keywords, each printing the element values that meet the criteria of having a length greater than 3. Here is the console output of the original program: zero three four five [ zero, three, four, five ] And here is the console output of the program with the modification shown above: zero three four five [ 4, 5, 4, 4 ] Note: Since this last posted, Patrick Wright posed the following question: "Are the return in getSomeOrdinals() and the local variable necessary? I'm wondering if in this case the last statement in the block would be returned automatically, and one could just have the function block include the for." That is a very insightful question, and the answer is that they are not necessary. The getSomeOrdinals() function could be written as follows: function getSomeOrdinals():Integer[] { for (lmnt in ordinals where lmnt.length() > 3) lmnt.length(); } This works because the value of a block expression is the value of its last expression. See the Express Yourself in Compiled JavaFX Script post for more details on block expressions. If you have any additional questions on this example, please post a comment, as they prove very helpful in prompting me to provide additional info. Regards, Jim Weaver JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications Immediate eBook (PDF) download available at the book's Apress site Patrick, Thanks for your comment/question. You are correct. The last statement in the block would be returned automatically, and one could just have the function block include the foreach. I added a code snippet in this post to illustrate your point. Thanks, Jim Weaver Posted by: Jim Weaver | December 06, 2007 at 11:23 AM Thanks for the answer, Jim. It's clearer now on a re-read. One followup--are the return in getSomeOrdinals() and the local variable necessary? I'm wondering if in this case the last statement in the block would be returned automatically, and one could just have the function block include the foreach. Thanks Patrick Posted by: Patrick Wright | December 05, 2007 at 03:43 PM Thanks for your feedback, Patrick. I have expounded on the foreach expression, so please let me know if further clarification would be helpful. -Jim Posted by: Jim Weaver | December 04, 2007 at 09:37 AM Hi Jim Some clarification of the expression syntax in this case would be helpful. The loop is clear, but it's not clear what the effect is of having "element" as the sole expression within the loop, and how this ends up appending to the sequence elements. Will foreach always generate a sequence based on the last value in the foreach block? Or can I also create a single value--for example, could I use this foreach expression syntax to create a single string containing all the elements, comma-separated? Interesting comparison, thanks. Patrick Posted by: Patrick Wright | December 04, 2007 at 01:55 AM
http://learnjavafx.typepad.com/weblog/2007/12/compiled-javafx.html
CC-MAIN-2016-40
refinedweb
823
53.41
- What Are Threads? - Interrupting Threads - Thread Properties - Thread Priorities - Selfish Threads - Synchronization - Deadlocks - User Interface Programming with Threads - Using Pipes for Communication between Threads You are probably familiar with multitasking: the ability to have more than one program working at what seems like the same time. For example, you can print while editing or sending a fax. Of course, unless you have a multiple-processor machine, what is really going on is that the operating system is doling out resources to each program, giving the impression of parallel activity. This resource distribution is possible because while you may think you are keeping the computer busy by, for example, entering data, most of the CPU's time will be idle. (A fast typist takes around 1/20 of a second per character typed, after all, which is a huge time interval for a computer.) Multitasking can be done in two ways, depending on whether the operating system interrupts programs without consulting with them first, or whether pro-grams are only interrupted when they are willing to yield control. The former is called preemptive multitasking; the latter is called cooperative (or, simply, nonpreemptive) multitasking. Windows 3.1 and Mac OS 9 are cooperative multitasking systems, and UNIX/Linux, Windows NT (and Windows 95 for 32-bit programs), and OS X are preemptive. (Although harder to implement, preemptive multitasking is much more effective. With cooperative multitasking, a badly behaved program can hog everything.) Multithreaded programs extend the idea of multitasking by taking it one level lower: individual programs will appear to do multiple tasks at the same time. Each task is usually called a threadwhich is short for thread of control. Programs that can run more than one thread at once are said to be multithreaded. Think of each thread as running in a separate context: contexts make it seem as though each thread has its own CPUwith registers, memory, and its own code. So, what is the difference between multiple processes and multiple threads? The essential difference is that while each process has a complete set of its own variables, threads share the same data. This sounds somewhat risky, and indeed it can be, as you will see later in this chapter. But it takes much less overhead to create and destroy individual threads than it does to launch new processes, which is why all modern operating systems support multithreading. Moreover, inter-process communication is much slower and more restrictive than communication between threads. Multithreading is extremely useful in practice. For example, a browser should be able to simultaneously download multiple images. An email program should let you read your email while it is downloading new messages. The Java programming language itself uses a thread to do garbage collection in the backgroundthus saving you the trouble of managing memory! Graphical user interface (GUI) programs have a separate thread for gathering user interface events from the host operating environment. This chapter shows you how to add multithreading capability to your Java applications and applets. Fair warning: multithreading can get very complex. In this chapter, we present all of the tools that the Java programming language provides for thread programming. We explain their use and limitations and give some simple but typical examples. However, for more intricate situations, we suggest that you turn to a more advanced reference, such as Concurrent Programming in Java by Doug Lea [Addison-Wesley 1999]. NOTE In many programming languages, you have to use an external thread package to do multithreaded programming. The Java programming language builds in multithreading, which makes your job much easier. What Are Threads? Let us start by looking at a program that does not use multiple threads and that, as a consequence, makes it difficult for the user to perform several tasks with that program. After we dissect it, we will then show you how easy it is to have this program run separate threads. This program animates a bouncing ball by continually moving the ball, finding out if it bounces against a wall, and then redrawing it. (See Figure 11.) As soon as you click on the "Start" button, the program launches a ball from the upper-left corner of the screen and the ball begins bouncing. The handler of the "Start" button calls the addBall method: public void addBall() { try { Ball b = new Ball(canvas); canvas.add(b); for (int i = 1; i <= 1000; i++) { b.move(); Thread.sleep(5); } } catch (InterruptedException exception) { } } That method contains a loop running through 1,000 moves. Each call to move moves the ball by a small amount, adjusts the direction if it bounces against a wall, and then redraws the canvas. The static sleep method of the Thread class pauses for 5 milliseconds. Figure 11: Using a thread to animate a bouncing ball The call to Thread.sleep does not create a new threadsleep is a static method of the Thread class that temporarily stops the activity of the current thread. The sleep method can throw an InterruptedException. We will discuss this exception and its proper handling later. For now, we simply terminate the bouncing if this exception occurs. If you run the program, the ball bounces around nicely, but it completely takes over the application. If you become tired of the bouncing ball before it has finished its 1,000 bounces and click on the "Close" button, the ball continues bouncing anyway. You cannot interact with the program until the ball has finished bouncing. NOTE If you carefully look over the code at the end of this section, you will notice the call canvas.paint(canvas.getGraphics()) inside the move method of the Ball class. That is pretty strangenormally, you'd call repaint and let the AWT worry about getting the graphics context and doing the painting. But if you try to call canvas.repaint() in this program, you'll find out that the canvas is never repainted since the addBall method has completely taken over all processing. In the next program, where we use a separate thread to compute the ball position, we'll again use the familiar repaint. Obviously, the behavior of this program is rather poor. You would not want the programs that you use behaving in this way when you ask them to do a time-consuming job. After all, when you are reading data over a network connection, it is all too common to be stuck in a task that you would really like to interrupt. For example, suppose you download a large image and decide, after seeing a piece of it, that you do not need or want to see the rest; you certainly would like to be able to click on a "Stop" or "Back" button to interrupt the loading process. In the next section, we will show you how to keep the user in control by running crucial parts of the code in a separate thread. Example 11 is the entire code for the program. Example 11: Bounce.java 1. import java.awt.*; 2. import java.awt.event.*; 3. import java.awt.geom.*; 4. import java.util.*; 5. import javax.swing.*; 6. 7. /** 8. Shows an animated bouncing ball. 9. */ 10. public class Bounce("Bounce"); makes 74. it bounce 1,000 times. 75. */ 76. public void addBall() 77. { 78. try 79. { 80. Ball b = new Ball(canvas); 81. canvas.add(b); 82. 83. for (int i = 1; i <= 1000; i++) 84. { 85. b.move(); 86. Thread.sleep(5); 87. } 88. } 89. catch (InterruptedException exception) 90. { 91. } 92. } 93. 94. private BallCanvas canvas; 95. public static final int WIDTH = 450; 96. public static final int HEIGHT = 350; 97. } 98. 99. /** 100. The canvas that draws the balls. 101. */ 102. class BallCanvas extends JPanel 103. { 104. /** 105. Add a ball to the canvas. 106. @param b the ball to add 107. */ 108. public void add(Ball b) 109. { 110. balls.add(b); 111. } 112. 113. public void paintComponent(Graphics g) 114. { 115. super.paintComponent(g); 116. Graphics2D g2 = (Graphics2D)g; 117. for (int i = 0; i < balls.size(); i++) 118. { 119. Ball b = (Ball)balls.get(i); 120. b.draw(g2); 121. } 122. } 123. 124. private ArrayList balls = new ArrayList(); 125. } 126. 127. /** 128. A ball that moves and bounces off the edges of a 129. component 130. */ 131. class Ball 132. { 133. /** 134. Constructs a ball in the upper left corner 135. @c the component in which the ball bounces 136. */ 137. public Ball(Component c) { canvas = c; } 138. 139. /** 140. Draws the ball at its current position 141. @param g2 the graphics context 142. */ 143. public void draw(Graphics2D g2) 144. { 145. g2.fill(new Ellipse2D.Double(x, y, XSIZE, YSIZE)); 146. } 147. 148. /** 149. Moves the ball to the next position, reversing direction 150. if it hits one of the edges 151. */ 152. public void move() 153. { 154. x += dx; 155. y += dy; 156. if (x < 0) 157. { 158. x = 0; 159. dx = -dx; 160. } 161. if (x + XSIZE >= canvas.getWidth()) 162. { 163. x = canvas.getWidth() - XSIZE; 164. dx = -dx; 165. } 166. if (y < 0) 167. { 168. y = 0; 169. dy = -dy; 170. } 171. if (y + YSIZE >= canvas.getHeight()) 172. { 173. y = canvas.getHeight() - YSIZE; 174. dy = -dy; 175. } 176. 177. canvas.paint(canvas.getGraphics()); 178. } 179. 180. private Component canvas; 181. private static final int XSIZE = 15; 182. private static final int YSIZE = 15; 183. private int x = 0; 184. private int y = 0; 185. private int dx = 2; 186. private int dy = 2; 187. } In the previous sections, you learned what is required to split a program into multiple concurrent tasks. Each task needs to be placed into a run method of a class that extends Thread. But what if we want to add the run method to a class that already extends another class? This occurs most often when we want to add multithreading to an applet. An applet class already inherits from JApplet, and we cannot inherit from two parent classes, so we need to use an interface. The necessary interface is built into the Java platform. It is called Runnable. We take up this important interface next. Using Threads to Give Other Tasks a Chance We will make our bouncing-ball program more responsive by running the code that moves the ball in a separate thread. NOTE Since most computers do not have multiple processors, the Java virtual machine (JVM) uses a mechanism in which each thread gets a chance to run for a little while, then activates another thread. The virtual machine generally relies on the host operating system to provide the thread scheduling package. Our next program uses two threads: one for the bouncing ball and another for the event dispatch thread that takes care of user interface events. Because each thread gets a chance to run, the main thread has the opportunity to notice when you click on the "Close" button while the ball is bouncing. It can then process the "close" action. There is a simple procedure for running code in a separate thread: place the code into the run method of a class derived from Thread. To make our bouncing-ball program into a separate thread, we need only derive a class BallThread from Thread and place the code for the animation inside the run method, as in the following code: class BallThread extends Thread { . . . public void run() { try { for (int i = 1; i <= 1000; i++) { b.move(); sleep(5); } } catch (InterruptedException exception) { } } . . . } You may have noticed that we are catching an exception called Interrupted-Exception. Methods such as sleep and wait throw this exception when your thread is interrupted because another thread has called the interrupt method. Interrupting a thread is a very drastic way of getting the thread's attention, even when it is not active. Typically, a thread is interrupted to terminate it. Accordingly, our run method exits when an InterruptedException occurs. Running and Starting Threads When you construct an object derived from Thread, the run method is not automatically called. BallThread thread = new BallThread(. . .); // won't run yet You must call the start method in your object to actually start a thread. thread.start(); CAUTION Do not call the run method directlystart will call it when the thread is set up and ready to go. Calling the run method directly merely executes its contents in the same threadno new thread is started. Beginners are sometimes misled into believing that every method of a Thread object automatically runs in a new thread. As you have seen, that is not true. The methods of any object (whether a Thread object or not) run in whatever thread they are called. A new thread is only started by the start method. That new thread then executes the run method. In the Java programming language, a thread needs to tell the other threads when it is idle, so the other threads can grab the chance to execute the code in their run procedures. (See Figure 12.) The usual way to do this is through the static sleep method. The run method of the BallThread class uses the call to sleep(5) to indicate that the thread will be idle for the next five milliseconds. After five milliseconds, it will start up again, but in the meantime, other threads have a chance to get work done. TIP There are a number of static methods in the Thread class. They all operate on the current thread, that is, the thread that executes the method. For example, the static sleep method idles the thread that is calling sleep. Figure 12: The Event Dispatch and Ball Threads The complete code is shown in Example 12. Example 12: BounceThread.java 1. import java.awt.*; 2. import java.awt.event.*; 3. import java.awt.geom.*; 4. import java.util.*; 5. import javax.swing.*; 6. 7. /** 8. Shows an animated bouncing ball running in a separate thread 9. */ 10. public class BounceThread("BounceThread"); starts a thread 74. to make it bounce 75. */ 76. public void addBall() 77. { 78. Ball b = new Ball(canvas); 79. canvas.add(b); 80. BallThread thread = new BallThread(b); 81. thread.start(); 82. } 83. 84. private BallCanvas canvas; 85. public static final int WIDTH = 450; 86. public static final int HEIGHT = 350; 87. } 88. 89. /** 90. A thread that animates a bouncing ball. 91. */ 92. class BallThread extends Thread 93. { 94. /** 95. Constructs the thread. 96. @aBall the ball to bounce 97. */ 98. public BallThread(Ball aBall) { b = aBall; } 99. 100. public void run() 101. { 102. try 103. { 104. for (int i = 1; i <= 1000; i++) 105. { 106. b.move(); 107. sleep(5); 108. } 109. } 110. catch (InterruptedException exception) 111. { 112. } 113. } 114. 115. private Ball b; 116. } 117. 118. /** 119. The canvas that draws the balls. 120. */ 121. class BallCanvas extends JPanel 122. { 123. /** 124. Add a ball to the canvas. 125. @param b the ball to add 126. */ 127. public void add(Ball b) 128. { 129. balls.add(b); 130. } 131. 132. public void paintComponent(Graphics g) 133. { 134. super.paintComponent(g); 135. Graphics2D g2 = (Graphics2D)g; 136. for (int i = 0; i < balls.size(); i++) 137. { 138. Ball b = (Ball)balls.get(i); 139. b.draw(g2); 140. } 141. } 142. 143. private ArrayList balls = new ArrayList(); 144. } 145. 146. /** 147. A ball that moves and bounces off the edges of a 148. component 149. */ 150. class Ball 151. { 152. /** 153. Constructs a ball in the upper left corner 154. @c the component in which the ball bounces 155. */ 156. public Ball(Component c) { canvas = c; } 157. 158. /** 159. Draws the ball at its current position 160. @param g2 the graphics context 161. */ 162. public void draw(Graphics2D g2) 163. { 164. g2.fill(new Ellipse2D.Double(x, y, XSIZE, YSIZE)); 165. } 166. 167. /** 168. Moves the ball to the next position, reversing direction 169. if it hits one of the edges 170. */ 171. public void move() 172. { 173. x += dx; 174. y += dy; 175. if (x < 0) 176. { 177. x = 0; 178. dx = -dx; 179. } 180. if (x + XSIZE >= canvas.getWidth()) 181. { 182. x = canvas.getWidth() - XSIZE; 183. dx = -dx; 184. } 185. if (y < 0) 186. { 187. y = 0; 188. dy = -dy; 189. } 190. if (y + YSIZE >= canvas.getHeight()) 191. { 192. y = canvas.getHeight() - YSIZE; 193. dy = -dy; 194. } 195. 196. canvas.repaint(); 197. } 198. 199. private Component canvas; 200. private static final int XSIZE = 15; 201. private static final int YSIZE = 15; 202. private int x = 0; 203. private int y = 0; 204. private int dx = 2; 205. private int dy = 2; 206. } 207. Running Multiple Threads Run the program in the preceding section. Now, click on the "Start" button again while a ball is running. Click on it a few more times. You will see a whole bunch of balls bouncing away, as captured in Figure 13. Each ball will move 1,000 times until it comes to its final resting place. Figure 13: Multiple threads This example demonstrates a great advantage of the thread architecture in the Java programming language. It is very easy to create any number of autonomous objects that appear to run in parallel. Occasionally, you may want to enumerate the currently running threadssee the API note in the "Thread Groups" section for details. The Runnable Interface We could have saved ourselves a class by having the Ball class extend the Thread class. As an added advantage of that approach, the run method has access to the private fields of the Ball class: class Ball extends Thread { public void run() { try { for (int i = 1; i <= 1000; i++) { x += dx; y += dy; . . . canvas.repaint(); sleep(5); } } catch (InterruptedException exception) { } } . . . private Component canvas; private int x = 0; private int y = 0; private int dx = 2; private int dy = 2; } Conceptually, of course, this is dubious. A ball isn't a thread, so inheritance isn't really appropriate. Nevertheless, programmers sometimes follow this approach when the run method of a thread needs to access private fields of another class. In the preceding section, we've avoided that issue altogether by having the run method call only public methods of the Ball class, but it isn't always so easy to do that. Suppose the run method needs access to private fields, but the class into which you want to put the run method already has another superclass. Then it can't extend the Thread class, but you can make the class implement the Runnable interface. As though you had derived from Thread, put the code that needs to run in the run method. For example, class Animation extends JApplet implements Runnable { . . . public void run() { // thread action goes here } } You still need to make a thread object to launch the thread. Give that thread a reference to the Runnable object in its constructor. The thread then calls the run method of that object. class Animation extends JApplet implements Runnable { . . . public void start() { runner = new Thread(this); runner.start(); } . . . private Thread runner; } In this case, the this argument to the Thread constructor specifies that the object whose run method should be called when the thread executes is an instance of the Animation object. Some people even claim that you should always follow this approach and never subclass the Thread class. That advice made sense for Java 1.0, before inner classes were invented, but it is now outdated. If the run method of a thread needs private access to another class, you can often use an inner class, like this: class Animation extends JApplet { . . . public void start() { runner = new Thread() { public void run() { // thread action goes here } }; runner.start(); } . . . private Thread runner; } A plausible use for the Runnable interface would be a thread pool in which pre-spawned threads are kept around for running. Thread pools are sometimes used in environments that execute huge numbers of threads, to reduce the cost of creating and garbage collecting thread objects.
http://www.informit.com/articles/article.aspx?p=26326&amp;seqNum=6
CC-MAIN-2017-34
refinedweb
3,313
75.71
“Learn the rules like a pro, so you can break them like an artist.” Pablo Picasso “Learn the rules, and then forget them.” Haiku Master Matsuo Basho Alright, that’s enough of the “preface” material, let’s get on with the book! As I wrote earlier, I want to spare you the route I took of, “You Have to Learn Haskell to Learn Scala/FP,” but, I need to say that I did learn a valuable lesson by taking that route: It’s extremely helpful to completely forget about several pieces of the Scala programming language as you learn FP in Scala. Assuming that you come from an “imperative” and OOP background as I did, your attempts to learn Scala/FP will be hindered because it is possible to write both imperative code and FP code in Scala. Because you can write in both styles, what happens is that when things in FP start to get more difficult, it’s easy for an OOP developer to turn back to what they already know, rather than to try to navigate the “FP Learning Cliff.” To learn Scala/FP the best thing you can do is forget that the imperative options even exist. I promise you, Scout’s Honor, this will accelerate your Scala/FP learning process. Therefore, to help accelerate your understanding of how to write FP code in Scala, this book uses only the following subset of the Scala programming language. The rules To accelerate your Scala/FP learning process, this book uses the following programming “rules”: - There will be no nullvalues in this book. We’ll intentionally forget that there is even a nullkeyword in Scala. - Only pure functions will be used in this book. I’ll define pure functions more thoroughly soon, but simply stated, (a) a pure function must always return the same output given the same input, and (b) calling the function must not have any side effects, including reading input, writing output, or modifying any sort of hidden state. - This book will only use immutable values ( val) for all fields. There are no varfields in pure FP code, so I won’t use any variables ( var) in this book, unless I’m trying to explain a point. - Whenever you use an if, you must always also use an else. Functional programming uses only expressions, not statements. - We won’t create “classes” that encapsulate data and behavior. Instead we’ll create data structures and write pure functions that operate on those data structures. The rules are for your benefit (really) These rules are inspired by what I learned from working with Haskell. In Haskell the only way you can possibly write code is by writing pure functions and using immutable values, and when those really are your only choices, your brain quits fighting the system. Instead of going back to things you’re already comfortable with, you think, “Hmm, somehow other people have solved this problem using only immutable values. How can I solve this problem using pure FP?” When your thinking gets to that point, your understanding of FP will rapidly progress. If you’re new to FP those rules may feel limiting — and you may be wondering how you can possibly get anything done — but if you follow these rules you’ll find that they lead you to a different way of thinking about programming problems. Because of these rules your mind will naturally gravitate towards FP solutions to problems. For instance, because you can’t use a var field to initialize a mutable variable before a for loop, your mind will naturally think, “Hmm, what can I do here? Ah, yes, I can use recursion, or maybe a built-in collections method to solve this problem.” By contrast, if you let yourself reach for that var field, you’ll never come to this other way of thinking. Not a rule, but a note: using ??? While I’m writing about what aspects of the Scala language I won’t use in this book, it’s also worth noting that I will often use the Scala ??? syntax when I first sketch a function’s signature. For example, when I first start writing a function named createWorldPeace, I’ll start to sketch the signature like this: def createWorldPeace = ??? I mention this because if you haven’t seen this syntax before you may wonder why I’m using it. The reason I use it is because it’s perfectly legal Scala code; that line of code will compile just fine. Go ahead and paste that code into the REPL and you’ll see that it compiles just like this: scala> def createWorldPeace = ??? createWorldPeace: Nothing However, while that code does compile, you’ll see a long error message that begins like this if you try to call the createWorldPeace function: scala.NotImplementedError: an implementation is missing I wrote about the ??? syntax in a blog post titled, What does ‘???’ mean in Scala?, but in short, Martin Odersky, creator of the Scala language, added it to Scala for teaching cases just like this. The ??? syntax just means, “The definition of this function is TBD.” Summary In summary, the rules we’ll follow in this book are: - There will be no nullvalues. - Only pure functions will be used. - Immutable values will be used for all fields. - Whenever you use an if, you must always also use an else. - We won’t create “classes” that encapsulate data and behavior. What’s next Given these rules, let’s jump into a formal definition of “functional programming.”
https://alvinalexander.com/scala/fp-book/programming-rules-learning-fp-in-scala/
CC-MAIN-2021-49
refinedweb
923
69.62
Richard Stallman wrote: Does anyone know what is the "correct" thing to do in this case according to IEEE? Would returning NaN be correct? The functiona ffloor, fceiling, ftruncate and fround all return (I believe correctly) NaN when passed NaN. They also return +INF or -INF when passed those values. The trouble with floor, ceiling, truncate and round seems to be that these functions are supposed to return an integer, and NaN and the infinities are floats, not integers. So the problem is not what the rounded value of NaN should be, that would clearly seem to be NaN, but whether it makes sense to cast NaN to an integer. I do not know what the IEEE floating point standard says about casting these values to integers. It is not guaranteed to say anything about it, since it is a floating point standard. The rounding functions in the GNU C library that return a floating point number all behave like ffloor et al. The rounding functions that return integers (lrint and lround) seem to return nonsense values when fed NAN or infinities. So does trying to cast NAN or an infinity to int. floor, ceiling, truncate and round all produce an error when passed infinity, I guess because the logical rounded value returned by ffloor et al is not an integer. Unless the IEEE standard says something else, it would seem consistent to give an error message when NaN is passed since the logical rounded value (NaN) returned by ffloor et al is not an integer either. It seems like somebody who expects NaN to be rounded to Nan would call ffloor or similar, because he would not be insisting on an integer result.
https://lists.gnu.org/archive/html/emacs-devel/2002-02/msg00360.html
CC-MAIN-2015-11
refinedweb
284
67.79
1 The Building of Caveman - part 2. What to build, and tool selection Posted by Norman Barrows, 29 March 2014 · 1,149 views What to build, and how? What to build? thats the first of a million questions to be answered when building a game. Well, the answer turns out to be pretty simple: you build what you want to play, that somebody else hasn't already built for you. odds are, if you think its cool enough to play that you'd go through the effort of building it just to play it, then maybe other people might like to play it as well. in my case, it was pretty easy. i'd already made over a dozen games over the years. some hits, some ok, some that didn't sell much at all. so you go with what works. the line of thought was: "well, my biggest hit was probably Caveman, so i probbaly ought to do that first." So that settled what to build: you lead with your strong suit, your flagship product. Now, how to build it: Platform: PC. bigger installed base than mac and linux. insufficient manpower (i'm an army of one) to support multiple platforms, so PC is it. moble etc isnt even an option ue to the size of the game - at least at first - pc alone is plenty of a challenge. Language(s) and libraries: WHATEVER: 1. HAS THE CAPABILITIES REQUIRED 2. RUNS FAST ENOUGH and 3. WILL REQUIRE THE LEAST LEARNING AND DEVELOPMENT TIME in my case, that was procedural ADT C++ code (look it up!), and DX9 fixed function. i'd been coding C/C++ games since before OO syntax, and didn't really need the OO language extenstions, thus the procedural ADTs. And Caveman has a paleolithic setting with no magic or fantasy elements, so not much in the way of special effects, smoke, flames, clouds, thats about it. so all i needed was aniso-mipmapped textured meshes, with a little alpha test and alpha blend. I do use a 2 stage textre blend for snow on the ground but thats it. i was already familair with dx8, and hadn't learned shaders yet - i hadn't needed them. still haven't really learned them or needed them yet in fact. truth is, i'm a little scared to get into shaders for fear i'll like it too much. the mere concept of such power to "party on the bitmap" is... intoxicating <g>. At this point i should mention that Caveman was developed on a budget of $0. one baseline $400 pc (no graphics card), and internet access, thats all you get to work with. no money for tools, content, middleware, etc. Compiler: ms visual studio of course. i started with basic, then pascal, then watcom C, all in the quest for speed. sometime in the mid 90's microsoft got a clue and added the compiler optimizations required for games to the free version of MC C++. And directx tends to work better with ms c++ than with some other compilers. 3d modeler: truespace rosetta 7.61 beta 1. its free 2. used to be the next best thing after 3dsmax 3. full directx .x save capabilities built in. no conversion / import / export headaches. 2d paint programs: paint.net and "free clone stamp tool". both are free. paint.net has the basic capabilities of photoshop. clone stamp takes up the slack. a really awesome little texture painting program. Audio tools: TBD (to be determined) i have xaudio 2 up and runnning but so far all it does is play "time to relax", track 1 from the album "smash" (as i recall) by the offspring as a test. it was the first wav file i came across on my hard drive. the music for the game is pretty simple, tom toms and flute for atmosphere, funky beats reampped to jungle drum kits for combat. and i have the general 6000 series 50 cd sound effects library from Sound Ideas for foley sfx. but i gues i'll have to build another bow. the sound of a primitive bow firing was the only effect i had to make myself for the original version. to do it, i made a 7 foot long bow, with fishing line for the bowstring, and relatively straight wood sticks for arrows. having paid $1000 for the sound effects library, i was bound and determined to get my money's worth out of it. so i used the box as the mike stand while i recorded the sound of me firing the bow down the hallway in the house! <g>. up next: step one, getting a handle on directx graphics. part 3: part 1: additional note: the main key to tool selection is interoperability. ie will they all work together? do they all support the same file formats? caveman uses .x files for meshes, .bmp files for textures, and wav files are planned for sound. original meshes are stored in .RsScn truespace 7.6 scene files. obviously microsoft visual studio and directx work together. directx and and truespace both support .x files. and paint.net, directx and free clone stamp all support bmps. likewise, xaudio2 and whatever audio tool i find will both support wav files.
http://www.gamedev.net/blog/1730/entry-2259488-the-building-of-caveman-part-2-what-to-build-and-tool-selection/
CC-MAIN-2016-50
refinedweb
882
84.37
Multit. The architect must be aware of security, access control, etc. Multitenancy can exist in several different flavors: Multitenancy in Deployment - Fully isolated business logic (dedicated server customized business process) - Virtualized Application Servers (dedicated application server, single VM per app server) - Shared virtual servers (dedicated application server on shared VM) - Shared application servers (threads and sessions) This spectrum of different installations can be seen here: Multitenancy and Data - Dedicated physical server (DB resides in isolated physical hosts) - Shard virtualized host (separate DBs on virtual machines) - Database on shared host (separate DB on same physical host) - Dedicated schema within shared databases (same DB, dedicated schema/table) - Shared tables (same DB and schema, segregated by keys – rows) Before jumping into the APIs, it is important to understand how Google’s internal data storage solution work. Introducing Google’s BigTable technology: It is a storage solution for Google’s own applications such as Search, Google Analytics, gMail, AppEngine, etc BigTable is NOT: - A database - A horizontally sharded data - A distributed hash table It IS: a sparse, distributed, persistent multidimensional sorted map. In basic terms, it is a hash of hashes (map of maps, or a dict of dicts). AppEngine data is in one “table” distributed across multiple computers. Every entity has a Key by which it is uniquely identified (Parent + Child + ID), but there is also metadata that tells which GAE application (appId) an Entity belongs to. From the graph above, BigTable distributes its data in a format called tablets, which are basically slices of the data. These tablets live on different servers in the cloud. To index into a specific record (record and entity mean pretty much the same thing) you use a 64KB string, called a Key. This key has information about the specific row and column value you want to read from. It also contains a timestamp to allow for multiple versions of your data to be stored. In addition, records for a specific entity group are located contiguously. This facilitates scanning for records. Now we can dive into how Google implements Multitenancy. Implemented in release 1.3.6 of App Engine, the Namespace API (see resources) is designed to be very customizable, with hooks into your code that you can control, so you can set up multi-tenancy tailored to your application’s needs. The API works with all of the relevant App Engine APIs (Datastore, Memcache, Blobstore, and Task Queues). In GAE terms, namespace == tenant At the storage level of datastore, a namespace is just like an app-id. Each namespace essentially looks to the datastore as another view into the application’s data. Hence, queries cannot span namespaces (at least for now) and key ranges are different per namespace. Once an entity is created, it’s namespace does not change, so doing a namespace_manager.set(…) will have no effect on its key. Similarly, once a query is created, its namespace is set. Same with memcache_service() and all other GAE APIS. Hence it’s important to know which objects have which namespaces. In my mind, since all of GAE user’s data lives in BigTable, it helps to visualize a GAE Key object as: Application ID | Ancestor Keys | Kind Name | Key Name or ID All these values provide an address to locate your application’s data. Similarly, you can imagine the multitenant key as: Application ID | Namespace| Ancestor Keys | Kind Name | Key Name or ID Now let’s briefly discuss the API (Python): Here is a quick example: The important thing to notice here is the pattern that GAE provides. It will the exact same thing for the Java APIs. The finally block is immensely important as it restores the namespace to what is was originally (before the request). Omitting the finally block will cause the namespace to be set for the duration of the request. That means that any API access whether it is datastore queries or Memcache retrieval will use the namespace previously set. Furthermore, to query for all the namespaces created, GAE provides some meta queries, as such: Resources: - - BigTable. - - - - Reference: Multitenancy in Google AppEngine (GAE) from our JCG partner Luis Atencio at the Reflective Thought blog. Related Articles :
https://www.javacodegeeks.com/2011/12/multitenancy-in-google-appengine-gae.html
CC-MAIN-2017-22
refinedweb
694
50.67
-- GENERATED by C->Haskell Compiler, version 0.16.3 Crystal Seed, 24 Jan 2009 (Haskell) -- Edit the ORIGNAL .chs file instead! {-# LINE 1 "src/HsShellScript/Commands.chs" #-}-- #hide module HsShellScript.Commands where import Prelude hiding (catch) import Control.Exception import Data.Bits -- import Directory import Foreign.C import Foreign.C.Error import Foreign.Ptr import GHC.IO import GHC.IO.Exception -- InvalidArgument, UnsupportedOperation import HsShellScript.Misc import HsShellScript.Misc import HsShellScript.Paths import HsShellScript.ProcErr import HsShellScript.Shell import System.IO.Error hiding (catch) import Data.List import Data.Maybe import Control.Monad import Text.ParserCombinators.Parsec as Parsec import System.Posix hiding (rename, createDirectory, removeDirectory) import System.Posix.Env import System.Random import System.Directory -- | -- Do a call to the @realpath(3)@ system library function. This makes the path absolute, normalizes it and expands all symbolic links. In case of an -- error, an @IOError@ is thrown. realpath :: String -- ^ path -> IO String -- ^ noramlized, absolute path, with symbolic links expanded realpath path = withCString path $ \cpath -> do res <- hsshellscript_get_realpath cpath if res == nullPtr then throwErrno' "realpath" Nothing (Just path) else peekCString res -- | Determine the target of a symbolic link. This uses the @readlink(2)@ system call. The result is a path which is either absolute, or relative to -- the directory which the symlink is in. In case of an error, an @IOError@ is thrown. The path is included and can be accessed with -- @IO.ioeGetFileName@. Note that, if the path to the symlink ends with a slash, this path denotes the directory pointed to, /not/ the symlink. In -- this case the call to will fail because of \"Invalid argument\". readlink :: String -- ^ Path of the symbolic link -> IO String -- ^ The link target - where the symbolic link points to readlink path = withCString path $ \cpath -> do res <- hsshellscript_get_readlink cpath if res == nullPtr then throwErrno' "readlink" Nothing (Just path) else peekCString res -- | Determine the target of a symbolic link. This uses the @readlink(2)@ system call. The target is converted, such that it is relative to the -- current working directory, if it isn't absolute. Note that, if the path to the symlink ends with a slash, this path denotes the directory pointed -- to, /not/ the symlink. In this case the call to @readlink@ will fail with an @IOError@ because of \"Invalid argument\". In case of any error, a -- proper @IOError@ is thrown. readlink' :: String -- ^ path of the symbolic link -> IO String -- ^ target; where the symbolic link points to readlink' symlink = do target <- readlink symlink return (absolute_path' target (fst (split_path symlink))) -- | Determine whether a path is a symbolic link. The result for a dangling symlink is @True@. The path must exist in the file system. In case of an -- error, a proper @IOError@ is thrown. is_symlink :: String -- ^ path -> IO Bool -- ^ Whether the path is a symbolic link. is_symlink path = do fill_in_location "is_symlink" $ readlink path return True `catch` (\(ioe::IOError) -> if (ioeGetErrorType ioe == InvalidArgument) then return False else ioError ioe) -- | Return the normalised, absolute version of a specified path. The path is made absolute with the current working directory, and is syntactically -- normalised afterwards. This is the same as what the @realpath@ program reports with the @-s@ option. It's almost the same as what it reports when -- called from a shell. The difference lies in the shell's idea of the current working directory. See 'cd' for details. -- -- See 'cd', 'normalise_path'. realpath_s :: String -- ^ path -> IO String -- ^ noramlized, absolute path, with symbolic links not expanded realpath_s pfad = do cwd <- getCurrentDirectory return (normalise_path (absolute_path_by cwd pfad)) -- | -- Make a symbolic link. This is the @symlink(2)@ function. Any error results in an @IOError@ thrown. The path of the intended symlink is included in -- the @IOError@ and -- can be accessed with @ioeGetFileName@ from the Haskell standard library @IO@. symlink :: String -- ^ contents of the symlink (/from/) -> String -- ^ path of the symlink (/to/) -> IO () symlink oldpath newpath = do o <- newCString oldpath n <- newCString newpath res <- foreign_symlink o n when (res == -1) $ throwErrno' ("symlink " ++ shell_quote oldpath ++ " to " ++ shell_quote newpath) Nothing (Just newpath) -- | -- Call the @du@ program. See du(1). du :: (Integral int, Read int, Show int) => int -- ^ block size, this is the @--block-size@ option. -> String -- ^ path of the file or directory to determine the size of -> IO int -- ^ size in blocks du block_gr pfad = let par = ["--summarize", "--block-size=" ++ show block_gr, pfad] parsen ausg = case reads ausg of [(groesse, _)] -> return groesse _ -> errm ("Can't parse the output of the \"du\" program: \n" ++ quote ausg ++ "\nShell command: " ++ shell_command "du" par) >> fail ("Parse error: " ++ ausg) in pipe_from (exec "/usr/bin/du" par) >>= parsen -- | -- Create directory. This is a shorthand to @System.Directory.createDirectory@ from the Haskell standard -- library. In case of an error, the path is included in the @IOError@, which GHC's implementation neglects to do. mkdir :: String -- ^ path -> IO () mkdir path = createDirectory path `catch` (\(ioe::IOError) -> ioError (ioe { ioe_filename = Just path })) -- | -- Remove directory. This is -- @Directory.removeDirectory@ from the Haskell standard -- library. In case of an error, the path is included in the @IOError@, which GHC's implementation neglects to do. rmdir :: String -- ^ path -> IO () rmdir path = removeDirectory path `catch` (\(ioe::IOError) -> ioError (ioe { ioe_filename = Just path })) -- | Remove file. This is @Directory.removeFile@ from the Haskell standard library, which is a direct frontend to the @unlink(2)@ system call in GHC. rm :: String -- ^ path -> IO () rm = removeFile {- | Change directory. This is an alias for @Directory.setCurrentDirectory@ from the Haskell standard library. In case of an error, the path is included in the @IOError@, which GHC's implementation neglects to do. Note that this command is subtly different from the shell's @cd@ command. It changes the process' working directory. This is always a realpath. Symlinks are expanded. The shell, on the other hand, keeps track of the current working directory separately, in a different way: symlinks are /not/ expanded. The shell's idea of the working directory is different from the working directory which a process has. This means that the same sequence of @cd@ commands, when done in a real shell script, will lead into the same directory. But the working directory as reported by the shell's @pwd@ command may differ from the corresponding one, reported by @getCurrentDirectory@. (When talking about the \"shell\", I'm talking about bash, regardless of whether started as @\/bin\/bash@ or in compatibility mode, as @\/bin\/sh@. I presume it's the standard behavior for the POSIX standard shell.) See 'pwd'. -} cd :: String -- ^ path -> IO () cd path = setCurrentDirectory path `catch` (\(ioe::IOError) -> ioError (ioe { ioe_filename = Just path })) -- | -- Get program start working directory. This is the @PWD@ environent -- variable, which is kept by the shell (bash, at least). It records the -- directory path in which the program has been started. Symbolic links in -- this path aren't expanded. In this way, it differs from -- @getCurrentDirectory@ from the Haskell standard library. pwd :: IO String pwd = fmap (fromMaybe "") (System.Posix.Env.getEnv "PWD") {- | Execute @\/bin\/chmod@ >chmod = run "/bin/chmod" -} chmod :: [String] -- ^ Command line arguments -> IO () chmod = run "/bin/chmod" {- | Execute @\/bin\/chown@ >chown = run "/bin/chown" -} chown :: [String] -- ^ Command line arguments -> IO () chown = run "/bin/chown" -- | -- Execute the cp program cp :: String -- ^ source -> String -- ^ destination -> IO () cp from to = run "cp" [from, to] -- | -- Execute the mv program mv :: String -- ^ source -> String -- ^ destination -> IO () mv from to = run "mv" ["--", from, to] number :: Parser Int number = do sgn <- ( (char '-' >> return (-1)) <|> return 1 ) ds <- many1 digit return (sgn * read ds) <?> "number" -- Parser for the output of the "mt status" command. parse_mt_status :: Parser ( Int -- file number , Int -- block number ) parse_mt_status = do (fn,bn) <- parse_mt_status' (Nothing, Nothing) return (fromJust fn, fromJust bn) where try = Parsec.try parse_mt_status' :: (Maybe Int, Maybe Int) -> Parser (Maybe Int, Maybe Int) parse_mt_status' st = do st' <- parse_mt_status1' st ( parse_mt_status' st' <|> return st' ) parse_mt_status1' :: (Maybe Int, Maybe Int) -> Parser (Maybe Int, Maybe Int) parse_mt_status1' st@(fn,bn) = try (do string "file number = " nr <- number newline return (Just nr, bn) ) <|> try (do string "block number = " nr <- number newline return (fn, Just nr) ) <|> (manyTill anyChar newline >> return st) -- | -- Run the command @mt status@ for querying the tape drive status, and -- parse its output. mt_status :: IO (Int, Int) -- ^ file and block number mt_status = do out <- pipe_from (exec "/bin/mt" ["status"]) case (parse parse_mt_status "" out) of Left err -> ioError (userError ("parse error at " ++ show err)) Right x -> return x -- | -- The @rename(2)@ system call to rename and\/or move a file. The @renameFile@ action from the Haskell standard library doesn\'t do it, because -- the two paths may not refer to directories. Failure results in an @IOError@ thrown. The /new/ path is included in -- the @IOError@ and -- can be accessed with @IO.ioeGetFileName@. rename :: String -- ^ Old path -> String -- ^ New path -> IO () rename oldpath newpath = do withCString oldpath $ \coldpath -> withCString newpath $ \cnewpath -> do res <- foreign_rename coldpath cnewpath when (res == -1) $ throwErrno' ("rename " ++ shell_quote oldpath ++ " to " ++ shell_quote newpath) Nothing (Just newpath) -- | -- Rename a file. This first tries 'rename', which is most efficient. If it fails, because source and target path point to different file systems -- (as indicated by the @errno@ value @EXDEV@), then @\/bin\/mv@ is called. -- -- See 'rename', 'mv'. rename_mv :: FilePath -- ^ Old path -> FilePath -- ^ New path -> IO () rename_mv old new = HsShellScript.Commands.rename old new `catch` (\(ioe::IOError) -> if ioeGetErrorType ioe == UnsupportedOperation then do errno <- getErrno -- Foreign.C.Error.errnoToIOError matches many errno values to UnsupportedOperation. In order to determine -- if it is the right one, the errno is taken again. This relies on no system calls in between. if (errno == eXDEV) then run "/bin/mv" ["--", old, new] else ioError ioe else ioError ioe ) {- | Rename a file or directory, and cope with read only issues. This renames a file or directory, using @rename@, sets the necessary write permissions beforehand, and restores them afterwards. This is more efficient than @force_mv@, because no external program needs to be called, but it can rename files only inside the same file system. See @force_cmd@ for a detailed description. The new path may be an existing directory. In this case, it is assumed that the old file is to be moved into this directory (like with @mv@). The new path is then completed with the file name component of the old path. You won't get an \"already exists\" error. >force_rename = force_cmd rename See 'force_cmd', 'rename'. -} force_rename :: String -- ^ Old path -> String -- ^ New path -> IO () force_rename = force_cmd HsShellScript.Commands.rename {- | Move a file or directory, and cope with read only issues. This moves a file or directory, using the external command @mv@, sets the necessary write permissions beforehand, and restores them afterwards. This is less efficient than @force_rename@, because the external program @mv@ needs to be called, but it can move files between file systems. See @force_cmd@ for a detailed description. >force_mv src tgt = fill_in_location "force_mv" $ force_cmd (\src tgt -> run "/bin/mv" ["--", src, tgt]) src tgt See 'force_cmd', 'force_mv'. -} force_mv :: String -- ^ Old path -> String -- ^ New path or target directory -> IO () force_mv src tgt = fill_in_location "force_mv" $ force_cmd (\src tgt -> run "/bin/mv" ["--", src, tgt]) src tgt {- | Rename a file with 'rename', or when necessary with 'mv', and cope with read only issues. The necessary write permissions are set, then the file is renamed, then the permissions are restored. First, the 'rename' system call is tried, which is most efficient. If it fails, because source and target path point to different file systems (as indicated by the @errno@ value @EXDEV@), then @\/bin\/mv@ is called. >force_rename_mv old new = fill_in_location "force_rename_mv" $ force_cmd rename_mv old new See 'rename_mv', 'rename', 'mv', 'force_cmd'. -} force_rename_mv :: FilePath -- ^ Old path -> FilePath -- ^ New path -> IO () force_rename_mv old new = fill_in_location "force_rename_mv" $ force_cmd rename_mv old new {- | Call a command which moves a file or directory, and cope with read only issues. This function is for calling a command, which renames files. Beforehand, write permissions are set in order to enable the operation, and afterwards the permissions are restored. The command is meant to be something like @rename@ or @run \"\/bin\/mv\"@. In order to change the name of a file or dirctory, but leave it in the super directory it is in, the super directory must be writeable. In order to move a file or directory to a different super directory, both super directories and the file\/directory to be moved must be writeable. I don't know what this behaviour is supposed to be good for. This function copes with the case that the file\/directory to be moved or renamed, or the super directories are read only. It makes the necessary places writeable, calls the command, and makes them read only again, if they were before. The user needs the necessary permissions for changing the corresponding write permissions. If an error occurs (such as file not found, or insufficient permissions), then the write permissions are restored to the state before, before the exception is passed through to the caller. The command must take two arguments, the old path and the new path. It is expected to create the new path in the file system, such that the correct write permissions of the new path can be set by @force_cmd@ after executing it. The new path may be an existing directory. In this case, it is assumed that the old file is to be moved into this directory (like with @mv@). The new path is completed with the file name component of the old path, before it is passed to the command, such that the command is supplied the complete new path. Examples: >force_cmd rename from to >force_cmd (\from to -> run "/bin/mv" ["-i", "-v", "--", from, to]) from to See 'force_rename', 'force_mv', 'rename'. -} force_cmd :: (String -> String -> IO ()) -- ^ Command to execute after preparing the permissions -> String -- ^ Old path -> String -- ^ New path or target directory -> IO () force_cmd cmd oldpath newpath0 = do isdir <- is_dir newpath0 let newpath = if isdir then newpath0 ++ "/" ++ snd (split_path oldpath) else newpath0 old_abs <- absolute_path oldpath new_abs <- absolute_path newpath let (olddir, _) = split_path old_abs (newdir, _) = split_path new_abs if olddir == newdir then -- Don't need to make the file/directory writeable. force_writeable olddir (cmd oldpath newpath) else -- Need to make both the file/dirctory and both super directories writeable. let cmd' = do res <- cmd oldpath newpath return (newpath, res) in force_writeable olddir (force_writeable newdir (force_writeable2 oldpath cmd')) `catch` (\(ioe::IOError) -> ioError (if ioe_location ioe == "" || ioe_location ioe == "force_writeable" then ioe { ioe_location = "force_cmd" } else ioe)) {- | Make a file or directory writeable for the user, perform an action, and restore its writeable status. An IOError is raised when the user doesn't have permission to make the file or directory writeable. >force_writeable path io = force_writeable2 path (io >>= \res -> return (path, res)) Example: >-- Need to create a new directory in /foo/bar, even if that's write protected >force_writeable "/foo/bar" $ mkdir "/foo/bar/baz" See 'force_cmd', 'force_writeable2'. -} force_writeable :: String -- ^ File or directory to make writeable -> IO a -- ^ Action to perform -> IO a -- ^ Returns the return value of the action force_writeable path io = add_location "force_writeable" $ force_writeable2 path (io >>= \res -> return (path, res)) {- | Make a file or directory writeable for the user, perform an action, and restore its writeable status. The action may change the name of the file or directory. Therefore it returns the new name, along with another return value, which is passed to the caller. The writeable status is only changed back if it has been changed by @force_writeable2@ before. An IOError is raised when the user doesn'h have permission to make the file or directory writeable, or when the new path doesn't exist. See 'force_cmd', 'force_writeable'. -} force_writeable2 :: String -- ^ File or directory to make writeable -> IO (String, a) -- ^ Action to perform -> IO a force_writeable2 path_before io = add_location "force_writeable2" $ do writeable <- fileAccess' path_before False True False when (not writeable) $ set_user_writeable path_before (path_after, res) <- catch io (\(e::SomeException) -> do when (not writeable) $ catch (set_user_readonly path_before) ignore -- Don't let failure to restore the status make us loose the actual exception throwIO e ) when (not writeable) $ set_user_readonly path_after return res where ignore :: SomeException -> IO () ignore _ = return () set_user_writeable path = do filemode <- fmap fileMode (getFileStatus' path) fill_in_filename path $ setFileMode' path (filemode .|. ownerWriteMode) set_user_readonly path = do filemode <- fmap fileMode (getFileStatus' path) fill_in_filename path $ setFileMode' path (filemode .&. (complement ownerWriteMode)) -- | -- Call the @fdupes@ program in order to find identical files. It outputs a -- list of groups of file names, such that the files in each group are -- identical. Each of these groups is further analysed by the @fdupes@ -- action. It is split to a list of lists of paths, such that each list -- of paths corresponds to one of the directories which have been searched -- by the @fdupes@ program. If you just want groups of identical files, then apply @map concat@ to the result. -- -- /The/ @fdupes@ /program doesn\'t handle multiple occurences of the same directory, or in recursive mode one specified directory containing another, -- properly. The same file may get reported multiple times, and identical files may not get reported./ -- -- The paths are normalised (using 'normalise_path'). fdupes :: [String] -- ^ Options for the fdupes program -> [String] -- ^ Directories with files to compare -> IO [[[String]]] -- ^ For each set of identical files, and each of the specified directories, the paths of the identical files in this -- directory. fdupes opts paths = do let paths' = map normalise_path paths paths'' = map (++"/") paths' out <- fmap lines $ pipe_from (run "/usr/bin/fdupes" (opts ++ ["--"] ++ paths')) let grps = groups out return (map (sortgrp paths'') grps) where groups [] = [] groups l = let l' = dropWhile (== "") l (g,rest) = span (/= "") l' in if g == [] then [] else (g : groups rest) split p [] = ([], []) split p (x:xs) = let (yes, no) = split p xs in if p x then (x:yes, no) else (yes, x:no) -- result: ( <paths within the directory>, <rest of paths> ) path1 grp dir = split (isPrefixOf dir) grp -- super directories -> Group of identical files -> list of lists of files in each directory sortgrp dirs [] = map (const []) dirs sortgrp [] grp = error ("Bug: found paths which don't belong to any of the directories:\n" ++ show grp) sortgrp (dir:dirs) grp = let (paths1, grp_rest) = path1 grp dir in (paths1 : sortgrp dirs grp_rest) replace_location :: String -> String -> IO a -> IO a replace_location was wodurch io = catch io (\(ioe::IOError) -> if ioe_location ioe == was then ioError (ioe { ioe_location = wodurch }) else ioError ioe ) foreign import ccall safe "HsShellScript/Commands.chs.h hsshellscript_get_realpath" hsshellscript_get_realpath :: ((Ptr CChar) -> (IO (Ptr CChar))) foreign import ccall safe "HsShellScript/Commands.chs.h hsshellscript_get_readlink" hsshellscript_get_readlink :: ((Ptr CChar) -> (IO (Ptr CChar))) foreign import ccall safe "HsShellScript/Commands.chs.h symlink" foreign_symlink :: ((Ptr CChar) -> ((Ptr CChar) -> (IO CInt))) foreign import ccall safe "HsShellScript/Commands.chs.h rename" foreign_rename :: ((Ptr CChar) -> ((Ptr CChar) -> (IO CInt)))
http://hackage.haskell.org/package/hsshellscript-3.3.0/docs/src/HsShellScript-Commands.html
CC-MAIN-2014-35
refinedweb
3,049
62.78
This demo explores many of 2013s new features, including: LabVIEW Bookmark Manager Attachable Comments New Excel Integration (Optional) Mouse Wheel Support for Controls Event-Based Programming Improvements o Static Event for Mouse Wheel Interaction o Event Inspector Window o High Priority User Events Simplified Web Service Experience o Project Item o Debugging o Deploy with Executable DEMO REQUIREMENTS LabVIEW 2013 Application Builder NOTE: If Package Manager claims it cannot access LabVIEW over VI Server, open LabVIEW and Tools >> Options drop-down menu. Look at the VI Server category, and ensure both Machine Access and Exported VIs subcategories have * items. This will be absolutely sure that Package Manager may access LabVIEW. 6. Under Demonstrations, choose Whats New in LabVIEW 2013 and click next then finish through the remaining dialogs. NOTE: Once the VI Package file has been installed. You only need to begin this presentation and demo from the create project dialog to start with the same files each time. All necessary files will be accessible from the project created through this dialog. STEP-BY-STEP INSTRUCTIONS 1. DO: PRESENT THE WHATS NEW SLIDE DECK THROUGH THE DATA DASHBOARD 2.2 SLIDE. 2. Explain: Each year, NI strives to incorporate user feedback when determining the features well add to LabVIEW. Now that weve discussed some of the new hardware for 2013, Id like to jump over to LabVIEW and show you some of the new features weve implemented to help you be more productive. 3. DO: Ensure the Whats New in 2013 project is open. 4. DO: Open Main.VI and Run It 5. Explain: Anyone who has ever inherited and had to interpret existing LabVIEW code knows the value of good documentation. 6. DO: Open the Bookmark example VI by clicking the Bookmark Example button on the Main.vi front panel. 7. Explain: Lets imagine we inherited this particular application, based on a LabVIEW sample project, and were tasked with adding additional data logging functionality. Where to start? 8. DO: Point out the Logging Message Loop VI, open it, and switch to its block diagram. 9. Explain: For many applications, our code and VI hierarchies quickly become so complex we cannot identify, at a glance, where specific functionality resides. In this case, where I want additional data logging functionality, I could continue to parse through this code, ultimately determining where modifications were necessary, but what if there was a better way? In LabVIEW 2013, there is, and its called bookmarks. 10. DO: Click the View dropdown and select Bookmark manager. 11. Explain: In 2013, I can turn any comment into a bookmark by simply beginning the comment with a hash tag (#). This prompts LabVIEW to recognize that comment as a bookmark, and aggregate it into this bookmark manager window. 13. Explain: Notice Ive left a to -do bookmark for, perhaps, another developer, indicating I want this parameter change made. 14. DO: Double-click on the #1-Bookmark bookmark. 15. Explain: By simply double-clicking this bookmark, I am able to navigate them directly to the location in my application I wished for them to view and modify. In LabVIEW 2013, all sample projects will make use of the bookmarks and this feature will help single and multi developer efforts alike by simplifying the process of code navigation and note aggregation. NI considers this to be a best practice for code documentation moving forward. NOTE: You may not navigate to Bookmarks contained in running LabVIEW code. In this demo, the Main VI is running, but the sample project code, used to show Bookmarks, is not running to facilitate this demo. 16. Explain: Another feature weve implemented, to complement the addition of bookmarks, is the ability to attach comments on the block diagram. In past, LabVIEW developers used free labels to describe algorithms, leave placeholder notes, and generally put their thoughts onto block diagrams. But, aside from decorations, there has never been a way to create a direct and visual relationship between a comment and the code it describes. LabVIEW 2013 changes this by allowing us to attach our comments to any structure, function, or constant on the block diagram. 17. Explain: To attach this comment to any function, structure, or constant on my block diagram, I simply move my mouse to the bottom right, click the arrow icon that appears and click again on the entity on my diagram I care about. 18. DO: Move mouse to bottom-right of #1-Bookmark comment, look for yellow boxed arrow to appear, click and move mouse to show the carried arrow. Move mouse to the border of 19. Explain: Now that Ive attached this comment, I may move the comment wherever I like and still maintain a visual relationship with this control Ive chosen. 20. DO: Click and move the comment around to show this relationship. 21. Explain: Another great attribute of attached comments is their persistence through block diagram cleanup. After attaching, LabVIEWs block diagram cleanup utility will now take this attachment into account, and attempt to keep the comment close to the item its attached to. No longer will documentation be sent adrift when using this convenient utility. 22. DO: Close the Log Data VI, Boo kmark Manager Window, and Bookmark Example VI. Save no changes. You should now be back to the front panel of Main.VI 23. Explain: Well use this project to show you some additional new productivity features of LabVIEW 2013. This project is designed to simulate a temperature alarming application, and is based on the common producer/consumer, event-based architecture. OPTIONAL FEATURE DEMO 24. Do: Double-click Bookmark #3-XLSX_Integration to open the Log Data VI. 26. Explain: In LabVIEW 2013, you may now write data directly to an Excel file type from the Write to Measurement File Express VI. So, if all I need to do is take my data and export it, I have a quick way to do this using LabVIEW 2013, without needing to use the .lvm text file format. NOTE: This functionality is WRITE-ONLY. We cannot read this file back using the Read From Measurement File Express VI. Also, this functionality makes use of the Open Office file format. So, once the resultant .xlsx file has been opened and saved in Microsoft Excel, the NI Report Generation Toolkit for Microsoft Office would then be necessary to use this file in LabVIEW. Moreover, no extensive formatting options are available, so the Report Generation Toolkit is still recommended for any report creation in LabVIEW. 27. Explain: Each year, we try to incorporate our users feedback, and a highly requested feature has been native mouse wheel support for front panels. Weve added this support in 2013, allowing simplified programming of mouse wheel interaction with LabVIEW UIs. 28. DO: Click the Acquire button to populate the temperature graph. 29. Explain: In this example application, Im performing some logic on my simulated temperature readings to determine if I should alarm my user, based on a limit theyve set. [The default limit value is 65, which should alarm when you click acquire] If I want to qui ckly change my limit value, I can now accomplish that with my mouse wheel. 30. DO: Hover the mouse over the Temperature Limit control and mouse wheel up and down to change the control value. This should also update the cursor value on the graph. [You may also use your touchpads scroll if there is no mouse] 31. Explain: As I mentioned, this program captures UI interaction with events. In 2012, we introduced new templates and sample projects as starting points and references for best practice architectures. Many of these center around using Event Structures to capture user interaction. And, in adding mouse wheel support, weve added an event to capture any mouse wheel interaction with items on our front panel. This allows me to do things like better visualize the data Ive taken by zooming in and out on my graph. 32. DO: Mouse wheel up and down on the Temperature Graph. This should automatically adjust the X and Y scale to simulate a zoom-in and zoom-out effect. 33. Explain: In this case, Ive written a VI to programmatically adjust the scales of my graph. [Just to point out this is not built-in functionality] But, this mouse interaction could be captured to prompt any response your application requires. 34. Explain: I mentioned templates and sample projects, and their use of event structures. We consider it best practice for any application, which must respond to user input, to leverage an event structure. In 2013, were making it simpler to debug event-based programs. We have always been able to use standard LabVIEW debugging tools for events, but this wasnt always sufficient for applications processing a high volume of events. 35. DO: Move the front panel to the left side of the screen. 36. Explain: To better facilitate troubleshooting event-based code, weve introduced the event inspector window. This window allows us to view what events have been processed by our event structure, and also see any that are waiting to be processed. If you are new to event programming, you may be interested to know there is a queue working behind the scenes of an event structure to ensure we do not lose any user interaction wed like to process. 37. DO: Move the block diagram to the right side of the screen. Snap to the right side if using Windows 7. Right-click on the event structure border in the User Interface Loop and click Event Inspector Window . Move the Event Inspector Window to the right so front panel and inspector window are side-by-side. 38. Explain: Notice that as I interact with my application, I see those events being captured in real-time. This can help me better understand how my application is behaving. And, the Event Inspector Window allows me to save my event logs to a text file, for documentation purposes. 39. DO: Disable Log Timeout Events. 40. DO: Increase the Application Timing (ms) to 2000ms or greater. This will slow down the event handling loop to better show events stacking in the event queue. 41. DO: Click Acquire or scroll the mouse wheel on the graph several times to build up a set of events in the event queue, shown in the Event Inspector Window. 42. Explain: If anyone has done event-based programming in past, they may be familiar with user events. These are simply events which are generated by logic in our program, rather than by a UI interaction. If anyone has taken the new Core 3 training, which uses the Queued Message Handler architecture, this leverages user events. 43. Explain: To build on templates and sample projects, and to encourage best practice architecture choices using event-based programming, weve implemented a highly requested feature, which is the ability to specify the priority of a user event. I mentioned there is a queue behind the scenes of an event structure, ensuring we do not lost any UI interaction from one loop iteration to the next. In past, user events always received the same priority as other events. This mean that system-critical tasks, such as a shutdown to avoid overheating, might require an extra queue or notifier in a large application, so they could be given the correct response priority. 44. Explain: In LabVIEW 2013, were helping users leverage the event -based architectures they are implementing, and handle these system-critical tasks with less development effort. 45. DO: Show the block diagram, and make sure the Acquire case is shown in the Processing Loop 46. Explain: In this example application, Im going to alarm my user if the result of my temperature data acquisition and subsequent analysis is above the limit theyve specified. However, Im going to consider this a system-critical response, something that must be processed before all other actions, in the event an operator shutdown was required in response to this excessive temperature. Im facilitating this with a high priority user eve nt. 47. DO: Point out the temperature comparison and the Generate User Event function, which generates a high priority user event when called. 48. Explain: To show the way LabVIEW responds to high priority events, Ive slowed down this application so we can observe what happens in the event queue when this high-priority event occurs. 49. DO: Bring the Event Inspector Window to the front so the UI and Event Inspector are sideby-side. Clear all events and ensure that timeouts are not being logged. Click the Acquire button, and then initiate several mouse wheel events on the Temperature Graph to fill up the event structure queue. 50. Explain: Notice that when a high priority event is fired, it automatically goes to the top of the event structures list of items to process. Thus, we can make efficient use of this already-in- 51. DO: Close the block diagram 52. Explain: We talked about some new development environment enhancements, available in LabVIEW 2013. Id like to shift over toward the subject of deployment. 53. Explain: For anyone needing secure remote access to LabVIEW applications, whether running on a desktop or embedded target, web services are a great option for this communication. They allow SSL security, are industry standard and may be accessed by thin clients over a network, and are IT friendly. Web Services have been available in LabVIEW since 8.6, but weve gone to great lengths to simplify the process of creating, deploying, and debugging web services. 54. Explain: Web Services are now created as a project item. To do this, we simply right-click on My Computer, then choose New and Web Service which creates a new item for us. 55. DO: Steps listed above 56. Explain: Ive already added a web service, to show the new experience. 57. DO: Expand Broadcast Alarm web service project item and Web Resources folder. 58. DO: Open the Web Service VI.vi, move both front panel and block diagram to the right side of the screen and overlay as shown. 59. Explain: In LabVIEW 2013, not only may I create and deploy web services to remotely access my LabVIEW applications, but I am able to debug the web method VIs, comprising that web service, directly from the LabVIEW project. This is an all new feature of LabVIEW 2013 web services. Moreover, youll notice this web service VI uses global variables. LabVIEW 20 13 web services are created in the same namespace as regular LabVIEW applications, and therefore may use global variables and functional global variables to communicate, where before wed have need to use TCP/UDP etc. to facilitate this cross -environment communication. 60. Explain: I can publish my web service, like you might be used to in LabVIEW. 61. DO: Right-click the Broadcast Alarm web service name, choose Application Web Server and show the options to publish and unpublish. 62. Explain: But, in LabVIEW 2013, I can begin a debugging session on my web method VI. I do this by right-clicking my Broadcast Alarm web service project items and choosing start. Notice that my Web method VI is now reserved and waiting to run. 63. DO: Right-click Broadcast Alarm and choose start. Then, point out the VI being reserved. 64. Explain: I may now use standard LabVIEW debugging tools to understand the behavior of my web service. To run the web service, I may obtain the URL of my web service by rightclicking on my web method VI. 65. DO: Turn on highlight execution for the Web Service VI. Right-click on Web Service VI.vi (GET) and choose Show Method URL. Mention that LabVIEW automatically generates the URL for you. [This used to be a pain point for customers] Click on Copy URL and close the HTTP Method URL dialog. 66. DO: Open a browser and snap it to the left-side of the screen so both the browser window and Web Service VI block diagram are shown at a time. Paste the URL into the browsers URL bar and hit enter. 67. Explain: This starts my web method, allows me to observe the web service VIs behavior with standard LabVIEW debugging tools [in this case highlight execution], and ultimately I 68. DO: Close the browser, close the web service VI, right-click on Broadcast Alarm web service item and choose stop to end debug session. 69. DO: Expand the Whats New in LabVIEW 2013 projects build specifications, right-click on LabVIEW 2013 EXE and choose Properties. 70. Explain: We did not forget about deployment when it comes to web services. In past there were many preparations necessary to deploy an application (EXE) which leveraged a web service. In 2013, weve streamlined this process by including a web service category in EXE build specs. Now, you can choose any web service included in your project, select SSL options, and build this web service into your EXE. It will then deploy automatically when the EXE runs, making remote access to both desktop and embedded LabVIEW applications a breeze. 71. DO: Close the EXE build. 72. Explain: Now Im going to switch back to slides and show you a few more deployment related features weve added in LabVIEW 2013.
https://www.scribd.com/document/220985274/Manual-Labview-2014
CC-MAIN-2019-35
refinedweb
2,881
62.07
Emgu CV is a .NET wrapper for OpenCV (Open Source Computer Vision Library) which is a collection of over 2500 algorithms focused on real-time image processing and machine learning. OpenCV lets you write software for: Open CV is written in highly optimized C/C++, supports multi-core execution and heterogeneous execution platforms (CPU, GPU, DSP...) thanks to OpenCL. The project was launched in 1999 by Intel Research and is now actively developed by open source community members and contributors from companies like Google, Microsoft or Honda... My experience with Emgu CV/OpenCV comes mostly from working on paintball turret project (which I use to have a break from "boring" banking stuff at work). I'm far from computer vision expert but I know enough to teach you how to detect a mini quadcopter flying in a room: In the upper-left corner, you can see frame captured from video file, following that is the background frame (static background and camera makes our task simpler)... Next to it are various stages of image processing run before drone (contour) detection is executed. The last frame shows the original frame with drone position marked. Job done! Oh, and if you are wondering what is the "snow" seen in the video: these are some particles I put to make the video a bit more "noisy"... I assume that you know a bit about C# programming, but are completely new to Emgu CV/OpenCV. By the end of this tutorial, you will know how to: Sounds interesting? Read on! I plan to give detailed description of the whole program (don't worry: it's just about 200 lines) but if you would like to jump straight to the code visit this GitHub repository:. It's a simple console app - I've put everything into Program.cs so you can't get lost! Mind that because Emgu CV/OpenCV binaries are quite large these are not included in the repo. This should not be a problem because Visual Studio 2017 should be able to automatically download (restore) the packages... Here you can download the video I've used for testing: (4.04 MB, MPEG4 H264 640x480 25fps). To start, let's use Visual Studio Community 2017 to create new console application: Now we need to add Emgu CV to our project. The easiest way to do it is to use Nuget to install Emgu CV package published by Emgu Corporation. To do so, run "Install-Package Emgu.CV" command in Package Manager Console or utilize Visual Studio UI: If all goes well, package.config and DLL references should look like this (you don't have to worry about ZedGraph): Now, we are ready to test if OpenCV's magic is available to us through Emgu CV wrapper library. Let's do it by creating a super simple program that loads an image file and shows it in a window with obligatory "Hello World!" title: Hello World! using Emgu.CV; // Contains Mat and CvInvoke classes class Program { static void Main(string[] args) { Mat picture = new Mat(@"C:\Users\gby\Desktop\Krakow_BM.jpg"); // Pick some path on your disk! CvInvoke.Imshow("Hello World!", picture); // Open window with image CvInvoke.WaitKey(); // Render image and keep window opened until any key is pressed } } Run it and you should see a window with the image you've selected. Here's what I got - visit Kraków if you like my picture :) The above code loads a picture from a file into Mat class instance. Mat is a n-dimensional dense array containing pointer to image matrix and a header describing this matrix. It supports a reference counting mechanism that saves memory if multiple image processing operations act on same data... Don't worry if it sounds a bit confusing. All you need to know now is that we can load images (from files, webcams, video frames, etc.) into Mat objects. If you are curious, read this nice description of Mat. Mat The other interesting thing you can see in the code is the CvInvoke class. You can use it to call OpenCV functions from your C# application without dealing with complexity of operating native code and data structures from managed code - Emgu the wrapper will do it for you through PInvoke mechanism. Ok, so now, you have some idea about what Emgu CV/OpenCV libraries are and how to bring them into your application. Next post coming soon... Update: Part 2 is.
https://www.codeproject.com/Articles/1199224/Detecting-a-Drone-OpenCV-in-NET-for-Beginners-Emgu
CC-MAIN-2018-22
refinedweb
741
62.98
#include <timezone.h> Detailed Description Time zone information. This class stores information about a time zone. Definition at line 34 of file timezone.h. Constructor & Destructor Documentation Construct invalid time zone. Definition at line 48 of file timezone.cpp. Construct time zone. - Parameters - Definition at line 53 of file timezone.cpp. Copy constructor. Definition at line 58 of file timezone.cpp. Destroys the time zone. Definition at line 63 of file timezone.cpp. Member Function Documentation Return, if this time zone object is valid. Definition at line 78 of file timezone.cpp. Return offset in minutes relative to UTC. Definition at line 73 of file timezone.cpp. Set time zone offset relative to UTC. - Parameters - Definition at line 67 of file timezone.cpp. Return string representation of time zone offset. Definition at line 114 of file timezone.cpp. Friends And Related Function Documentation Serializes the timezone object into the stream. Initializes the timezone object from the stream..
https://api.kde.org/kdepim/kcontacts/html/classKContacts_1_1TimeZone.html
CC-MAIN-2019-26
refinedweb
157
55.5
The process of scheduling a custom task includes two steps: Writing the code that performs the required actions and registering it in the CMS as a scheduled task. Each scheduled task must be defined by a class that implements the CMS.Scheduler.ITask interface. To integrate this type of class into the application, you can add a new assembly (Class library) to your project and include it there. In this case, it is necessary to add the appropriate references to both the assembly and the main CMS project. Alternatively, you can define scheduled tasks in App_Code without the need to compile an assembly. This approach is described in the example below: 1. Open your web project, expand the App_Code folder (or Old_App_Code if you installed Kentico CMS as a web application), navigate to the Samples\Classes folder and edit the MyTask.cs file. This file already contains a sample class that follows the basic structure required for a scheduled task. Modify the code according to the following: [C#] The Execute method must always be included when writing a scheduled task. It is called whenever the given task is executed, so it must contain all code implementing the required functionality. In this example, the task only creates a record in the application's event log so that you can confirm that it is being executed. The TaskInfo parameter of the method can be used to access the data fields of the corresponding scheduled task object. As you can see in the code above, the content of the TaskData field is added into the details of the event log entry. The string returned by the method will be displayed in the administration interface when the task is executed and can be used to describe the result. You can leave it as null in this case. Save the changes made to the file. 2. Next, it is necessary to ensure that the MyTask class is loaded when the given scheduled task is executed. Expand the Samples\Modules folder and open the SampleClassLoaderModule.cs file, which demonstrates how this can be done. You do not have to make any modifications for the purposes of this example, since the sample class loader handles the Custom.MyTask class by default. An object of the appropriate class is assigned by the ClassHelper_OnGetCustomClass event handler: [C#] In the case of scheduled tasks, the value of the ClassName property of the handler's ClassEventArgs parameter will match the Task class name specified for the given task (Custom.MyTask in this example). The value is checked to identify which specific task was executed, so that an instance of the correct class can be created and passed on. Build the project if you installed it as a web application. Go to Site Manager -> Administration -> Scheduled tasks and select the Site for which the task should be scheduled (or (global) if it should be performed for all sites or affect global objects). Next, click the New task link and fill in the properties of the task according to the following: •Task display name - Custom task; sets a name for the task that will be shown in the administration interface. •Task name - Custom_task; sets a name that will serve as an identifier for the scheduled task. •Task assembly name: App_Code; this field must contain the name of the assembly where the task is implemented. •Task class name: Custom.MyTask; specifies the exact class (including any namespaces) that defines the functionality of the scheduled task. •Task interval - sets the time interval between two executions of the task. This does not ensure that the task will be executed at the exact time, only that it will be considered ready to be executed. The precise execution time depends on the general settings of the scheduler. •Task data - data which should be provided to the assembly. This field can be accessed from the code of the task, so it can be used as a parameter to easily modify the task without having to edit its implementation. In this example, any entered content will be included in the details of the event log entry created by the task. •Task enabled: yes (checked); this field indicates if the task should be scheduled. •Delete task after last run - indicates if the task should be deleted after its final run (applicable if the task is set to run only once). •Run task in separate thread - indicates if the task should be executed in a separate thread in order to improve application performance. •Use. •Server name - name of the web farm server where the task should be executed. This field is applicable only if your application runs in a web farm environment. •Create tasks for all web farm servers - if checked, tasks will be created for all web farm servers and the Server name field will be grayed out. This field is displayed only if your application runs in a web farm environment. Click OK. The task will now be executed regularly according to the specified interval. To check the results once the task is enabled, go to Site Manager -> Administration -> Event log and look for entries with MyTask as their Source.
http://devnet.kentico.com/docs/6_0/devguide/scheduling_a_custom_code.htm
CC-MAIN-2018-05
refinedweb
860
61.16
Garrett Cooper <yanegomi at gmail.com> added the comment: My initial analysis was incorrect after talking with the bash(1) folks. test(1) is doing things wrong too: case FILEX: /* XXX work around eaccess(2) false positives for superuser */ if (eaccess(nm, X_OK) != 0) return 0; if (S_ISDIR(s.st_mode) || geteuid() != 0) return 1; return (s.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) != 0; So it looks like test(1) is broken as well (doesn't check for ACLs, or MAC info). Interesting why it's only implemented for X_OK though... Based on this analysis, I'd say that access(2) is broken on FreeBSD and needs to be fixed. ---------- _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
https://mail.python.org/pipermail/docs/2010-July/000789.html
CC-MAIN-2016-44
refinedweb
116
69.68
Synchronous and asynchronous code are not directly compatible in that the functions must be called differently depending on the type. This limits what can be done, for example in how Quart interacts with Flask extensions and any effort to make Flask directly asynchronous. In my opinion it is much easier to start with an asynchronous codebase that calls synchronous code than vice versa in Python. I will try and reason why below. This is mostly easy in that you can either call, or via a simple wrapper await a synchronous function, async def example(): sync_call() await asyncio.coroutine(sync_call)() whilst this doesn’t actually change the nature, the call is synchronous, it does work. This is where things get difficult, as it is only possible to create a single event loop. Hence this can only be used once, def example(): loop = asyncio.get_event_loop() loop.run_until_complete(async_call()) therefore if you are not at the very outer scope it isn’t really possible to call asynchronous code from a synchronous function. This is problematic when dealing with Flask extensions as for example the extension may have something like, @app.route('/') def route(): data = request.form return render_template_string("{{ name }}", name=data['name']) whilst the route function can be wrapped with the asyncio.coroutine function and hence awaited, there is no (easy?) way to insert the await before the request.form and render_template calls. asyncio.coroutine await request.form render_template It is for this reason that a proxy object, FlaskRequestProxy, and render, render_template() functions are created for the Flask extensions. The former adding synchronous request methods and the other providing synchronous functions. FlaskRequestProxy render_template() Quart monkey patches a sync_wait method onto the base event loop allowing for definitions such as, sync_wait from quart.templating import render_template as quart_render_template def render_template(*args): return asyncio.sync_wait(quart_render_template(*args))
https://pgjones.gitlab.io/quart/discussion/async_compatibility.html
CC-MAIN-2020-40
refinedweb
302
57.16
Finally! I got it! Thanks, Syed! Finally! I got it! Thanks, Syed! What exact code should i put after that statement? Just console.nextLine(); ? Sorry, I don't get it. Where exactly will I add console.nextLine() ? Instead of continuing the program because I entered YES, it just stopped. ================================================ Programmer: ID No.: About: This program is a simple payroll... Syed asked me to post it. Maybe it could help you, guys to determine what's wrong with my program. I tried this one: while (setValue.equalsIgnoreCase("Y")) but instead of continuing the program because i entered "Yes", the program ends. and when I tried this one: ... How will I apply .equals() in the program? do{ System.out.print("\nPlease enter employee name: "); employeeName = console.nextLine(); System.out.print(""); System.out.print("\nGender(M/m or F/f): "); gender =... I'm sorry. I'm just a newbie. And she just taught us how to use, If else statements, Switch and Looping. I don't know what array, or String API is. Enter the telephone number (in letters): GO TOHELL The number is: 46-86435 <--- instead of 46-86435, it should be 468-6435 --- Update --- import java.util.Scanner; public class... Can't get the correct program. To make telephone numbers easier to remember, some companies use letters to show their telephone number. For example, the telephone number 438-5626 can be shown as...
http://www.javaprogrammingforums.com/search.php?s=ed1bfc799063fb27d310fd9954d0b19d&searchid=784090
CC-MAIN-2014-10
refinedweb
232
71.61
Now that we have the freedom to attack, you might be thinking it would be nice if there were more options. In this lesson, we will add code so that the enemy opponent can actually play cards and attack us too. This will give a greater variety of move options, and add a bit of strategy to the game, all while giving a great boost to the fun factor. In my opinion, this is where it actually starts feeling like a real game! Go ahead and open the project from where we left off, or download the completed project from last week here. I have also prepared an online repository here for those of you familiar with version control. Card System We have some nice structure in place for knowing what cards can attack or be targeted on the battlefield, but we don’t have anything similar in place for knowing which cards in a player’s hand are considered playable. In order to simplify the job of the A.I. to pick a card to play, let’s go ahead and have another system pre-determine all playable cards. As a bonus, this code would also then be re-usable in case you wanted to highlight the playable cards in the hand like we highlight minions that can attack. I’ll leave that last part as an exercise to the reader. public List<Card> playable = new List<Card> (); public void Refresh () { var match = container.GetMatch (); playable.Clear (); foreach (Card card in match.CurrentPlayer[Zones.Hand]) { var playAction = new PlayCardAction (card); if (playAction.Validate ()) playable.Add (card); } } I added a new field called “playable” which is a “List” that holds “Card” instances. It will be updated by the system to hold cards that are able to be played from the player’s hand. We will update the list by calling the “Refresh” method. Here we clear the list so that we can start with a clean slate, and then simply re-add cards that are determined to be playable. We loop over each card in the current player’s hand, create a new “PlayCardAction” based on the card, and then validate the action. Assuming the action passes the validation check we can add the card to our list of playable cards. Note that the created action isn’t actually performed unless it is passed along to the action system, so we were able to use it here as an opportunity to reuse all of the validation logic that is already in place. Enemy System Now we have enough structure in place to begin implementing our rudimentary enemy A.I. We wont be creating anything challenging (unless you are just really bad at playing games in this genre) because the actions will be entirely performed at random. The goal is simply that it will actually take actions when it can, and therefore we will get a better sense of what a real game will feel like. An A.I. that would be considered ready for a complete game is much more complex – there are a variety of algorithms you could try, from a simple list of rules that give weights to certain choices, to popular patterns like Minimax or Monte Carlo Tree Search. There are even options for Machine Learning. I am really interested in the Machine Learning route, but find it a very challenging topic – perhaps I will manage to wrap my head around it in the future. public class EnemySystem : Aspect { public void TakeTurn () { if (PlayACard () || Attack ()) return; container.GetAspect<MatchSystem> ().ChangeTurn (); } bool PlayACard () { var system = container.GetAspect<CardSystem> (); if (system.playable.Count == 0) return false; var card = system.playable.Random (); var action = new PlayCardAction (card); container.Perform (action); return true; } bool Attack () { var system = container.GetAspect<AttackSystem> (); if (system.validAttackers.Count == 0 || system.validTargets.Count == 0) return false; var attacker = system.validAttackers.Random (); var target = system.validTargets.Random (); var action = new AttackAction (attacker, target); container.Perform (action); return true; } } The system is pretty simple. It inherits from “Aspect” so that we can attach it to our main system container. There is only a single public method called “TakeTurn” which will be invoked during a “PlayerIdleState” as it enters, assuming of course that the current player is controlled by the computer. When the system is triggered, it will attempt to play a card. If no cards can be played, it will attempt to attack, and if no attack can be initiated, then it will change turns. The “PlayACard” method grabs a reference to our “CardSystem”. It will know whether or not it can take an action here because of the count of playable cards held by this system. Assuming there are playable cards, it will pick one at random, create a “PlayCardAction” for the card, and pass it along to the ActionSystem to be performed. Note that we used an extension on the container rather than dealing with the ActionSystem directly. The “Attack” method grabs a reference to our “AttackSystem”. It will know whether or not it can take an action here because of the count of the valid attackers and valid targets lists. Assuming there is at least one entry in each, we will pick randomly from each, and construct a new “AttackAction” and cause it to be performed. Player Idle State The “PlayerIdleState” currently calls a “Temp_AutoChangeTurnForAI” method so that our player can continue to play the game. Now that we have a system to handle A.I., we can remove this temporary method, and replace the statement that invokes it with the following: container.GetAspect<CardSystem> ().Refresh (); if (container.GetMatch().CurrentPlayer.mode == ControlModes.Computer) container.GetAspect<EnemySystem> ().TakeTurn (); Just like we had used the “Enter” method to refresh the “AttackSystem” we will also need to refresh our “CardSystem” before we attempt to trigger the A.I. Next we check the “mode” of the current player to verify that the computer should be in control. If so, we can call the “TakeTurn” method on our “EnemySystem”. Attack System As I was putting together this lesson, a few things jumped out at me as not having been fully implemented. The first was the implementation of an attack. I had only applied damage from the attacker to the target. However, the target should have the opportunity to defend itself, and therefore apply some counter attack damage. I fixed that by refactoring the “OnPerformAttackAction” and adding the following: void OnPerformAttackAction (object sender, object args) { var action = args as AttackAction; var attacker = action.attacker as ICombatant; attacker.remainingAttacks--; ApplyAttackDamage (action); ApplyCounterAttackDamage (action); } void ApplyAttackDamage (AttackAction action) { var attacker = action.attacker as ICombatant; var target = action.target as IDestructable; var damageAction = new DamageAction (target, attacker.attack); container.AddReaction (damageAction); } void ApplyCounterAttackDamage (AttackAction action) { var attacker = action.target as ICombatant; var target = action.attacker as IDestructable; if (attacker != null && target != null) { var damageAction = new DamageAction (target, attacker.attack); container.AddReaction (damageAction); } } Minion System I also noted that my constraint on the max number of minions on the table had not been implemented. This was easily resolved by observing the “PlayCardAction” validation notification: // Add to Awake this.AddObserver (OnValidatePlayCard, Global.ValidateNotification<PlayCardAction> ()); // Add to Destroy this.RemoveObserver (OnValidatePlayCard, Global.ValidateNotification<PlayCardAction> ()); // Notification Handler void OnValidatePlayCard (object sender, object args) { var action = sender as PlayCardAction; var cardOwner = container.GetMatch ().players [action.card.ownerIndex]; if (action.card is Minion && cardOwner[Zones.Battlefield].Count >= Player.maxBattlefield) { var validator = args as Validator; validator.Invalidate (); } } As a reminder, the notification observer for a validation notification should not list the “container” as the sender to observe, because the action itself serves as the sender. In the notification handler, we grab a reference to the player that owns the card being summoned (note that I don’t grab the “current player” because reactions / abilities in the future could cause the opponent to summon a card as well). Next we check to see if the card about to be played is a type of Minion, and if so that there is still room on the battlefield. If not, we will need to invalidate the action. Death Action After seeing my enemy opponent summon minions and giving me some new targets, I quickly noticed that I could attack them, but there was no real point. Reducing their hitpoints to zero or less did nothing to hinder their ability to attack. Even worse, if you filled your battlefield with low-cost cards, you would fill up your battlefield and not be able to summon high-cost cards. Clearly, we need a way to remove minions from the battlefield. Let’s kill them. We will add a new context object that marks cards for death like this: public class DeathAction : GameAction { public Card card; public DeathAction (Card card) { this.card = card; } } Death System Let’s add a new system to handle our new “Death Action”. I’ll be really clever and name it the “Death System”. Like usual it inherits from “Aspect” and it will also implement the “IObserve” interface. public class DeathSystem : Aspect, IObserve { // Add code here... } We will be interested in two notifications. The first is posted by the ActionSystem after fully performing a “root” action, and will be used as an opportunity to look for any mortally wounded cards. The second is the “perform” notification of the death action itself. public void Awake () { this.AddObserver (OnDeathReaperNotification, ActionSystem.deathReaperNotification); this.AddObserver (OnPerformDeath, Global.PerformNotification<DeathAction> (), container); } public void Destroy () { this.RemoveObserver (OnDeathReaperNotification, ActionSystem.deathReaperNotification); this.RemoveObserver (OnPerformDeath, Global.PerformNotification<DeathAction> (), container); } The handler for the death reaper notification will loop over all of the cards on the battle field for each player. Any that should be reaped will be reaped: void OnDeathReaperNotification (object sender, object args) { var match = container.GetMatch (); foreach (Player player in match.players) { foreach (Card card in player[Zones.Battlefield]) { if (ShouldReap (card)) TriggerReap (card); } } } The “ShouldReap” method looks at a card and attempts to cast it as a type of IDestructable. If a card is destructable, and also has its hitpoints reduced to zero or less, then it is determined that it should be reaped. bool ShouldReap (Card card) { var target = card as IDestructable; return target != null && target.hitPoints <= 0; } The “TriggerReap” method simply creates a new “DeathAction” from a given card and adds it as a reaction to the current action. void TriggerReap (Card card) { var action = new DeathAction (card); container.AddReaction (action); } The handler for the perform phase of the death action provides us an opportunity to actually implement the concept of death to a minion. All it really means is that the card’s zone should change to the “Graveyard”: void OnPerformDeath (object sender, object args) { var action = args as DeathAction; var cardSystem = container.GetAspect<CardSystem> (); cardSystem.ChangeZone (action.card, Zones.Graveyard); } Game Factory We’ve created two new systems, but need to make sure that they get instantiated and added to our system container. Add the following statements to the “Create” method: game.AddAspect<DeathSystem> (); game.AddAspect<EnemySystem> (); Table View One final step of polish and this lesson will be complete. Let’s provide a “viewer” for the death of a minion. We will use it as an opportunity to shrink the card to nothing, as well as to update the layout of all the other cards in the battlefield so that there are no gaps. We can use the OnEnable and OnDisable methods to attach and remove a new observer of the “prepare” phase of the DeathAction, and then use the notification handler to attach the viewer method to the “perform” phase. Note that we only add the “viewer” if the table view’s owning player matches the card’s owning player: // Add to OnEnable this.AddObserver (OnPrepareDeath, Global.PrepareNotification<DeathAction> ()); // Add to OnDisable this.RemoveObserver (OnPrepareDeath, Global.PrepareNotification<DeathAction> ()); // Notification Handler void OnPrepareDeath (object sender, object args) { var action = args as DeathAction; if (GetComponentInParent<PlayerView> ().player.index == action.card.ownerIndex) action.perform.viewer = ReapMinion; } The viewer method itself must grab the game object that currently represents the card that will be reaped. We can use the “GetMatch” method for this. Now we can use a tweener to scale the object to zero to animate it disappearing. I don’t yield as this plays, because I am happy for it to play alongside the next animation which will be when the other minions are being layed out. Next, we need to remove the MinionView component from the list of minions – we do this before calling “LayoutMinions” so that any gaps will be filled. Once the Layout animation has completed, I disable the cards view, reset its scale, and enqueue it in the pooler so it can be used again in the future. public IEnumerator ReapMinion (IContainer game, GameAction action) { var reap = action as DeathAction; var view = GetMatch (reap.card); view.transform.ScaleTo (Vector3.zero); minions.Remove (view.GetComponent<MinionView> ()); var tweener = LayoutMinions (); while (tweener != null) yield return null; view.SetActive (false); view.transform.localScale = Vector3.one; minionPooler.Enqueue(view.GetComponent<Poolable> ()); } Demo Play the game. The computer will now play cards that get summoned to the table. If it has an opportunity to attack, it will. Note that neither side should be able to summon more minions than are allowed by the Player’s const of “maxBattlefield”. Any minions that have their hit points reduced to zero will also be removed from the table. If you play strategically, you should always be able to win. There is still a small element of luck, due to the order each player draws playable cards, as well as by the potential of the computer to randomly make a good or bad move. The game is already starting to feel fun! Summary Originally I set out merely to allow the computer to play cards and attack. Along the way we added more complete implementations of some of our existing features, such as clamping the number of minions that can be summoned at a time, and making sure minions can counter attack. I also decided we should add a new feature by removing minions from the table when they die. All together, the game now feels like an actual game.!
https://theliquidfire.com/2017/12/26/make-a-ccg-enemy-a-i/
CC-MAIN-2021-10
refinedweb
2,351
56.15
the domain of just one content creator in which the user will fill out a form before reading the story — creating odd and often funny stories. This type of experience was popularized as “Madlibs.” - Generate your own madlibs in the demo; - Look through the final code on Github; - Get a fully-built version set up in your accounts. How The Generator Will Work An editor can create a series of madlibs that an end-user can fill out and save a copy with their unique answers. The editor will be working with the Sanity Studio inside a rich-text field that we’ll craft to provide additional information for our front-end to build out forms. For the editor, it will feel like writing standard paragraph content. They’ll be able to write like they’re used to writing. They can then create specific blocks inside their content that will specify a part of speech and display text. The front-end of the application can then use that data to both display the text and build a form. We’ll use 11ty to create the frontend with some small templates. The form that is built will display to the user before they see the text. They’ll know what type of speech and general context for the phrases and words they can enter. After the form is submitted, they’ll be given their fully formed story (with hopefully hilarious results). This creation will only be set within their browser. If they wish to share it, they can then click the “Save” button. This will submit the entire text to a serverless function in Netlify to save it to the Sanity data store. Once that has been created, a link will appear for the user to view the permanent version of their madlib and share it with friends. Since 11ty is a static site generator, we can’t count on a site rebuild to generate each user’s saved Madlib on the fly. We can use 11ty’s new Serverless mode to build them on request using Netlify’s On-Demand Builders to cache each Madlib. The Tools Sanity.io Sanity.io is a unified content platform that believes that content is data and data can be used as content. Sanity pairs a real-time data store with three open-source tools: a powerful query language (GROQ), a CMS (Sanity Studio), and a rich-text data specification (Portable Text). Portable Text Portable Text is an open-source specification designed to treat rich text as data. We’ll be using Portable Text for the rich text that our editors will enter into a Sanity Studio. Data will decorate the rich text in a way that we can create a form on the fly based on the content. 11ty And 11ty Serverless 11ty is a static site generator built in Node. It allows developers to ingest data from multiple sources, write templates in multiple templating engines, and output simple, clean HTML. In the upcoming 1.0 release, 11ty is introducing the concept of 11ty Serverless. This update allows sites to use the same templates and data to render pages via a serverless function or on-demand builder. 11ty Serverless begins to blur the line between “static site generator” and server-rendered page. Netlify On-Demand Builders Netlify has had serverless functions as part of its platform for years. For example, an “On-Demand Builder” is a serverless function dedicated to serving a cached file. Each builder works similarly to a standard serverless function on the first call. Netlify then caches that page on its edge CDN for each additional call. Building The Editing Interface And Datastore Before we can dive into serverless functions and the frontend, it would be helpful to have our data set up and ready to query. To do this, we’ll set up a new project and install Sanity’s Studio (an open-source content platform for managing data in your Sanity Content Lake). To create a new project, we can use Sanity’s CLI tools. First, we need to create a new project directory to house both the front-end and the studio. I’ve called mine madlibs. From inside this directory in the command line, run the following commands: npm i -g @sanity/cli sanity init The sanity init command will run you through a series of questions. Name your project madlibs, create a new dataset called production, set the “output path” to studio, and for “project template,” select “Clean project with no predefined schemas.” The CLI creates a new Sanity project and installs all the needed dependencies for a new studio. Inside the newly created studio directory, we have everything we need to make our editing experience. Before we create the first interface, run sanity start in the studio directory to run the studio. Creating The madlib Schema A set of schema defines the studio’s editing interface. To create a new interface, we’ll create a new schema in the schema folder. // madlibs/studio/schemas/madlib.js } }, ] } The schema file is a JavaScript file that exports an object. This object defines the data’s name, title, type, and any fields the document will have. In this case, we’ll start with a title string and a slug that can be generated from the title field. Once the file and initial code are created, we need to add this schema to our schema.js file. // /madlibs/studio/schema/schema.js // First, we must import the schema creator import createSchema from 'part:@sanity/base/schema-creator' // Then import schema types from any plugins that might expose them import schemaTypes from 'all:part:@sanity/base/schema-type' // Imports our new schema import madlib from './madlib' //([ // document // adds the schema to the list the studio will display madlib, ]) }) Next, we need to create a rich text editor for our madlib authors to write the templates. Sanity has a built-in way of handling rich text that can convert to the flexible Portable Text data structure. To create the editor, we use an array field that contains a special schema type: block. The block type will return all the default options for rich text. We can also extend this type to create specialty blocks for our editors. } }, { title: 'Madlib Text', name: 'text', type: 'array', of: [ { type: 'block', name: 'block', of: [ // A new type of field that we'll create next { type: 'madlibField' } ] }, ] }, ] } This code will set up the Portable Text editor. It builds various types of “blocks.” Blocks roughly equate to top-level data in the JSON data that Portable Text will return. By default, standard blocks take the shape of things like paragraphs, headers, lists, etc. Custom blocks can be created for things like images, videos, and other data. For our madlib fields, we want to make “inline” blocks — blocks that flow within one of these larger blocks. To do that, the block type can accept its own of array. These fields can be any type, but we’ll make a custom type and add it to our schema in our case. Creating A Custom Schema Type For The Madlib Field To create a new custom type, we need to create a new file and import the schema into schema.js as we did for a new document type. Instead of creating a schema with a type of document, we need to create one of type: object. This custom type needs to have two fields: the display text and the grammar type. By structuring the data this way, we open up future possibilities for inspecting our content. Alongside the data fields for this type, we can also specify a custom preview to show more than one field displayed in the rich text. To make this work, we define a React component that will accept the data from the fields and display the text the way we want it. // /madlibs/studio/schemas/object/madLibField.js import React from 'react' // A React Component that takes hte value of data // and returns a simple preview of the data that can be used // in the rich text editor function madlibPreview({ value }) { const { text, grammar } = value return ( {text} ({grammar}) ); } export default { title: 'Madlib Field Details', name: 'madlibField', type: 'object', fields: [ { name: 'displayText', title: 'Display Text', type: 'string' }, { name: 'grammar', title: 'Grammar Type', type: 'string' } ], // Defines a preview for the data in the Rich Text editor preview: { select: { // Selects data to pass to our component text: 'displayText', grammar: 'grammar' }, // Tells the field which preview to use component: madlibPreview, }, } Once that’s created, we can add it to our schemas array and use it as a type in our Portable Text blocks. // /madlibs/studio/schemas/schema.js // First, we must import the schema creator import createSchema from 'part:@sanity/base/schema-creator' // Then import schema types from any plugins that might expose them import schemaTypes from 'all:part:@sanity/base/schema-type' import madlib from './madlib' // Import the new object import madlibField from './objects/madlibField' //([ // documents madlib, //objects madlibField ]) }) Creating The Schema For User-generated Madlibs Since the user-generated madlibs will be submitted from our frontend, we don’t technically need a schema for them. However, if we create a schema, we get an easy way to see all the entries (and delete them if necessary). We want the structure for these documents to be the same as our madlib templates. The main differences in this schema from our madlib schema are the name, title, and, optionally, making the fields read-only. // /madlibs/studio/schema/userLib.js export default { name: 'userLib', title: 'User Generated Madlibs', type: 'document', fields: [ { name: 'title', title: 'Title', type: 'string', readOnly: true }, { title: 'Slug', name: 'slug', type: 'slug', readOnly: true, options: { source: 'title', maxLength: 200, // // will be ignored if slugify is set }, }, { title: 'Madlib Text', name: 'text', type: 'array', readOnly: true, of: [ { type: 'block', name: 'block', of: [ { type: 'madlibField' } ] }, ] }, ] } With that, we can add it to our schema.js file, and our admin is complete. Before we move on, be sure to add at least one madlib template. I found the first paragraph of Moby Dick worked surprisingly well for some humorous results. Building The Frontend With 11ty To create the frontend, we’ll use 11ty. 11ty is a static site generator written in and extended by Node. It does the job of creating HTML from multiple sources of data well, and with some new features, we can extend that to server-rendered pages and build-rendered pages. Setting Up 11ty First, we’ll need to get things set up. Inside the main madlibs directory, let’s create a new site directory. This directory will house our 11ty site. Open a new terminal and change the directory into the site directory. From there, we need to install a few dependencies. // Create a new package.json npm init -y // Install 11ty and Sanity utilities npm install @11ty/eleventy@beta @sanity/block-content-to-html @sanity/client Once these have been installed, we’ll add a couple of scripts to our package.json // /madlibs/site/package.json "scripts": { "start": "eleventy --serve", "build": "eleventy" }, Now that we have a build and start script, let’s add a base template for our pages to use and an index page. By default, 11ty will look in an _includes directory for our templates, so create that directory and add a base.njk file to it. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Madlibs</title> {# Basic reset #} <link rel="stylesheet" href="" /> </head> <body> <nav class="container navigation"> <a class="logo" href="/">Madlibs</a> </nav> <div class="stack container bordered"> {# Inserts content from a page file and renders it as html #} {{ content | safe }} </div> {% block scripts %} {# Block to insert scripts from child templates #} {% endblock %} </body> </html> Once we have a template, we can create a page. First, in the root of the site directory, add an index.html file. Next, we’ll use frontmatter to add a little data — a title and the layout file to use. --- title: Madlibs layout: 'base.njk' --- <p>Some madlibs to take your mind off things. They're stored in <a href="">Sanity.io</a>, built with <a href="">11ty</a>, and do interesting things with Netlify serverless functions.</p> Now you can start 11ty by running npm start in the site directory. Creating Pages From Sanity Data Using 11ty Pagination Now, we want to create pages dynamically from data from Sanity. To do this, we’ll create a JavaScript Data file and a Pagination template. Before we dive into those files, we need to create a couple of utilities for working with the Sanity data. Inside the site directory, let’s create a utils directory. The first utility we need is an initialized Sanity JS client. First, create a file named sanityClient.js in the new utils directory. // /madlibs/site/utils/sanityClient.js' const sanityClient = require('@sanity/client') module.exports = sanityClient({ // The project ID projectId: '<YOUR-ID>', // The dataset we created dataset: 'production', // The API version we want to use // Best practice is to set this to today's date apiVersion: '2021-06-07', // Use the CDN instead of fetching directly from the data store useCdn: true }) Since our rich text is stored as Portable Text JSON, we need a way to convert the data to HTML. We’ll create a utility to do this for us. First, create a file named portableTextUtils.js in the utils directory. For Sanity and 11ty sites, we typically will want to convert the JSON to either Markdown or HTML. For this site, we’ll use HTML to have granular control over the output. Earlier, we installed @sanity/block-content-to-html, which will help us serialize the data to HTML. The package will work on all basic types of Portable Text blocks and styles. However, we have a custom block type that needs a custom serializer. // Initializes the package const toHtml = require('@sanity/block-content-to-html') const h = toHtml.h; const serializers = { types: { madlibField: ({ node }) => { // Takes each node of `type` `madlibField` // and returns an HTML span with an id, class, and text return h('span', node.displayText, { id: node._key, className: 'empty' }) } } } const prepText = (data) => { // Takes the data from a specific Sanity document // and creates a new htmlText property to contain the HTML // This lets us keep the Portable Text data intact and still display HTML return { ...data, htmlText: toHtml({ blocks: data.text, // Portable Text data serializers: serializers // The serializer to use }) } } // We only need to export prepText for our functions module.exports = { prepText } The serializers object in this code has a types object. In this object, we create a specialized serializer for any type. The key in the object should match the type given in our data. In our case, this is madlibField. Each type will have a function that returns an element written using hyperscript functions. In this case, we create a span with children of the displayText from the current data. Later we’ll need unique IDs based on the data’s _key, and we’ll need a class to style these. We provide those in an object as the third argument for the h() function. We’ll use this same serializer setup for both our madlib templates and the user-generated madlibs. Now that we have our utilities, it’s time to create a JavaScript data file. First, create a _data in the site directory. In this file, we can add global data to our 11ty site. Next, create a madlibs.js file. This file is where our JavaScript will run to pull each madlib template. The data will be available to any of our templates and pages under the madlibs key. // Get our utilities const client = require('../utils/sanityClient') const {prepText} = require('../utils/portableTextUtils') // The GROQ query used to find specific documents and // shape the output const query = `*[_type == "madlib"]{ title, "slug": slug.current, text, _id, "formFields": text[]{ children[_type == "madlibField"]{ displayText, grammar, _key } }.children[] }` module.exports = async function() { // Fetch data based on the query const madlibs = await client.fetch(query); // Prepare the Portable Text data const preppedMadlib = madlibs.map(prepText) // Return the full array return preppedMadlib } To fetch the data, we need to get the utilities we just created. The Sanity client has a fetch() method to pass a GROQ query. We’ll map over the array of documents the query returns to prepare their Portable Text and then return that to 11ty’s data cascade. The GROQ query in this code example is doing most of the work for us. We start by requesting all documents with a _type of madlib from our Sanity content lake. Then we specify which data we want to return. The data starts simply: we need the title, slug, rich text, and id from the document, but we also want to reformat the data into a set of form fields, as well. To do that, we create a new property on the data being returned: formFields. This looks at the text data (a Portable Text array) and loops over it with the [] operator. We can then build a new project on this data like we’re doing with the entire document with the {} operator. Each text object has a children array. We can loop through that, and if the item matches the filter inside the [], we can run another projection on that. In this case, we’re filtering all children that have a _type == "madlibField". In other words, any inline block that has an item with the type we created. We need the displayText, grammar, and _key for each of these. This will return an array of text objects with the children matching our filter. We need to flatten this to be an array of children. To do this, we can add the .children[] after the projects. This will return a flat array with just the children elements we need. This gives us all the documents in an array with just the data we need (including newly reformatted items). To use them in our 11ty build, we need a template that will use Pagination. In the root of the site, create a madlib.njk file. This file will generate each madlib page from the data. --- layout: 'base.njk' pagination: data: madlibs alias: madlib size: 1 permalink: "madlibs/{{ madlib.slug | slug }}/index.html" --- In the front matter of this file, we specify some data 11ty can use to generate our pages: layout The template to use to render the page. pagination An object with pagination information. pagination.data The data key for pagination to read. pagination.alias A key to use in this file for ease. pagination.size The number of madlibs per page (in this case, 1 per page to create individual pages). permalink The URLs at which each of these should live (can be partially generated from data). With that data in place, we can specify how to display each piece of data for an item in the array. --- layout: 'base.njk' pagination: data: madlibs alias: madlib size: 1 permalink: "madlibs/{{ madlib.slug | slug }}/index.html" --- <h2>{{ madlib.title }}</h2> <p><em>Instructions:</em> Fill out this form, submit it and get your story. It will hopfully make little-to-no sense. Afterward, you can save the madlib and send it to your friends.</p> <div class="madlibtext"> <a href="#" class="saver">Save it</a> {{ madlib.htmlText | safe }} </div> <h2>Form</h2> <form class="madlibForm stack"> {% for input in madlib.formFields %} <label> {{ input.displayText }} ({{ input.grammar }}) <input type="text" class="libInput" name={{input._key}}> </label> {% endfor %} <button>Done</button> </form> We can properly format the title and HTML text. We can then use the formFields array to create a form that users can enter their unique answers. There’s some additional markup for use in our JavaScript — a form button and a link to save the finalized madlib. The link and madlib text will be hidden (no peeking for our users!). For every madlib template, you created in your studio, 11ty will build a unique page. The final URLs should look like this Making The Madlibs Interactive With our madlibs generated, we need to make them interactive. We’ll sprinkle a little JavaScript and CSS to make them interactive. Before we can use CSS and JS, we need to tell 11ty to copy the static files to our built site. Copying Static Assets To The Final Build In the root of the site directory, create the following files and directories: assets/css/style.css— for any additional styling, assets/js/madlib.js— for the interactions, .eleventy.js— the 11ty configuration file. When these files are created, we need to tell 11ty to copy the assets to the final build. Those instructions live in the .eleventy.js configuration file. module.exports = function(eleventyConfig) { eleventyConfig.addPassthroughCopy("assets/"); } This instructs 11ty to copy the entire assets directory to the final build. The only necessary CSS to make the site work is a snippet to hide and show the madlib text. However, if you want the whole look and feel, you can find all the styles in this file. .madlibtext { display: none } .madlibtext.show { display: block; } Filling In The Madlib With User Input And JavaScript Any frontend framework will work with 11ty if you set up a build process. For this example, we’ll use plain JavaScript to keep things simple. The first task is to take the user data in the form and populate the generic madlib template that 11ty generated from our Sanity data. // Attach the form handler const form = document.querySelector('.madlibForm') form.addEventListener('submit', completeLib); function showText() { // Find the madlib text in the document const textDiv = document.querySelector('.madlibtext') // Toggle the class "show" to be present textDiv.classList.toggle('show') } // A function that takes the submit event // From the event, it will get the contents of the inputs // and write them to page and show the full text function completeLib(event) { // Don't submit the form event.preventDefault(); const { target } = event // The target is the form element // Get all inputs from the form in array format const inputs = Array.from(target.elements) inputs.forEach(input => { // The button is an input and we don't want that in the final data if (input.type != 'text') return // Find a span by the input's name // These will both be the _key value const replacedContent = document.getElementById(input.name) // Replace the content of the span with the input's value replacedContent.innerHTML = input.value }) // Show the completed madlib showText(); } This functionality comes in three parts: attaching an event listener, taking the form input, inserting it into the HTML, and then showing the text. When the form is submitted, the code creates an array from the form’s inputs. Next, it finds elements on the page with ids that match the input’s name — both created from the _key values of each block. It then replaces the content of that element with the value from the data. Once that’s done, we toggle the full madlib text to show on the page. We need to add this script to the page. To do this, we create a new template for the madlibs to use. In the _includes directory, create a file named lib.njk. This template will extend the base template we created and insert the script at the bottom of the page’s body. {% extends 'base.njk' %} {% block scripts %} <script> var pt = {{ madlib.text | dump | safe }} var data = { libId: `{{ madlib._id }}`, libTitle: `{{ madlib.title }}` } </script> <script src="/assets/js/madlib.js"></script> {% endblock %} Then, our madlib.njk pagination template needs to use this new template for its layout. --- layout: 'lib.njk' pagination: data: madlibs alias: madlib size: 1 permalink: "madlibs/{{ madlib.slug | slug }}/index.html" --- // page content We now have a functioning madlib generator. To make this more robust, let’s allow users to save and share their completed madlibs. Saving A User Madlib To Sanity With A Netlify Function Now that we have a madlib displayed to the user, we need to create the link for saving send the information to Sanity. To do that, we’ll add some more functionality to our front-end JavaScript. But, first, we need to add some more data pulled from Sanity into our JavaScript, so we’ll add a couple of new variables in the scripts block on the lib.njk template. {% extends 'base.njk' %} {% block scripts %} <script> // Portable Text data var pt = {{ madlib.text | dump | safe }} var data = { libId: `{{ madlib._id }}`, libTitle: `{{ madlib.title }}` } </script> <script src="/assets/js/madlib.js"></script> {% endblock %} We can write a script to send it and the user-generated answers to a serverless function to send to Sanity with that additional data. // /madlibs/site/assets/js/madlib.js // ... completeLib() async function saveLib(event) { event.preventDefault(); // Return an Map of ids and content to turn into an object const blocks = Array.from(document.querySelectorAll('.empty')).map(item => { return [item.id, { content: item.outerText }] }) // Creates Object ready for storage from blocks map const userContentBlocks = Object.fromEntries(blocks); // Formats the data for posting const finalData = { userContentBlocks, pt, // From nunjucks on page ...data // From nunjucks on page } // Runs the post data function for createLib postData('/.netlify/functions/createLib', finalData) .then(data => { // When post is successful // Create a div for the final link const landingZone = document.createElement('div') // Give the link a class landingZone.className = "libUrl" // Add the div after the saving link saver.after(landingZone) // Add the new link inside the landing zone landingZone.innerHTML = `Your url is /userlibs/${data._id}/` }).catch(error => { // When errors happen, do something with them console.log(error) }); } async function postData(url = '', data = {}) { // A wrapper function for standard JS fetch const response = await fetch(url, { method: 'POST', mode: 'cors', cache: 'no-cache', credentials: 'same-origin', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(data) }); return response.json(); // parses JSON response into native JavaScript objects } We add a new event listener to the “Save” link in our HTML. The saveLib function will take the data from the page and the user-generated data and combine them in an object to be handled by a new serverless function. The serverless function needs to take that data and create a new Sanity document. When creating the function, we want it to return the _id for the new document. We use that to create a unique link that we add to the page. This link will be where the newly generated page will be. Setting Up Netlify Dev To use Netlify Functions, we’ll need to get our project set up on Netlify. We want Netlify to build and serve from the site directory. To give Netlify this information, we need to create a netlify.toml file at the root of the entire project. [build] command = "npm run build" # Command to run functions = "functions" # Directory we store the functions publish = "_site" # Folder to publish (11ty automatically makes the _site folder base = "site" # Folder that is the root of the build To develop these locally, it’s helpful to install Netlify’s CLI globally. npm install -g netlify-cli Once that’s installed, you can run netlify dev in your project. This will take the place of running your start NPM script. The CLI will run you through connecting your repository to Netlify. Once it’s done, we’re ready to develop our first function. Creating A Function To Save Madlibs To Sanity Since our TOML file sets the functions directory to functions, we need to create the directory. Inside the directory, make a createLib.js file. This will be the serverless function for creating a madlib in the Sanity data store. The standard Sanity client we’ve been using is read-only. To give it write permissions, we need to reconfigure it to use an API read+write token. To generate a token, log into the project dashboard and go to the project settings for your madlibs project. In the settings, find the Tokens area and generate a new token with “Editor” permissions. When the token is generated, save the string to Netlify’s environment variables dashboard with the name SANITY_TOKEN. Netlify Dev will automatically pull these environment variables into the project while running. To reconfigure the client, we’ll require the file from our utilities, and then run the .config() method. This will let us set any configuration value for this specific use. We’ll set the token to the new environment variable and set useCdn to false. // Sanity JS Client // The build client is read-only // To use to write, we need to add an API token with proper permissions const client = require('../utils/sanityClient') client.config({ token: process.env.SANITY_TOKEN, useCdn: false }) The basic structure for a Netlify function is to export a handler function that is passed an event and returns an object with a status code and string body. // Grabs local env variables from .env file // Not necessary if using Netlify Dev CLI require('dotenv').config() // Sanity JS Client // The build client is read-only // To use to write, we need to add an API token with proper permissions const client = require('../utils/sanityClient') client.config({ token: process.env.SANITY_TOKEN, useCdn: false }) // Small ID creation package const { nanoid } = require('nanoid') exports.handler = async (event) => { // Get data off the event body const { pt, userContentBlocks, id, libTitle } = JSON.parse(event.body) // Create new Portable Text JSON // from the old PT and the user submissions const newBlocks = findAndReplace(pt, userContentBlocks) // Create new Sanity document object // The doc's _id and slug are based on a unique ID from nanoid const docId = nanoid() const doc = { _type: "userLib", _id: docId, slug: { current: docId }, madlib: id, title: `${libTitle} creation`, text: newBlocks, } // Submit the new document object to Sanity // Return the response back to the browser return client.create(doc).then((res) => { // Log the success into our function log console.log(`Userlib was created, document ID is ${res._id}`) // return with a 200 status and a stringified JSON object we get from the Sanity API return { statusCode: 200, body: JSON.stringify(doc) }; }).catch(err => { // If there's an error, log it // and return a 500 error and a JSON string of the error console.log(err) return { statusCode: 500, body: JSON.stringify(err) } }) } // Function for modifying the Portable Text JSON // pt is the original portable Text // mods is an object of modifications to make function findAndReplace(pt, mods) { // For each block object, check to see if a mod is needed and return an object const newPT = pt.map((block) => ({ ...block, // Insert all current data children: block.children.map(span => { // For every item in children, see if there's a modification on the mods object // If there is, set modContent to the new content, if not, set it to the original text const modContent = mods[span._key] ? mods[span._key].content : span.text // Return an object with all the original data, and a new property // displayText for use in the frontends return { ...span, displayText: modContent } }) })) // Return the new Portable Text JSON return newPT } The body is the data we just submitted. For ease, we’ll destructure the data off the event.body object. Then, we need to compare the original Portable Text and the user content we submitted and create the new Portable Text JSON that we can submit to Sanity. To do that, we run a find and replace function. This function maps over the original Portable Text and for every child in the blocks, replace its content with the corresponding data from the modfications object. If there isn’t a modification, it will store the original text. With modified Portable Text in hand, we can create a new object to store as a document in the Sanity content lake. Each document needs a unique identifier (which we can use the nanoid NPM package to create. We’ll also let this newly created ID be the slug for consistency. The rest of the data is mapped to the proper key in our userLib schema we created in the studio and submitted with the authenticated client’s .create() method. When success or failure returns from Sanity, we pass that along to the frontend for handling. Now, we have data being saved to our Sanity project. Go ahead and fill out a madlib and submit. You can view the creation in the studio. Those links that we’re generating don’t work yet, though. This is where 11ty Serverless comes in. Setting Up 11ty Serverless You may have noticed when we installed 11ty that we used a specific version. This is the beta of the upcoming 1.0 release. 11ty Serverless is one of the big new features in that release. Installing The Serverless Plugin 11ty Serverless is an included plugin that can be initialized to create all the boilerplate for running 11ty in a serverless function. To get up and running, we need to add the plugin to our .eleventy.js configuration file. const { EleventyServerlessBundlerPlugin } = require("@11ty/eleventy"); module.exports = function (eleventyConfig) { eleventyConfig.addPassthroughCopy("assets/"); eleventyConfig.addPlugin(EleventyServerlessBundlerPlugin, { name: "userlibs", // the name to use for the functions functionsDir: "./functions/", // The functions directory copy: ["utils/"], // Any files that need to be copied to make our scripts work excludeDependencies: ["./_data/madlibs.js"] // Exclude any files you don't want to run }); }; After creating this file, restart 11ty by rerunning netlify dev. On the next run, 11ty will create a new directory in our functions directory named userlibs (matching the name in the serverless configuration) to house everything it needs to have to run in a serverless function. The index.js file in this directory is created if it doesn’t exist, but any changes you make will persist. We need to make one small change to the end of this file. By default, 11ty Serverless will initialize using standard serverless functions. This will run the function on every load of the route. That’s an expensive load for content that can’t be changed after it’s been generated. Instead, we can change it to use Netlify’s On-Demand Builders. This will build the page on the first request and cache the result for any later requests. This cache will persist until the next build of the site. To update the function, open the index.js file and change the ending of the file. // Comment this line out exports.handler = handler // Uncomment these lines const { builder } = require("@netlify/functions"); exports.handler = builder(handler); Since this file is using Netlify’s functions package, we also need to install that package. npm install @netlify/functions Creating A Data File For User-generated Madlibs Now that we have an On-Demand Builder, we need to pull the data for user-generated madlibs. We can create a new JavaScript data file in the _data file named userlibs.js. Like our madlibs data file, the file name will be the key to get this data in our templates. // /madlibs/site/_data/userlibs.js const client = require('../utils/sanityClient') const {prepText} = require('../utils/portableTextUtils') const query = `*[_type == "userLib"]{ title, "slug": slug.current, text, _id }` module.exports = async function() { const madlibs = await client.fetch(query); // Protect against no madlibs returning if (madlibs.length == 0) return {"404": {}} // Run through our portable text serializer const preppedMadlib = madlibs.map(prepText) // Convert the array of documents into an object // Each item in the Object will have a key of the item slug // 11ty's Pagination will create pages for each one const mapLibs = preppedMadlib.map(item => ([item.slug, item])) const objLibs = Object.fromEntries(mapLibs) return objLibs } This data file is similar to what we wrote earlier, but instead of returning the array, we need to return an object. The object’s keys are what the serverless bundle will use to pull the correct madlib on request. In our case, we’ll make the item’s slug the key since the serverless route will be looking for a slug. Creating A Pagination Template That Uses Serverless Routes Now that the plugin is ready, we can create a new pagination template to use the generated function. In the root of our site, add a userlibs.njk template. This template will be like the madlibs.njk template, but it will use different data without any interactivity. --- layout: 'base.njk' pagination: data: userLibs alias: userlib size: 1 serverless: eleventy.serverless.path.slug permalink: userlibs: "/userlibs/:slug/" --- <h2>{{ userlib.title }}</h2> <div> {{ userlib.htmlText | safe }} </div> In this template, we use base.njk to avoid including the JavaScript. We specify the new userlibs data for pagination. To pull the correct data, we need to specify what the lookup key will be. On the pagination object, we do this with the serverless property. When using serverless routes, we get access to a new object: eleventy.serverless. On this object, there’s a path object that contains information on what URL the user requested. In this case, we’ll have a slug property on that object. That needs to correspond to a key on our pagination data. To get the slug on our path, we need to add it to the permalink object. 11ty Serverless allows for more than one route for a template. The route’s key needs to match the name provided in the .eleventy.js configuration. In this case, it should be userlibs. We specify the static /userlibs/ start to the path and then add a dynamic element: :slug/. This slug will be what gets passed to eleventy.serverless.path.slug. Now, the link that we created earlier by submitting a madlib to Sanity will work. Next Steps Now we have a madlib generator that saves to a data store. We build only the necessary pages to allow a user to create a new madlib. When they create one, we make those pages on-demand with 11ty and Netlify Functions. From here, we can extend this further. - Statically build the user-generated content as well as render them on request. - Create a counter for the total number of madlibs saved by each madlib template. - Create a list of words users use by parts of speech. When you can statically build AND dynamically render, what sorts of applications does this open up?
https://www.smashingmagazine.com/2021/10/static-first-madlib-generator-portable-text-netlify-builder-functions/
CC-MAIN-2021-43
refinedweb
6,364
64.91
Homework 1 Due by 11:59pm on Wednesday, 9/2 Instructions. See Lab 1 for instructions on submitting assignments. Using OK: If you have any questions about using OK, please refer to this guide. Readings: You might find the following references useful: Required questions Question 1 We've seen that we can give new names to existing functions. Fill in the blanks in the following function definition for adding a to the absolute value of b, without calling abs. from operator import add, sub def a_plus_abs_b(a, b): """Return a+abs(b), but without calling abs. >>> a_plus_abs_b(2, 3) 5 >>> a_plus_abs_b(2, -3) 5 """ if b < 0: f = _____ else: f = _____ return f(a, b) Use OK to test your code: python3 ok -q a_plus_abs_b Question 2 Write a function that takes three positive numbers and returns the sum of the squares of the two largest numbers. Use only a single expression for the body of the function. def two_of_three(a, b, c): """Return x*x + y*y, where x and y are the two largest members of the positive numbers a, b, and c. >>> two_of_three(1, 2, 3) 13 >>> two_of_three(5, 3, 1) 34 >>> two_of_three(10, 2, 8) 164 >>> two_of_three(5, 5, 5) 50 """ "*** YOUR CODE HERE ***" Use OK to test your code: python3 ok -q two_of_three Question 3 Write a function that takes an integer n greater than 1 and returns the largest integer smaller than n that evenly divides n*n-1. Hint: To check if b evenly divides a, you can use the expression a % b == 0, which can be read as, "the remainder of dividing a by b is 0." However, it is possible to solve this problem without any if or while statements. def largest_factor(n): """Return the largest factor of n*n-1 that is smaller than n. >>> largest_factor(4) # n*n-1 is 15; factors are 1, 3, 5, 15 3 >>> largest_factor(9) # n*n-1 is 80; factors are 1, 2, 4, 5, 8, 10, ... 8 """ "*** YOUR CODE HERE ***" Use OK to test your code: python3 ok -q largest_factor Question 4 Let's try to write a function that does the same thing as an if statement. def if_function(condition, true_result, false_result): """Return true_result if condition is a true value, and false_result otherwise. >>> if_function(True, 2, 3) 2 >>> if_function(False, 2, 3) 3 >>> if_function(3==2, 3+2, 3-2) 1 >>> if_function(3>2, 3+2, 3-2) 5 """ if condition: return true_result else: return false_result Despite the doctests above, this function actually does not do the same thing as an if statement in all cases. To prove this fact, write functions c, t, and f such that with_if_statement. Question nis 1. The number n will travel up and down but eventually end at 1 (at least for all numbers that have ever been tried — nobody has ever proved that the sequence will terminate). Analogously, a hailstone travels up and down in the atmosphere before eventually landing on earth. The sequence of values of n is often called a Hailstone sequence, because hailstones also travel up and down in the atmosphere before falling to earth. Write a function that takes a single argument with formal parameter name n, prints out the hailstone sequence starting at n, and returns the number of steps in the sequence: def hailstone(n): """Print the hailstone sequence starting at n and return its length. >>> a = hailstone(10) 10 5 16 8 4 2 1 >>> a 7 """ "*** YOUR CODE HERE ***" Hailstone sequences can get quite long! Try 27. What's the longest you can find? Use OK to test your code: python3 ok -q hailstone Extra questions Extra questions are not worth extra credit and are entirely optional. They are designed to challenge you to think creatively! Question 6. Place your solution in the multi-line string named challenge_question_program. Note: No tests will be run on your solution to this problem.
http://inst.eecs.berkeley.edu/~cs61a/fa15/hw/hw01/
CC-MAIN-2018-05
refinedweb
655
66.98
First, yes I have seen this question: The answers there are incorrect and do not work. I have voted and commented accordingly. The processes I want to kill look like this when listed with ps aux | grep page.py: ps aux | grep page.py apache 424 0.0 0.1 6996 4564 ? S 07:02 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 2686 0.0 0.1 7000 3460 ? S Sep10 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 2926 0.0 0.0 6996 1404 ? S Sep02 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 7398 0.0 0.0 6996 1400 ? S Sep01 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 9423 0.0 0.1 6996 3824 ? S Sep10 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 11022 0.0 0.0 7004 1400 ? S Sep01 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 15343 0.0 0.1 7004 3788 ? S Sep09 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 15364 0.0 0.1 7004 3792 ? S Sep09 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 15397 0.0 0.1 6996 3788 ? S Sep09 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 16817 0.0 0.1 7000 3788 ? S Sep09 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 17590 0.0 0.0 7000 1432 ? S Sep07 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 24448 0.0 0.0 7000 1432 ? S Sep07 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py apache 30361 0.0 0.1 6996 3776 ? S Sep09 0:00 /usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py I'm looking to setup a simple daily cron that will find and kill any page.py processes older than an hour. page.py The accepted answer on the aforementioned question does not work, as it doesn't match a range of times, it simply matches processes that have been running from 7 days to 7 days 23 hours 59 minutes and 59 seconds. I don't want to kill processes that have been running from 1-2 hours, but rather anything greater than 1 hour. The other answer to the aforementioned question using find does not work, at least not on Gentoo or CentOS 5.4, it either spits out a warning, or returns nothing if the advice of said warning is followed. find GNU Killall can kill processes older than a given age, using their processname. if [[ "$(uname)" = "Linux" ]];then killall --older-than 1h page.py;fi Thanks to Christopher's answer I was able to adapt it to the following: find /proc -maxdepth 1 -user apache -type d -mmin +60 -exec basename {} \; \ | xargs ps | grep page.py | awk '{ print $1 }' | sudo xargs kill -mmin was the find command I was missing. -mmin find /proc -maxdepth 1 -type d -name 1 -mmin +60 -ls I think you can modify some of those previous answers to fit your needs. Namely: for FILE in (find . -maxdepth 1 -user processuser -type d -mmin +60) do kill -9 $(basename $FILE) # I can never get basename to work with find's exec. Let me know if you know how! done Or ps -eo pid,etime,comm | awk '$2!~/^..:..$/ && $3~/page\.py/ { print $1}' | kill -9 I think the second may best fit your needs. The find version would wind up nuking other processes by that user --Christopher Karel kill -9 -SIGINT -SIGTERM find doesnt always work, not every system has etimes available, and it might be my regex newb status, but I dont think you need anything more than this: ps -eo pid,etime,comm,user,tty | grep builder | grep pts | grep -v bash |awk '$2~/-/ {if ($2>7) print $1}' you can then pipe that to kill or whatever your need may be. ps grep awk # get elapsed time in seconds, filter our only those who >= 3600 sec ps axh -O etimes | awk '{if ($2 >= 3600) print $2}' If you wan't you can feed ps with list of PIDs to lookup within, for e. g.: ps h -O etimes 1 2 3 I modified the answer they gave you in previous post ps -eo pid,etime,comm | egrep '^ *[0-9]+ +([0-9]+-[^ ]*|[0-9]{2}:[0-9]{2}:[0-9]{2}) +/usr/bin/python2.6 /u/apps/pysnpp/current/bin/page.py' | awk '{print $1}' | xargs kill The regular expression searches for 2 types of second argument: Hours:minutes:seconds That should match everything except young processes who would have the form minutes:seconds. minutes:seconds This is probably overkill, but I got curious enough to finish it and test that it works (on a different process name on my system, of course). You can kill the capturing of $user and $pid to simplify the regexp, which I only added for debugging, and didn't feel like ripping back out. Named captures from perl 5.10 would shave off a couple more lines, but this should work on older perls. $user $pid You'll need to replace the print with a kill, of course, but I wasn't about to actually kill anything on my own system. #!/usr/bin/perl -T use strict; use warnings; $ENV{"PATH"} = "/usr/bin:/bin"; my (undef,undef,$hour) = localtime(time); my $target = $hour - 2; # Flag process before this hour my $grep = 'page.py'; my @proclist = `ps -ef | grep $grep`; foreach my $proc (@proclist) { $proc =~ /(\w+)\s+(\d+)\s+\d+\s+\d+\s+(.*?).*/; my $user = $1; my $pid = $2; my $stime = $3; $stime =~ s/(\d+):(\d+)/$1/; # We're going to do a numeric compare against strings that # potentially compare things like 'Aug01' when the STIME is old # enough. We don't care, and we want to catch those old pids, so # we just turn the warnings off inside this foreach. no warnings 'numeric'; unless ($stime > $target) { print "$pid\n"; } } I have a server with wrong dates in /proc and find doesn't work so I wrote this script: #!/bin/bash MAX_DAYS=7 #set the max days you want here MAX_TIME=$(( $(date +'%s') - $((60*60*24*$MAX_DAYS)) )) function search_and_destroy() { PATTERN=$1 for p in $(ps ux|grep "$PATTERN"|grep -v grep| awk '{ print $2 }') do test $(( $MAX_TIME - $(date -d "`ps -p $p -o lstart=`" +'%s') )) -ge 0 && kill -9 $p done } search_and_destroy " command1 " search_and_destroy " command2 " Python version using the ctime of the process entries in /proc: /proc #!/usr/bin/env python # -*- coding: utf-8 -*- # kills processes older than HOURS_DELTA hours import os, time SIGNAL=15 HOURS_DELTA=1 pids = [int(pid) for pid in os.listdir('/proc') if pid.isdigit()] for pid in pids: if os.stat(os.path.join('/proc', str(pid))).st_ctime < time.time() - HOURS_DELTA * 3600: try: os.kill(pid, SIGNAL) except: print "Couldn't kill process %d" % pid The lstart field in ps gives a consistent time format which we can feed to date to convert to seconds since the epoch. Then we just compare that to the current time. lstart date #!/bin/bash current_time=$(date +%s) ps axo lstart=,pid=,cmd= | grep page.py | while read line do # 60 * 60 is one hour, multiply additional or different factors for other thresholds if (( $(date -d "${line:0:25}" +%s) < current_time - 60 * 60 )) then echo $line | cut -d ' ' -f 6 # change echo to kill fi done I use this simple script it takes two arguments name of process and age in seconds. #!/bin/bash # first argument name of the process to check # second argument maximum age in seconds # i.e kill lighttpd after 5 minutes # script.sh lighttpd 300 process=$1 maximum_runtime=$2 pid=`pgrep $process` if [ $? -ne 0 ] then exit 0 fi process_start_time=`stat /proc/$pid/cmdline --printf '%X'` current_time=`date +%s` let diff=$current_time-$process_start_time if [ $diff -gt $maximum_runtime ] then kill -3 $pid fi 72=3days 48=2days 24=1day a1=$(TZ=72 date +%d) ps -ef| cat filex.txt | sed '/[JFMASOND][aepuco][nbrylgptvc] '$a1'/!d' | awk '{ print $2 " " $5 " " $6 }' > file2.txt it works :) By posting your answer, you agree to the privacy policy and terms of service. asked 4 years ago viewed 12555 times active 27 days ago
http://serverfault.com/questions/181477/how-do-i-kill-processes-older-than-t/181503
CC-MAIN-2015-22
refinedweb
1,421
75.61
On Mon, Sep 30, 2013 at 09:12:35PM +0900, Yuto KAWAMURA wrote: > 2013/9/20 Daniel P. Berrange <berrange redhat com>: > > On Thu, Sep 19, 2013 at 11:26:08PM +0900, Yuto KAWAMURA(kawamuray) wrote: > >> diff --git a/tools/wireshark/src/moduleinfo.h b/tools/wireshark/src/moduleinfo.h > >> new file mode 100644 > >> index 0000000..9ab642c > >> --- /dev/null > >> +++ b/tools/wireshark/src/moduleinfo.h > >> @@ -0,0 +1,37 @@ > >> +/* moduleinfo.h --- Define constants about wireshark plugin module > > ... > >> + > >> +/* Included *after* config.h, in order to re-define these macros */ > >> + > >> +#ifdef PACKAGE > >> +# undef PACKAGE > >> +#endif > >> + > >> +/* Name of package */ > >> +#define PACKAGE "libvirt" > > > > Huh ? "PACKAGE" will already be defined to 'libvirt' so why are > > you redefining it. > > > >> + > >> + > >> +#ifdef VERSION > >> +# undef VERSION > >> +#endif > >> + > >> +/* Version number of package */ > >> +#define VERSION "0.0.1" > > > > This means the wireshark plugin will have a fixed version, even > > when libvirt protocol changes in new releases. This seems bogus. > > Again I think we should just use the existing defined "VERSION". > > > > I think this whole file can just go away completely > > > > Right. I'll remove whole moduleinfo.h. > > >> diff --git a/tools/wireshark/src/packet-libvirt.c b/tools/wireshark/src/packet-libvirt.c > >> new file mode 100644 > >> index 0000000..cd3e6ce > >> --- /dev/null > >> +++ b/tools/wireshark/src/packet-libvirt.c > >> +static gboolean > >> +dissect_xdr_bytes(tvbuff_t *tvb, proto_tree *tree, XDR *xdrs, int hf, > >> + guint32 maxlen) > >> +{ > >> + goffset start; > >> + guint8 *val = NULL; > >> + guint32 length; > >> + > >> + start = xdr_getpos(xdrs); > >> + if (xdr_bytes(xdrs, (char **)&val, &length, maxlen)) { > >> + proto_tree_add_bytes_format_value(tree, hf, tvb, start, xdr_getpos(xdrs) - start, > >> + NULL, "%s", format_xdr_bytes(val, length)); > >> + /* Seems I can't call xdr_free() for this case. > >> + It will raises SEGV by referencing out of bounds argument stack */ > >> + xdrs->x_op = XDR_FREE; > >> + xdr_bytes(xdrs, (char **)&val, &length, maxlen); > >> + xdrs->x_op = XDR_DECODE; > > > > Is accessing the internals of the 'XDR' struct really portable ? I think > > it would be desirable to solve the xdr_free problem rather than accessing > > struct internals > > > > I'll change this to use free(), but let me explain this problem detail. > > xdr_bytes may raises SEGV when it called from xdr_free. > This is caused by xdr_free is accessing it's third argument 'sizep' even if > it was called from xdr_free(in other word, when xdrs->x_op is XDR_FREE). > This problem can't be reproduced in 64bit architecture due to 64bit > system's register usage (I'll explain about this later). > > Following is a small enough code to reproduce this issue: > > #include <stdio.h> > #include <stdlib.h> > #include <rpc/xdr.h> > > /* Contents of this buffer is not important to reproduce the issue */ > static char xdr_buffer[] = { > 0x00, 0x00, 0x00, 0x02, /* length is 2byte */ > 'A', '\0', 0, 0 /* 2 byte data and 2 byte padding bytes */ > }; > > /* Same as the prototype of xdr_bytes() */ > bool_t my_xdr_bytes(XDR *xdrs, char **cpp, u_int *sizep) > { > return TRUE; > } > > /* Same as the prototype of xdr_free() */ > void my_xdr_free(xdrproc_t proc, char *objp) > { > XDR x; > (*proc)(&x, objp, NULL/* This NULL stored at same pos of 'sizep' > in xdr_bytes() */); > } > > int main(void) > { > XDR xdrs; > char *opaque = NULL; > unsigned int size; > > xdrmem_create(&xdrs, xdr_buffer, sizeof(xdr_buffer), XDR_DECODE); > if (!xdr_bytes(&xdrs, &opaque, &size, ~0)) { > fprintf(stderr, "xdr_bytes() returns FALSE\n"); > exit(1); > } > > /* Reproduce same stack-upping as call of xdr_free(xdr_bytes, &opaque). > This is needed to stack-up 0x0(invalid address) on position of > 'sizsp' which is third argument of xdr_bytes(). */ > my_xdr_free((xdrproc_t)my_xdr_bytes, (char *)&opaque); > > /* *** SEGV!! *** */ > xdr_free((xdrproc_t)xdr_bytes, (char *)&opaque); > /* ************** */ Ok, the scenario here is - 'xdr_bytes' takes 4 arguments - 'xdrproc_t' takes 2 mandatory args + var-args - 'xdr_free' calls the 'xdrproc_t' function with only 2 arguments - 'xdr_bytes' unconditionally accesses its 3rd argument So either - the cast from xdr_bytes -> xdrproc_t is invalid and thus xdr_bytes should not be used with xdr_free. or - xdr_bytes impl in glibc is buggy and shouldn't access the 3rd arg except when doing encode/decode operations. Regardless of which is right, we want our code to work on all xdr impls, so we must avoid problem 2. So I think we should just not use xdr_free here. Just do a plain 'free(opaque)' instead. Daniel -- |: -o- :| |: -o- :| |: -o- :| |: -o- :|
https://www.redhat.com/archives/libvir-list/2013-September/msg01749.html
CC-MAIN-2015-32
refinedweb
661
64.41
Game Mechanics in Phaser: Simple Helicopter Obstacle Course By Josh Morony This will be the first part in a new series I plan on creating for creating HTML5 games with the Phaser framework. We will be focusing on how to create bare-bones versions of common game mechanics with Phaser. The same basic game mechanics can be used to create a multitude of different styles of games. Take puzzle games like Bejeweled or Candy Crush, many of these games use the exact same mechanics (with some differences of course), they just have a different coat of paint over those core mechanics. Running platformers is another good example, the core game mechanic is a side-scrolling environment where a player can jump between platforms. Before adding things like animated sprites, backgrounds, and sound effects to games, I like to get the basic mechanics working in the simplest way possible. This usually means using basic squares for the player’s character and the environment. The bells and whistles like great artwork and sounds are what make a game great, but it takes a lot of time to get these things right. By focusing on just the core mechanics first, you will quickly be able to tell if the mechanics are fun or not before investing too much time. In this tutorial, we are going to build the basic mechanics for a helicopter style game. I remember spending way too much school time playing this game as a kid. Here’s what it will look like when we are done: >>IMAGE. You should also have a basic understanding of how Phaser games work before attempting this tutorial, as I will not be covering all of the basics here. If you are not familiar with Phaser, I do have more tutorials available. 1. Generate a New Phaser Project We will start off by generating a new Phaser game based on the ES6 template with the following command: git clone phaser-helicopter Once that has finished downloading you should make that game your current working directory: cd phaser-helicopter and then install all of the dependencies with the following command: npm install to view your game at any time you can run the following command: npm start and then go to the following address in your browser: 2. Create the Helicopter We are going to start off by creating an object for our Helicopter. It doesn’t necessarily have to be a helicopter, it could just as easily be a bird or a superhero depending on what sprite you end up using, but we are going to refer to it as a helicopter. The Helicopter object will be responsible for handling everything related to the helicopter, and we will import it into our Main state to use in the game. If you’re using the phaser boilerplate I linked above you will already have an ExampleObject set up, so we are just going to reuse that. Rename src/objects/ExampleObject.js to src/objects/Helicopter.js and add the following: class Helicopter { constructor(game){ this.game = game; this.isRising = false; this.sprite = null; } spawn(){ let helicopterSprite = new Phaser.Graphics(this.game) .beginFill(Phaser.Color.hexToRGB('#2c3e50'), 1) .drawRect(0, 0, 100, 100); let helicopterSpriteTexture = helicopterSprite.generateTexture(); this.sprite = this.game.add.sprite(this.game.world.centerX, this.game.world.centerY, helicopterSpriteTexture); this.game.physics.arcade.enable(this.sprite); this.sprite.enableBody = true; this.sprite.body.gravity.y = 5000; this.sprite.body.velocity.y = -1500; this.sprite.body.collideWorldBounds = false; this.sprite.anchor.setTo(0.5, 0.5); } setRising(){ this.isRising = true; } setFalling(){ this.isRising = false; } increaseVerticalVelocity(){ this.sprite.body.velocity.y -= 200; } isOutOfBounds(){ let position = this.sprite.body.position.y; return position > this.game.world.height || position < 0; } } export default Helicopter; The spawn method is what will handle adding the helicopter to the game. Instead of using an image for the sprite, we first programmatically create our own texture by using the Phaser Graphics library. This allows us to create shapes just like we would with an HTML5 canvas, so we don’t need to worry about creating an image and loading it in, we can just create a simple square. We then use that texture when creating the sprite. We also set some physics up on the sprite so that it will fall to the ground by default, and we give it an initial upwards velocity to give the player some time to react before it falls and dies. We also set up a few helper methods in here that we will reference in our Main state. We are able to set whether the helicopter is rising or not (this will be based on whether the user is currently clicking or tapping the screen), we have a method to increase the helicopters upwards velocity and a method to check if the helicopter is out of bounds (above or below the edge of the screen). 3. Add Controls for the Helicopter Now that we have our helicopter object created, we are going to import it into our Main state and set up some controls for it. Modify src/states/Main.js to reflect the following: import Helicopter from 'objects/Helicopter'; class Main extends Phaser.State { create() { //Enable Arcade Physics this.game.physics.startSystem(Phaser.Physics.ARCADE); //Set the games background colour this.game.stage.backgroundColor = '#cecece'; this.helicopter = new Helicopter(this.game); this.helicopter.spawn(); this.addControls(); } update() { //); } gameOver(){ this.game.state.restart(); } } export default Main; We import the Helicopter class at the top of the file, and then we create a new object using it in the create method. We then call its spawn method to add it to the game. At this point, the helicopter will be added to the screen, but it will just immediately fall to the bottom because we have no way to control it. So, we add a call to addControls where we set up an onDown and onUp event listener. When the user taps or clicks, the helicopter will have its isRising value set to true, and when the user lets go the value will be set back to false. We use this value in the update method to decide whether or not we call the helicopter’s increaseVerticalVelocity method. The end result is that when a user is tapping or clicking the helicopter will rise, and when they let go it will fall (due to the gravity we set earlier). We also make a call to the isOutOfBounds method to see if the helicopter is within the bounds of the game or not, and if it isn’t, we trigger the gameOver method which will restart the game. Here’s what you should have currently: 4. Create the Obstacles We’ve got the mechanics for the helicopter working quite well, but we’re still missing a key component of the core game mechanic and that’s some obstacles to dodge. We’re going to add that now by using a similar approach to what we did with the helicopter. We will be creating a MovingWalls object. Add a file at src/objects/MovingWalls.js and add the following: class MovingWalls { constructor(game){ this.game = game; this.wallGroup = null; this.spriteGroup = null; this.wallSpeed = 300; let seed = Date.now(); this.random = new Phaser.RandomDataGenerator([seed]); this.initWalls(); } initWalls(){ this.wallHeight = this.random.integerInRange(20, this.game.world.height / 3); this.wallWidth = 200; let wallSprite = new Phaser.Graphics(this.game) .beginFill(Phaser.Color.hexToRGB('#e74c3c'), 1) .drawRect(0, 0, this.wallWidth, this.wallHeight); let wallSpriteTexture = wallSprite.generateTexture(); this.spriteGroup = this.game.add.group(); this.spriteGroup.enableBody = true; this.spriteGroup.createMultiple(10, wallSpriteTexture); } spawn(){ let wall = this.spriteGroup.getFirstDead(); wall.body.gravity.y = 0; wall.reset(this.game.world.width, this.random.integerInRange(0, this.game.world.height)); wall.body.velocity.x = -this.wallSpeed; wall.body.immovable = true; //When the block leaves the screen, kill it wall.checkWorldBounds = true; wall.outOfBoundsKill = true; } } export default MovingWalls; This is a very similar idea to the Helicopter except for a couple key differences. Instead of spawning a single sprite, we are creating a group of sprites, so we initially create 10 sprites in a group using: this.spriteGroup.createMultiple(10, wallSpriteTexture); then when we need to spawn a wall, we just use one of the sprites in this group that aren’t currently being used by using the getFirstDead method. This allows us to recycle our sprites rather than constantly creating new ones, which is a lot better for performance. To spawn a wall, we just grab one of our unused sprites and reset its position to the right side of the world. We give it a negative velocity so that it travels to the left of the screen which is what simulates movement in our game (our helicopter doesn’t move to the right, the whole game world moves around the helicopter to the left). We also need to make sure to set the checkWorldBounds and outOfBoundsKill properties so that the walls are killed when they leave the game space (if we don’t kill them, then we can’t recycle them). Now we just need to make use of these walls in our Main state. Modify src/states/Main.js to reflect the following: import Helicopter from 'objects/Helicopter'; import MovingWalls from 'objects/MovingWalls'; class Main extends Phaser.State { create() { //Enable Arcade Physics this.game.physics.startSystem(Phaser.Physics.ARCADE); //Set the games background colour this.game.stage.backgroundColor = '#cecece'; this.helicopter = new Helicopter(this.game); this.helicopter.spawn(); this.walls = new MovingWalls(this.game); this.addControls(); this.addTimers(); } update() { this.game.physics.arcade.overlap(this.helicopter.sprite, this.walls.spriteGroup, this.gameOver, null, this); //); } addTimers(){ this.game.time.events.loop(2000, this.walls.spawn, this.walls); } gameOver(){ this.game.state.restart(); } } export default Main; We’ve added an addTimers method that will set up a loop that will call the walls spawn method once every 2 seconds. We also add an overlap check in the update method that will detect collisions between the helicopter and any of the walls. If a collision occurs, then we call the gameOver method. The game should now look like this: Summary This game is far from being completed, but the core game mechanics are there now. You could quickly make the game look a lot more interesting by adding sprites for the helicopter and for the walls, and by adding a more interesting background. You might also want to add more bells and whistles, like music, collision sound effects, power-ups the player can collect, and so on. These things can be added incrementally, all while maintaining a working game. I find this to be a much more manageable approach rather than trying to add in everything right from the start.
https://www.joshmorony.com/game-mechanics-in-phaser-simple-helicopter-obstacle-course/
CC-MAIN-2020-10
refinedweb
1,797
56.05
2013-08-08 Linking and calling Rust functions from C At a recent functional programming meetup I was discussing with a colleague about how nice it would be to be able to use Rust in Gecko. This made me curious if it was possible to implement libraries in Rust and call them from C. After the meeting I asked in #rust and got pointed to some projects that showed the way. This lead me to trying to come up with as simple example as possible of compiling a Rust file into an object file, linking it to a C program and running it without the Rust runtime. The code is in my rust-from-c-example github repository. It can be cloned and built with: $ git clone $ cd rust-from-c-example $ make $ ./test To avoid issues with integrating with the Rust runtime I've opted to not use it. This means no threads and limits the standard library usage. This example is very simple, only demonstrating adding two numbers. Extending from this will be an interesting exercise to see how much Rust can be used. The Rust code is: #[crate_type = "lib"]; #[no_std]; #[allow(ctypes)]; #[no_mangle] pub extern fn add(lhs: uint, rhs: uint) -> uint { lhs + rhs } The first three lines ensure that the file is compiled as a library, does not use the standard library and can use C types. The no_mangle declaration stops the Rust default of mangling function names to include their module and version information. This means that add in Rust is exported as add for C programs. The extern makes the function available from C and defaults to cdecl calling format. To generate a .o file that can be linked into a C program: $ rustc -c mylib.rs The C program creates an extern declaration for add and calls it: #include <stdio.h> extern unsigned int add(unsigned int lhs, unsigned int rhs); int main() { printf("add(40,2) = %u\n", add(40,2)); return 0; } Unfortunately we can't just compile and link with the mylib.o file. This results in a linker error: mylib.o: In function `add': mylib.rc:(.text+0x4f): undefined reference to `upcall_call_shim_on_rust_stack' collect2: error: ld returned 1 exit status Some searching pointed me to armboot which had a stub implementation for this in zero.c. Compiling and linking to that worked successfully. A cut down variant of zero.c is included in the project. There's a bunch of limitations with this approach. We're basically using Rust as a higher level C. This post on embedding Rust in Ruby details some of the limitations: When calling Rust code you will not be executing in a Rust task and will not have access to any runtime services that require task-local resources. Currently this means you can't use the local heap, nor can you spawn or communicate with tasks, nor call fail!() to unwind the stack. I/O doesn't work because core::io (unfortunately, and incorrectly) uses @-boxes. Even logging does not work. Calling any code that tries to access the task context will cause the process to abort. Because code is not executing in a task, it does not grow the stack, and instead runs on whatever stack the foreign caller was executing on. Recurse too deep and you will scribble on random memory. Hopefully some of these limitations will go away or 'zero runtime' libraries will appear to make this sort of usage easier. Some resources that helped putting this together were:
https://bluishcoder.co.nz/2013/08/08/linking_and_calling_rust_functions_from_c.html
CC-MAIN-2019-13
refinedweb
587
64.41
Years ago, I worked on a very large Ruby on Rails codebase that used constants to hold lists of credit card transaction states. For example: class Txn ACTIONABLE_STATES = [:authenticated, :to_settle] DONE_STATES = [:settled, :declined] end However, we had a bug where a settled transaction would return true when txn.state.in… Assigning a static reference to an instance method call could be perilous. Let’s take a look at an example Java class to examine why: /** * Foo.java */ public class Foo { public static String foo = Config.getInstance().getFoo(); } Seems pretty innocuous in itself, but ostensibly we’re just assigning static… Getting your blog post to play nicely with Safari Reader—my preferred way of consuming blogs—isn’t always obvious. This post documents my findings in how to optimize a blog for Safari Reader if you’re publishing outside of Medium (e.g. Jekyll). <article>tag Safari Reader will look for a couple of container tags, but… There’s been some significant infrastructure changes under the hood of Quasars and I wanted to talk about them here. In another blog post I had talked about how Quasars uses Blue/Green Deployments, but as of today, that is no longer the case. In reality, very little has changed. … I am the creator and administrator of quasa.rs, a social link-sharing web application (like reddit or hackernews) for astrophysics. It’s a fun side project that keeps me from getting rusty with Ruby on Rails because, sadly, I don’t use Ruby for my day job anymore. Plus I get to… How likely it is for each digit of pi to appear? Let’s find out by charting the digits of pi into a frequency graph. The more data I can collect, the more apparent patterns (if any) will appear….
https://p16n.medium.com/?source=follow_footer-----3afe7df465b0--------------------------------
CC-MAIN-2021-43
refinedweb
298
62.27
NAMEregcomp, regexec, regerror, regfree - POSIX regex functions SYNOPSIS #include <regex.h> int regcomp(regex_t *restrict preg, const char *restrict regex,); DESCRIPTION compiled is the bitwise-or of zero or more of the following flags: -). -Unless ('\0').BR - Invalid use of back reference operator. - REG_BADPAT - Invalid use of pattern operators such as group or list. - REG_BADRPT - Invalid use of repetition operators such as using '*'For an explanation of the terms used in this section, see attributes(7). CONFORMING TOPOSIX.1-2001, POSIX.1-2008. EXAMPLES #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <regex.h> #define ARRAY_SIZE(arr) (sizeof((arr)) / sizeof((arr)[0])) static const char *const str = "1) John Driverhacker;\n2) John Doe;\n3) John Foo;\n"; static const char *const re = "John.*o"; int main(void) { static const char *s = str; regex_t regex; regmatch_t pmatch[1]; regoff_t off, len; if (regcomp(®ex, re, REG_NEWLINE)) exit(EXIT_FAILURE); printf("String = \"%s\"\n", str); printf("Matches:\n"); for (int i = 0; ; i++) { if (regexec(®ex, s, ARRAY_SIZE(pmatch), pmatch, 0)) break; off = pmatch[0].rm_so + (s - str); len = pmatch[0].rm_eo - pmatch[0].rm_so; printf("#%d:\n", i); printf("offset = %jd; length = %jd\n", (intmax_t) off, (intmax_t) len); printf("substring = \"%.*s\"\n", len, s + pmatch[0].rm_so); s += pmatch[0].rm_eo; } exit(EXIT_SUCCESS); } SEE ALSOgrep(1), regex(7) The glibc manual section, Regular Expressions
https://man.archlinux.org/man/regex.3.en
CC-MAIN-2021-21
refinedweb
224
51.95
Search the Community Showing results for 'barba'.? Issue with GSAP 3 & BarbaJS when page loads to itself digitalfunction replied to digitalfunction's topic in GSAPHere is the link to the codepen project files - Again the demo is static, but the site I am working on is dynamic so the different page types will have different animations - so if you navigate from one subpage to another subpage (namespace = the same name) everything should reset or recalculate so the animations start all over from scratch. Same concept for portfolio page types to the next portfolio page type. I am not sure if my issue is with gsap3 or barba. I am basically trying to reset/refresh/recalculate all the animations to start from scratch when you go back and forth between the 2 pages. When the page initially loads and leaves all the animation trigger - main container. black background, the page title, then the content. It all animates out as I want it, but the next container only animates the main container and everything beyond that does not. I have tried everything - kill, restart, to, from, fromTo etc... I can get the main container within the barba wrapper to animate between the pages, but data-barba="container" data-barba-namespace="subpage" does not reset. Once it goes to the next page/container the timeline inline styles are removed from the data-barba-namspace="subpage" containers so it does not animate. Not sure why this is happening. GSAP - Again, I have tried just about everything - I am novice to JavaScript so having a hard time figuring this out. Barba - tried "views", different transition namespaces, all the different hooks - just not sure where to put the kill or reset or whatever I am missing to make it reset all the timeline values on the next container. I am hoping someone can help. PLEASE and thank you for your time. Let me know what else you need. limbo replied to vladdesign's topic in GSAPWhen working with GSAP and Barba I've found the best approach is to wrap your gsap timelines in functions and call them on barba transitions/views. You tried that? Something like: function pageTransition() { // your splittext / other onload stuff } Then: barba.init({ // barba configurations here transitions: [{ // or 'views' depending on your use case async enter(data) { pageTransition() console.log('Transitions: Enter'); }, }] }); // barba.init Splittext weird breaks - GSAP + Barba Cassie replied to vladdesign's topic in GSAPHi there @vladdesign, It's hard to debug without a look at the barba.js issue but it sounds like it's down to how barba.js is replacing/loading the DOM This thread has links to a few previous forum posts that may prove useful - Good luck with your project - I hope this helps.. - Good afternoon. I managed to make a transition animation when clicking on "Back", but when clicking on a link with index.html it is not possible to implement flip. In Slack Barba, only one person answered me, but he says that barba is not very good... But it turns out to make an animation in the cover__div block... P.S.: codesandbox why doesn't it work out the transition here... Maybe there is really some option with another plugin for the transition? 😒 - It might have something to do with the fact that it has redirect to get the latest version, so it will return this. It might be swapping the instance out in the middle of the transition. - It's really not working all the way. If you change the duration, it won't complete. It should wait 10 seconds before showing the next page. Locally it will work fine. It's just something with codesandbox's environment. 🤷♂️ barba.init({ transitions: [ { name: "opacity-transition", leave(data) { return gsap.to(data.current.container, { opacity: 0.2, duration: 10 }); }, enter(data) { return gsap.from(data.next.container, { opacity: 0.8, duration: 10 }); } } ] }); - Very interesting, I realized that js barba periodically falls off when it is connected like this: But when I saved it and connected it locally as a file barba.umd.js, it stopped falling off - Yes, about him. It's just that if codesandbox couldn't work with barba, then the usual opacity wouldn't work, right? - By the way, this code works, but why doesn't the other option work? barba.init({ transitions: [{ name: 'opacity-transition', leave(data) { return gsap.to(data.current.container, { opacity: 0.2 }); }, enter(data) { return gsap.from(data.next.container, { opacity: 0.8 }); } }] }); - I don't know what to tell you. Flip is working as intended. IF you need help with Barba, it would be best to ask questions over on their slack. If you need finer control over the loading process, perhaps try using a framework with a router, like Vue. - I have already watched the video, I have practiced doing something from there, taking into account the recommendations of Cassie. And it worked. But here is not a separate working out of flip, namely in the transition. In the gsap topics, I found a description, only according to the old version of barba... I took your example, barba works, but the flip process itself seems to work only at the end, when the page has already opened, there is some flashing of the block on the page. But it does not work during the download process... As in the reference, in it, flip is in the process... https:/ / studio-size. com/ - screen Some simple animation of the transition is obtained, but the entire page as a whole, for example, opacity or zoomout, but not a separate block... There is clearly some nuance here that I don't understand and probably just don't know that it exists - Unfortunately, it does not work... Even if I add @id to img in data-flip-id, so that it is different, because img must be different in this data. But it doesn't work out... And on pages like cases.html there should be such a block... There may be a video inside the div.bgimage_block. Therefore, there is a block there. <div class="bgimage_block" style="background-image: url(); "> </div> Barba work, but without animation.. - Codesandbox's server doesn't seem to play nicely with barba. I did this locally, and it seemed to work fine, so it should help you get started. index.js gsap.registerPlugin(Flip); let flipState; barba.init({ transitions: [ { sync: true, beforeLeave(data) { const target = data.current.container.querySelector("[data-flip-id]"); if (target) { flipState = Flip.getState(target); } }, enter(data) { const target = data.next.container.querySelector("[data-flip-id]"); if (target && flipState) { gsap.set(data.current.container, { opacity: 0, position: "absolute" }) return Flip.from(flipState, { targets: target, absolute: true, scale: true }); } } } ] }); <a href="cases.html">Cases</a> <div> <img data- </div> </main> </div> <script src=""></script> <script src=""></script> <script src=""></script> <script src=""></script> <script src="index.js"></script> </body> </html> <a href="index.html">Home</a> <div> <img data- </div> </main> </div> <script src=""></script> <script src=""></script> <script src=""></script> <script src=""></script> <script src="index.js"></script> </body> </html> You're probably going to like dynamically adding a data-flip-id attribute on click. myElement.setAttribute("data-flip-id", "img");
https://greensock.com/search/?q=barba&updated_after=any&page=1&sortby=newest
CC-MAIN-2022-05
refinedweb
1,202
66.94
This preview shows page 1. Sign up to view the full content. Unformatted text preview: til it can be reaped by the parent. End Aside. If the parent process terminates without reaping its zombie children, the kernel arranges for the init process to reap them. The init process has a PID of 1 and is created by the kernel during system initialization. Long-running programs such as shells or servers should always reap their zombie children. Even though zombies are not running, they still consume system memory resources. A process waits for its children to terminate or stop by calling the waitpid function. #include <sys/types.h> #include <sys/wait.h> pid t waitpid(pid t pid, int *status, int options); returns: PID of child if OK, 0 (if WNOHANG) or -1 on error The waitpid function is complicated. By default (when options = 0), waitpid suspends execution of the calling process until a child process in its wait set terminates. If a process in the wait set has already 410 CHAPTER 8. EXCEPTIONAL CONTROL FLOW terminated at the time of the call, then waitpid returns immediately. In either case, waitpid returns... View Full Document This note was uploaded on 09/02/2010 for the course ELECTRICAL 360 taught by Professor Schultz during the Spring '10 term at BYU. - Spring '10 - Schultz - The American Click to edit the document details
https://www.coursehero.com/file/5936594/If-a-process-has-a-pending-signal-of-type-then-any-subsequent/
CC-MAIN-2017-09
refinedweb
229
65.93
Here is what I have but it does not work properly.....what am I doing wrong? #include <iostream> #include <time.h> using namespace std; int main(void) { srand(static_cast<int> (time(NULL))); int upperLimit = 10; int lowerLimit = 1; int userGuess = 0; int count = 0; int randNum = 0; const int exit = -1; cout << "***Welcome to the Guessing Game!***\n"; srand (int(time(NULL))); randNum = (rand() % (upperLimit - lowerLimit + 1)) + lowerLimit; cout << "Pick a number between 1-10 (-1 to Exit): "; cin >> userGuess; do { if (userGuess < randNum) cout << endl << "Too low, try again." << endl; else if (userGuess > randNum) cout << endl << "Too high, try again." << endl; else if (userGuess == randNum) cout << endl << "You've guessed wisely" << endl; else cout << "The number was " << randNum << " you gave up after " << count << " guesses.\n"; count++; cin >> userGuess; } while(userGuess != randNum && userGuess > 0); cout << "The number was " << randNum << " it took you " << count << " guesses.\n"; return 0; }
http://www.dreamincode.net/forums/topic/126935-guessing-game/page__p__773401
CC-MAIN-2013-20
refinedweb
147
66.44
Can you use image processing techniques to find objects? Are your eyes tired of looking for hidden objects in games? Do you want an easy way to do it? Friends and family, may I introduce my algorithm to ease your worries. We will use template matching in finding these objects. What is template matching? Template matching is an image processing technique of finding the template image (the image that we want to find) from a source image (the bigger image where we need to search for the image). The intuition behind this is to compare the pixel values of the… Can Neural Networks and Image Processing be able to solve sudoku puzzles?: Hello everyone. In this post, we will extract data using image processing techniques. We have two images of leaves here as our example, and the goal is to process the images for machine learning classification. We will be using python libraries. Let’s start. Overall Process In this post, we will show how to extract data for both neural networks and traditional machine learning approaches. The overall process consists of the following steps: Binarization Binarization involves changing the image from a multitone image into a two-tone image (black and white). If… Hello, welcome again to another blog post for the application of skimage. We will try to adjust the image by adjusting their Cumulative Distrubtion Function (CDF) of their intensities. We will again import the required python libraries. import numpy as np import matplotlib.pyplot as plt import pandas as pdfrom skimage.io import imshow, imread from skimage.color import rgb2gray from skimage import img_as_ubyte, img_as_float from skimage.exposure import histogram, cumulative_distribution Let’s check the first time image below. The image is dark, and we want it to look a little bit brighter. Let’s check first the histogram value of the image in grayscale space… In this article, we will focus on how to handle images with color overcast. These kinds of images usually happen when you have either an underexposure or overexposure. Let’s get started by uploading the needed libraries. import numpy as np import matplotlib.pyplot as plt import pandas as pd from skimage.io import imshow, imread from skimage.util import img_as_ubyte The image has mostly bluish in color and overcast. Let's investigate the RGB spectrum of the image. image_overcast = imread(“siemreap.jpg”) rgb_list = [‘Reds’,’Greens’,’Blues’] fig, ax = plt.subplots(1, 3, figsize=(15,5), sharey = True) for i in range(3): ax[i].imshow(image_overcast[:,:,i], cmap = rgb_list[i]) ax[i].set_title(rgb_list[i], … Python is a fun programming language for exploring around with color spaces in images. I hope this story will give you some ideas on the basic types of color spaces. In this story, we will use Yuyuko as our sample. Let’s get started first with some important libraries in Python. import numpy as np import matplotlib.pyplot as plt from skimage import img_as_uint from skimage.color import rgb2hsv from skimage.io import imshow, imread from skimage.color import rgb2gray from skimage.color import rgb2lab The RGB Model is a color model that can produce various colors by “additive” combinations of the primary colors… Learning more about Image Processing using Python
https://kdtabongds.medium.com/?source=post_internal_links---------3----------------------------
CC-MAIN-2021-17
refinedweb
537
59.7
I just returned from the Perl Dancer Conference, held in Vienna, Austria. It was a jam-packed schedule of two days of training and two conference days, with five of the nine Dancer core developers in attendance. If you aren't familiar with Perl Dancer, it is a modern framework for Perl for building web applications. Dancer1 originated as a port of Ruby's Sinatra project, but has officially been replaced with a rewrite called Dancer2, based on Moo, with Dancer1 being frozen and only receiving security fixes. The Interchange 5 e-commerce package is gradually being replaced by Dancer plugins. Day 1 began with a training on Dancer2 by Sawyer X and Mickey Nasriachi, two Dancer core devs. During the training, the attendees worked on adding functionality to a sample Dancer app. Some of my takeaways from the training: - Think of your app as a Dancer Web App plus an App. These should ideally be two separate things, where the Dancer web app provides the URL routes for interaction with your App. - The lib directory contains all of your application. The recommendation for large productions is to separate your app into separate namespaces and classes. Some folks use a routes directory just for routing code, with lib reserved for the App itself. - It is recommended to add an empty .dancer file to your app's directory, which indicates that this is a Dancer app (other Perl frameworks do similarly). - When running your Dancer app in development, you can use plackup -R lib bin/app.psgi which will restart the app automatically whenever something changes in lib. - Dancer handles all the standard HTTP verbs, except note that we must use del, not delete, as delete conflicts with the Perl keyword. - There are new keywords for retrieving parameters in your routes. Whereas before we only had param or params, it is now recommended to use: - route_parameters, - query_parameters, or - body_parameters - all of which can be used with ->get('foo') which is always a single scalar, or ->get_all('foo') which is always a list. - These allow you to specify which area you want to retrieve parameters from, instead of being unsure which param you are getting, if identical names are used in multiple areas. Day 2 was DBIx::Class training, led by Stefan Hornburg and Peter Mottram, with assistance from Peter Rabbitson, the DBIx::Class maintainer. DBIx::Class (a.k.a. DBIC) is an Object Relational Mapper for Perl. It exists to provide a standard, object-oriented way to deal with SQL queries. I am new to DBIC, and it was a lot to take in, but at least one advantage I could see was helping a project be able to change database back-ends, without having to rewrite code (cue PostgreSQL vs MySQL arguments). I took copious notes, but it seems that the true learning takes place only as one begins to implement and experiment. Without going into too much detail, some of my notes included: - Existing projects can use dbicdump to quickly get a DBIC schema from an existing database, which can be modified afterwards. For a new project, it is recommended to write the schema first. - DBIC allows you to place business logic in your application (not your web application), so it is easier to test (once again, the recurring theme of Web App + App). - The ResultSet is a representation of a query before it happens. On any ResultSet you can call ->as_query to find the actual SQL that is to be executed. - DBIx::Class::Schema::Config provides credential management for DBIC, and allows you to move your DSN/username/password out of your code, which is especially helpful if you use Git or a public GitHub. - DBIC is all about relationships (belongs_to, has_many, might_have, and has_one). many_to_many is not a relationship per se but a convenience. - DBIx::Class::Candy provides prettier, more modern metadata, but cannot currently be generated by dbicdump. - For deployment or migration, two helpful tools are Sqitch and DBIx::Class::DeploymentHandler. Sqitch is better for raw SQL, while DeploymentHandler is for DBIC-managed databases. These provide easy ways to migrate, deploy, upgrade, or downgrade a database. - Finally, RapidApp can read a database file or DBIC schema and provide a nice web interface for interacting with a database. As long as you define your columns properly, RapidApp can generate image fields, rich-text editors, date-pickers, etc. The training days were truly like drinking from a firehose, with so much good information. I am looking forward to putting this into practice! Stay tuned for my next blog post on the Conference Days.
http://blog.endpoint.com/2015/10/perl-dancer-conference-2015-report.html
CC-MAIN-2017-30
refinedweb
765
62.17
This is the second article in a three article series focusing on AJAX or Asynchronous JavaScript and XML. If you haven�t read Part 1, I encourage you to read Part 1 of this article series. In Part 1, we had covered the client side portion of AJAX, specifically the JavaScript object used to initiate asynchronous web requests. The goal of this article is to provide something easy for the server side (ASP.NET) developer to integrate into their code (preferably without a lot of fancy or proprietary workarounds). We will be going over the example code (see Download source files above) shown in listings 1 and 2, Article.aspx and Article.aspx.cs respectively. The code samples are shown in C# but the technique can be used with any .NET compliant language. Article.aspx is an example of how you might apply AJAX to a real web application. More and more web sites are requiring users to register before they can access most of the sites� content. This usually involves selecting a username and password as well as entering some other information like an email address or zip code. Rarely does a site allow two users to have the same username. So let�s say I choose �JohnDoe� as my username, I click the button to sign up, wait a while, only to have the web page reload saying that the username I have requested is not available. This process could go on for a while until I choose a unique username. AJAX allows us to validate the username as the user is typing it. No posting back to the server and no downloading a complete list of unavailable usernames. Article.aspx shows this �instant validation� in action. To validate the username being entered, we need to send the current username to the server, wait for the request to complete, then take action based on the result. I have simplified this process by creating the CallBackObject (see AJAX Was Here � Part 1 for the details). var Cbo = new CallBackObject(); Cbo.OnComplete = Cbo_Complete; Cbo.OnError = Cbo_Error; Here we are creating a new CallBackObject and telling the object, �When my web request is completed, I want you to run the function Cbo_Complete�, and �If anything bad happens during the request, I want you to run the function Cbo_Error�. Since the web request is asynchronous, we don�t know when exactly the server will complete our request, so we setup the OnComplete event so we don�t have to sit and twiddle our thumbs. function Cbo_Complete(responseText, responseXML) { var msg = document.getElementById('lblMessage'); if( responseText == 'True' ) { msg.innerHTML = 'CallBack - Username Available!'; msg.style.color = 'green'; } else { msg.innerHTML = 'CallBack - Username Unavailable!'; msg.style.color = 'red'; } } When our web request does finish, we want to let the user know. Our web request will return True if the username entered is available, or False if the username entered is not available. If the username is available, we show a positive message with green text, otherwise, we show a negative message in red text. function Cbo_Error(status, statusText, responseText) { alert(responseText); } If an error occurs during the web request, we show the error using a standard alert box. function CheckUsername(Username) { var msg = document.getElementById('lblMessage'); if( Username.length > 0 ) { Cbo.DoCallBack('txtUsername', ''); } else { Cbo.AbortCallBack(); msg.innerHTML = ''; } } The function CheckUsername starts the asynchronous request (or Call Back) to the server. First, we make sure that the username in question is not blank, and then we call Cbo.DoCallBack, passing the ID of the username input box (more on this later). If the username is blank, we cancel any Call Back currently in process, and clear any messages. <asp:TextBox id=txtUsername onkeyup=CheckUsername(this.value);</asp:TextBox> Finally, within the HTML, we set the onkeyup attribute of our username textbox to execute CheckUsername and pass the current value of the textbox. We use onkeyup so that we can provide instant feedback to the user as they are entering their desired username. That is all the client side code required to make our AJAX example work. The beauty of the CallBackObject is that it essentially allows JavaScript code to fire server side events. Let�s look at the HTML snippet again. <asp:TextBox id=txtUsername onkeyup=CheckUsername(this.value);</asp:TextBox> Notice how we set OnTextChanged (which is a server side event) to txtUsername_TextChanged. As we will see in a moment (see Listing 2), txtUsername_TextChanged is an ASP.NET event written in C# that determines if the value of txtUsername is an available username. This event is raised from the client using JavaScript by this line: Cbo.DoCallBack('txtUsername', ''); Isn�t that cool? You are raising server side events with client side code without reloading the entire page. Let�s look a little deeper at txtUsername_TextChanged. protected void txtUsername_TextChanged(object sender, System.EventArgs e) { if( !CallBackHelper.IsCallBack ) return; string uName = txtUsername.Text; try { CallBackHelper.Write( IsUsernameAvailable(uName).ToString() ); } catch( Exception ex ) { CallBackHelper.HandleError( ex ); } } First off, we check to see if the current request is a Call Back, using the CallBackHelper (included in the source files). This is exactly the same as using Page.IsPostBack, you want to do different things depending on the context of the request. Next, we get the value of txtUsername and pass it to our IsUsernameAvailable function. This function returns a boolean indicating whether or not the username is available. Finally, we write that value back to the client using CallBackHelper.Write. Notice how we wrap the processing in a try/ catch block and if anything bad happens, we use CallBackHelper.HandleError. This makes sure that the client is notified of errors. private bool IsUsernameAvailable( string Username ) { bool isAvailable = true; switch( Username.ToLower() ) { case "bill": case "william": case "christopher": case "pierce": case "zonebit": isAvailable = false; break; } return isAvailable; } IsUsernameAvailable is a simple test function, normally you would lookup the requested username in a database to see if it was valid. I have also but an asp:button on the page to show the difference between our call back implementation and the standard ASP.NET implementation. If you click the �Check Username Availability� button, you post back the entire page to validate the username, which may take a while longer, plus the user cannot continue filling out the form until the server returns the results. Cruise on over to this site to see AJAX in action. Type a letter, and you should get a visual indicator of the availability of the username. Continue typing and you should be updated as you go (try bill or pierce for an unavailable username). You can also click the button to perform the same action using a standard post back. We could have returned any string to the client, including XML. In this example we only need a simple true/ false to indicate if the username is available. Another example might be sending a zip code from the client and returning the city and state from the server. The sky is the limit now that you can interact with server side code from the client. ASP.NET takes care of the event plumbing for us. As long as you use ASP.NET controls and implement standard events, SelectedIndexChanged, TextChanged, Click, you can easily use AJAX in your ASP.NET applications with the CallBackObject and CallBackHelper. If you�re not already tired of reading my rants, check out Part 3 of the article series when we create an Auto Complete Textbox. Auto Complete Textbox is an ASP.NET control that completes your text as you type. Awesome! <%@ Page</script> </HEAD> <form id="frmAjax" method="post" runat="server"> ', ''); } else { Cbo.AbortCallBack(); msg.innerHTML = ''; } } function Cbo_Complete(responseText, responseXML) { var msg = document.getElementById('lblMessage'); if( responseText == 'True' ) { msg.innerHTML = 'CallBack - Username Available!'; msg.style.color = 'green'; } else { msg.innerHTML = 'CallBack - Username Unavailable!'; msg.style.color = 'red'; } } function Cbo_Error(status, statusText, responseText) { alert(responseText); } </script> <table width="100%"> <tr> <td>Username:</td> <td> <asp:TextBox </td> <td align="left" width="100%"> <asp:Label </td> </tr> <tr> <td colspan="3" align="left"> <asp:Button </td> </tr> </table> </form> using System; using System.Collections; using System.ComponentModel; using System.Data; using System.Drawing; using System.Web; using System.Web.SessionState; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.HtmlControls; using WCPierce.Web; namespace AJAX { public class Article : System.Web.UI.Page { protected System.Web.UI.WebControls.TextBox txtUsername; protected System.Web.UI.WebControls.Label lblMessage; protected System.Web.UI.WebControls.Button btnCheckUsername; private void Page_Load(object sender, System.EventArgs e) { } #region Web Form Designer generated code protected void txtUsername_TextChanged(object sender, System.EventArgs e) { if( !CallBackHelper.IsCallBack ) return; string uName = txtUsername.Text; try { CallBackHelper.Write( IsUsernameAvailable(uName).ToString() ); } catch( Exception ex ) { CallBackHelper.HandleError( ex ); } } protected void btnCheckUsername_Click(object sender, System.EventArgs e) { string uName = txtUsername.Text; if( IsUsernameAvailable( uName ) ) { lblMessage.Text = "Server - Username Available!"; lblMessage.ForeColor = Color.Green; lblMessage.Visible = true; } else { lblMessage.Text = "Server - Username Unavailable!"; lblMessage.ForeColor = Color.Red; lblMessage.Visible = true; } //Simulate 5 second delay System.Threading.Thread.Sleep(5000); } private bool IsUsernameAvailable( string Username ) { bool isAvailable = true; switch( Username.ToLower() ) { case "bill": case "william": case "christopher": case "pierce": case "zonebit": isAvailable = false; break; } return isAvailable; } } } General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/ajax/AJAXWasHere-Part2.aspx
crawl-002
refinedweb
1,539
60.01
Stock Price Change Forecasting with Time Series: SARIMAX Author(s): Avishek Nag Machine Learning, Statistics High-level understanding of Time Series, stationarity, seasonality, forecasting, and modeling with SARIMAX Time series modeling is the statistical study of sequential data (may be finite or infinite) dependent on time. Though we say time. But, time here may be a logical identifier. There may not be any physical time information in a time series data. In this article, we will discuss how to model a stock price change forecasting problem with time series and some of the concepts at a high level. Problem Statement We will take Dow Jones Index Dataset from UCI Machine Learning Repository. It contains stock price information over two quarters. Let’s explore the dataset first: We can see only a few attributes. But there are other ones also. One of them is “percent_change_next_weeks_price”. This is our target. We need to forecast it for subsequent weeks given that we have current week data. The values of the ‘Date’ attribute indicate the presence of time-series information. Before jumping into the solution, we will discuss some concepts of Time Series at a high level for our understanding. Definition of Time series There are different techniques for modeling a time series. One of them is the Autoregressive Process (AR). There, a time series problem can be expressed as a recursive regression problem where dependent variables are values of the target variable itself at different time instances. Let’s say if Yt is our target variable there are a series of values Y1, Y2,… at different time instances, then, for all time instance t. Parameter µ is the mean of the process. We may interpret the term as representing “memory” or “feedback” of the past into the present value of the process. Parameter ф determines the amount of feedback and ɛt is information present at time t that can be added as extra. Definitely here by “process”, we mean an infinite or finite sequence of values of a variable at different time instances. If we expand the above recurrence relation, then we get: It is called the AR(1) process. h is known as the Lag. A Lag is a logical/abstract time unit. It could be hour, day, week, year etc. It makes the definition more generic. Instead of the only a single previous value, if we consider p previous values, then it becomes AR(p) process and the same can be expressed as: So, there are many feedback factors like ф1, ф2,..фp for AR(p) process. It is a weighted average of all past values. There is another type of modeling known as MA(q) process or Moving Average process which considers only new information ɛ and can be expressed similarly as a weighted average: Stationarity & Differencing From all of the equations above, we can see that if ф or θ< 1 then the value of Yt converges to µ i.e., a fixed value. It means that if we take the average Y value from any two-interval, then it will always be close to µ, i.e., closeness will be statistically significant. This type of series is known as Stationary time series. On the other hand, ф > 1 gives explosive behavior and the series becomes Non-stationary. The basic assumption of time series modeling is stationary in nature. That’s why we have to bring down a non-stationary series to a stationary state by differencing. It is defined as: Then, we can model ∆Yt again as time series. It helps to remove explosiveness as stated above. This differencing can be done several times as it is not guaranteed that just doing it one time will make the series stationary. ARIMA(p,d,q) process ARIMA is the join process modeling with AR(p), MA(q), and d times differencing. So, here Yt contains all the terms of AR(p) and MA(q). It says that, if an ARIMA(p,d,q) process is differentiated d times then it becomes stationary. Seasonality & SARIMA A time series can be affected by seasonal factors like a week, few months, quarters in a year, or a few years in a decade. Within those fixed time spans, different behaviors are observed in the target variable which differs from the rest. It needs to be modeled separately. In fact, seasonal components can be extracted out from the original series and modeled differently as said. It is defined as: where m is the length of the season, i.e. degree of seasonality. SARIMA is the process modeling where seasonality is mixed with ARIMA model. SARIMA is defined by (p,d,q)(P, D, Q) where P, D, Q is the order of the seasonal components. SARIMAX & ARIMAX So far, we have discussed modeling the series with target variable Y only. We haven’t considered other attributes present in the dataset. ARIMAX considers adding other feature variables also in the regression model. Here X stands for exogenous. It is like a vanilla regression model where recursive target variables are there along with other features. With reference to our problem statement, we can design an ARIMAX model with target variable percent_change_next_weeks_price at different lags along with other features like volume, low, close, etc. But, other features are considered fixed over time and don’t have lag dependent values, unlike the target variable. The seasonal version of ARIMAX is known as SARIMAX. Data Analysis We will start by analyzing the data. We will also learn some other concepts of time series along with the way. Let’s first plot AutoCorelation Function(ACF) and PartialAutoCorelation Function (PACF) using statsmodel library: import statsmodels.graphics.tsaplots as tsa_plots tsa_plots.plot_pacf(df['percent_change_next_weeks_price']) And then ACF: tsa_plots.plot_acf(df['percent_change_next_weeks_price']) ACF gives us the correlation between Y values at different lags. Mathematically covariance for this can be defined as: A cut-off in the ACF plot indicates that there is no sufficient relation between lagged values of Y. It is also an indicator of order q of the MA(q) process. From the ACF plot, we can see that ACF cuts off at zero only. So, q should be zero. PACF is the partial correlation between Y values, i.e., the correlation between Yt and Yt+k conditional on Yt+1,..,Yt+k-1. Like ACF, a cut-off in PACF indicates the order p of the AR(p) process. In our use case, we can see that p is zero. Decomposing components We will now, how many components are there in the time series. The first graph shows the actual plot, the second one shows the trend. We can see that there is no specific trend (upward/downward) of the percentage_change_next_weeks_price variable. But seasonal plot reveals the existence of seasonal components as it shows waves of ups & downs. Stationarity check — ADF test The characteristic equation of the AR(p) process is given by: From our previous discussion, we can say that an AR(1) process is stationary if ф < 1 and for AR(p), it should be ф1 + ф2+..+фp <1. So the if the solution of the characteristic equation is of the form: i.e, if it has unit-roots, then the time series is not stationary. We can formally test this with Augmented Dicky-Fuller test like below: As the p-value is less than 0.05, so the series is stationary. Building the model We will start building the model. Pre-processing We will do some pre-processing like converting categorical variable stock to numerical, removing the ‘$’ prefix from price attributes, and fill all null values with zero. We will also separate out the target & feature variables. We will split the dataset into training & test. TimeSeriesSplit incrementally splits the data in a cross-validation manner. We have to use the last X_train, X_test set. Auto-modeling We will use auto_arima from pmdarima library. It tries out with different SARIMAX(p,d,q)(P,D,Q) models and chooses the best one. We used X_train as exogenous variables and seasonal start order m as 2(i.e., start from m for trying out different seasonal orders). We got the output as below: So, what we analyzed in the Data Analysis section came out true. auto_arima checks stationarity, seasonality, trend everything. The best model has p=0, q=0 and as the model is stationary, d=0. But, as we saw it has some seasonal components, its order is (2,0,1). We will build the model with statsmodels and training dataset Model details (clipped): It shows all feature variable weights. Forecasting Before testing the model we need to discuss the difference between prediction & forecast. In a normal regression/classification problem, we use the term prediction very often. But, time series is a little different. Forecasting always considers lags into account. Here, to predict the value of Yt, we need value of Yt-1. And of course, Yt-1 will also be a forecasted value. So, it is a sequential & recursive process rather than random. Mathematically, for an AR(2) process, ^Yn and ^Yn-1 are the previous forecasted values. This way the chain continues. In the case of ARIMAX, feature values are not dependent on time, so when we do a forecast, we feed previous Y values along with the same feature X values. Now, its time to test the model: from sklearn.metrics import mean_squared_error mean_squared_error(result, Y_test) We will plot the actual vs predicted results. We can also see the error distribution. model.plot_diagnostics() plt.tight_layout() plt.show() Errors are normally distributed with zero mean and constant variance which is a good sign. Jupyter notebook can be found here: avisheknag17/public_ml_models Recently I authored a book on ML ( Stock Price Change Forecasting with Time Series: SARIMAX was originally published in Towards AI — Multidisciplinary Science Journal on Medium, where people are continuing the conversation by highlighting and responding to this story. Published via Towards AI
https://towardsai.net/p/machine-learning/stock-price-change-forecasting-with-time-series-sarimax
CC-MAIN-2022-21
refinedweb
1,658
57.57
Coding Style Part 2 Programming As I mentioned in the last post, I read the Office document that describes the internal coding conventions of id Software, and I thought I’d go over it. This will be pretty familiar stuff for coders, but if you don’t program this might give you a glimpse of how strange and fussy this discipline can get. I’m going go go through the standards guide and offer my own comments / explanations on why I think they’re interesting or important. The stuff in bold is from id, the rest is from me. “Use real tabs that equal 4 spaces.” Ah. The old “tabs vs. spaces” holy war. This one is probably as old as C itself, and may even pre-date it by reaching back into older languages. In C++, you’re expected to indent your code: When you indent code, does hitting the TAB key insert a single tab character that moves the cursor to the next tab stop, or does it insert the number of spaces required to reach the next tab stop? You can set it to work either way, but you had better make sure you’re on the same page as the other coders. Let’s say you’ve got tab stops set to 4 spaces. If you’re using actual spaces, then internally your file looks like: While tabs produce a file like this: See, compilers are primitive command-line programs. Sure, you might be writing code in a fancy windowed environment, but when you hit compile your source code is handed off to a text parser that’s blind to your decadent GUI interface. It has no idea what your tab stops are set to and it doesn’t care. It just counts whitespace characters, and as far as it’s concerned spaces, tabs, and all other non-printing characters look the same. So when it reports that it sees an error on line 5, column 3, (Variable ‘aa’ is undefined.) it doesn’t realize that for YOU the problem appears to be in column 12. I’ve never personally run into this problem, but I’ve read people complaining about it. I’m sure it all depends on what compiler / editor combination you’re using. So, spaces must be the way to go, right? Except… Spaces are fixed. If we use tabs then different coders can adjust things to suit their own preferences. I can set my tab stops to a sprawling eight. It will eat up a ton of horizontal space, but it will make the formatting very clear. Perhaps you will set your tab stops to a miserly two. You’ll have lots of information on screen at once, but you might need to squint a bit to follow the code when things get complicated. The point is, if we use tabs instead of spaces then we can each see the code the way we want to see it. You can even change how its displayed while you’re looking at it if you need more room or clarity. Then again, if you’ve formatted something specifically using a given tabstop arrangement, it might fly apart under a different one. These diagrams: …would collapse into a soup of random characters if they were built with tabs and someone viewed them with the wrong settings. In a less outlandish example, doing non-leading formatting like this: …with tabs will lead to tears and confusion when someone looks at it with different-sized tabs. Then again, you could always use tabs when formatting code and spaces when drawing ascii diagrams or arranging stuff into columns. Then again AGAIN, mixing tabs and spaces is a great way to drive someone mad. When you traverse over whitespace using the arrow keys, you do not want to be in a situation where you don’t know if you’re passing over spaces or tabs. Inserting or deleting spaces mixed with tabs can feel jumpy and random in a way that leads to typos and swearing. You’re free to argue the merits of tabs over spaces, but there’s no disputing that the worst thing to do is mix them together. Oddly enough, I always use spaces even though I think tabs are better. I spent a decade working in a Thou Shalt Not Tabbify environment, and now when I traverse tabbed code the cursor-jumps make my eye twitch. Like learning to play an FPS with mouse inverted, it would have been better if I’d learned the other way but changing now would be prohibitively difficult. I’d stumble through tabs if it was part of earning a paycheck, but if I’m working alone on my own projects I’d just as soon be comfortable. A non-programmer might ask, innocently enough, “Why don’t you just convert the tabs to spaces? One person uses spaces, then you can change them to tabs before you use the file. It should be easy to write a program to do it for you.” Indeed this is easy. In fact, I think most environments have this sort of thing built in. In some environments it may even have a handy keyboard shortcut. Just select all and click “auto-format” or “tabbify” or whatever. The problem here is that we use revision control (or whatever the kids are calling it these days) to manage changes to all of these multi-author text files. To make a change, you “check out” a file, similar to checking a book out of the library. You make changes to the file, and then you check it back in*. The system will then offer the other coders a nice summary of what changed. They can see, line-for-line, what was added, removed, or altered. However, if you re-tab an entire document, then every indented line of the file will be different, which will make it appear as though you re-wrote the whole thing from scratch. The resulting chaos would be worse than the problem you were trying to solve. * Note to coders: Do NOT nitpick me with merged changes, forks, branches, and other complexities. I’m just trying to throw a life preserver to the non-coders, and I don’t need you weighing it down with a cinderblock of source management theory. “The else statement starts on the same line as the last closing brace.” The guide is talking about doing this: Instead of this: I have no idea what madman decided the second was a good idea. In a complex block of code, this can throw away a ton of screenspace, and I don’t think it improves readability at all. This seems to be a page from the “things are less confusing if there’s less information on the screen” school of thought. This is an understandable sentiment in certain situations where you might find yourself daunted by walls of unbroken code, but let’s not build a coding convention to make all else statements take three whole lines of code for no reason. Spacing code out too much means you can’t see very much at any given time, which results in tunnel vision. I’ve always suspected the three-line else (like other screen-devouring conventions) was invented by people who got paid by the line of code. According to programming lore, there was a time when managers would measure programmer output by how many lines of code they’d written. This was ostensibly a real thing done by human beings who were at least smart enough to operate a necktie. The quote “Measuring programming progress by lines of code is like measuring aircraft building progress by weight,” is attributed to Bill Gates. This was back in the days when Microsoft was a smallish company and they were making software for IBM, who purportedly liked to measure progress in this way. “Pad parenthesized expressions with spaces” We’re talking about this: Instead of this: This is the first one that I’m really not crazy about, and given the type of work they do I’m kind of surprised id Software went this way. When you do a lot of 3D programming, you end up with a LOT of stuff in complex nested parenthetical expressions. x = (x-(y-(z*2))) Parenthesis usually get a space before the opening and after the ending, since you’re trying to isolate the stuff IN the parens from the stuff OUT of them. By adding another space on the inside, you’re basically forcing every open paren to take 3 characters and every close paren to take another 3. That’s a lot of horizontal space to spend, and I don’t see how it improves readability. I admit I’m straying into really wishy-washy subjective stuff here, but for me I think of parens as little capsules that enclose and isolate their contents. To take an extreme case: This is dense, and it’s probably hard to follow the parenthesis pairing… This makes the grouping a little easier to see… And this is just as visually confusing as the first example, except wider… Granted: When you’re nesting stuff three and four levels deep, it might be time to consider breaking the expression up onto multiple lines. Still, I’m using this extreme example to help illustrate what I’m talking about. So yes, coders will argue about this stuff. Serious people with fancy degrees will sit around and argue – at length – about how they can optimally arrange blank spaces. To be continued… Indent with tabs, align other things with spaces! I like to decide indentation-width per-computer! :-) …I can understand arrowing across tabbed indentation might seem weird after so many years, but I think most IDEs let you hit “home” multiple times to switch between the beginning of the line, and the first non-whitespace character. That’s probably the 1337 way to be navigating around the beginning of the line anyway. I like this approach. It really doesn’t matter with a good editor though. At the beginning of the line? Hit tab and the editor should take you to the appropriate column. I get very frustrated with editors that don’t understand that I don’t want to count whitespace characters. I prefer to even put my braces for the if on a separate line: if(a) { stuff } else { stuff } (the stuff would be indented) Yes, me to .. i don’t like if the “stuff” sticks to the conditional statement, especially if its a bit longer. Also, as another poster pointed out, the scope is defined much more clearly. Also, i would write the expression this way: num = (x - (y - (z * 2) + 2) * foo(z)); Working in a Java-based company where we had no *official* style guide except that your co-workers would yell at you if you did something they didn’t like, I would do all of my braces on the next line to write the code, then move them to the same line just before check-in to avoid getting yelled at. I much prefer “brace on the next line.” As mentioned, it’s way easier to line up your scope, and it also makes finding those errant missing/extra braces much easier. Pretty much this. The most common error I’ve always come across as a uni student is missing braces. Seperating all braces from their conditional statments, and then indenting braces has always been the best way to avoid it. It’s not so much about putting less on the screen but to show which things go together. Each indent should have a brace on the second line and the last. With big nesting, you eat through space, but it’s hard to stuff it up. The problem gets worse with nested operations, where you can get the scope of multiple things wrong with a couple of missing braces, or worse, misplaced ones. It’s especially a problem on a limited time frame (For some reason we have practical tests)-if you write a massive wallocode, you’ll have multiple operations mixed together, and bad things happen. If you have the time to set it out and set up classes and methods, it’s a lot less likely to go wrong. If you’re really having that much of a problem with missing braces, there are all manner of plugins out there that can help. The best ones for this that I’ve found will insert a balanced pair automatically whenever you type an opener, e.g. “{” -> “{}”, and will highlight unbalanced pairs for you so you can tell when something might be missing. This obviously isn’t going to help in your practicals (wat) where you don’t have control of the environment, but the indented brace on its own line is so very, very odd it’s probably not a great habit. I used to do mostly that, I’d still keep the ‘else’ on the same line as the closing brace for the preceding condition block though. To my eye it was easier to scan down at a certain indentation level for the } and { characters in the same column. Using “} else {” breaks that. On the other hand I’ve been warming to the “it’s a big love-in” (they all cuddle up) braces convention of late. I can’t stand opening braces having their own line…also no spaces before parenthesis in a function header…my typical methods would look like this: public void doSomething() { //do something } Parenthesis actually don’t determine whitespace for me…operators do…I have a blank before and after each (binary) operator…this expression would look like fscan’s num = (x – (y – (z * 2) + 2) * foo(z)); I do like empty lines to separate blocks of code that belongs together, but at the same time I am quite fond of single line expressions, like simple if statements or try/catch…and especially the conditional operator… I’m sorry but that’s why i find such coding style ugly. Of course, in your example, there isn’t much to show so it feels even worse. In a real project however, vertical spacing is just so important to me that I even have my second screen (lg w220p) flipped just to show as much code as possible without having to scroll. I’ve always done the same. I find that the most difficult part of reading code for me is tracing which blocks go together. And the indenting of the code isn’t always enough–especially with long lines of code and 80-character wide terminals/printouts. So braces ALWAYS go on their own line for me, to make it far easier to match top to bottom. I’m curious, though–do braces get the indent of the contained block, or the conditional statement? Mine don’t get the indent, but I’ve seen both ways, and never really had a major preference. I’m used to keeping conditional statements on a separate line from scope specifiers – i.e. I do this: It’s from an ease of debugging point of view. With a few keystrokes I can comment a line, and turning off an if or an else with a comment is very convenient. The else part is just a sad byproduct. In fact, I’ll sometimes write self-contained chunks of code in a scope anyway, just waiting for a potential condition: I do this too, and it makes it easier to copy blocks around when you restructure the code. Having the else attached to the start of the block is a nuisance. Also, I learned C right after learning Pascal, which uses “begin” and “end” keywords, not braces. Sticking those with your if and else keywords really looks ugly. Oh, right! That’s why I’m a “bracket on its own line” guy. It’s been so long since I did real programming, I forgot. I started with Pascal with begin and end, and I hate having starting braces on a new line. I doubted that having headless braces would work in PHP (my main language) but it does. Very useful though I don’t know if I can undo so many years of engrained habit. It also makes the code much easier to traverse if you can clearly see where each block begins and ends. I’m the worst coder ever, I go: if (shamus.length > 1000) { .return “Wow, you’re huge”; } else { .return “You’re not so big”; } Edit: According to Wikipedia, the “Compact control readability style”. This is actually what I usually do. Though I will TEMPORARILY move the brace down, after the “else”, when I count up the brace pairs at the end of the function, which you should always do before moving on to another thought. It’s so much faster to check this *now* when you can still remember how it *should* look, than later, that it is a worthwhile habit, even if you waste several minutes a day, every day. That line in Shamus’ example: “}else{” makes me want to chew glass. I can understand wanting to eschew } else { –but that’s just going too far. What’s your logic/reasoning for putting the opening brace on a new line? I’ve seen many projects/programmers use this style but it has never made sense to me and nobody has explained the appeal behind it. Don’t get me wrong, I almost always put the following code on a new line and indented with the exception of statements such as if ($foo) { return false; } or if ($bar) { continue; } otherwise it’s if ($foo) { //code here } The main reason, as far as I can tell, is that putting it on a new line indented to the statement that comes before it is so that you can easily tell where to expect a brace if you’re missing one – you know at what indentation it *should* be on. Personally, I still dislike it, though maybe that’s partially because I started with Python (Which enforces colons on the same line, IIRC), and also because most IDEs will tell you which brace is pointing to which brace. Yeeeeeeeees….. I actually used to put the { on the same line and } else { on one line and so on for a long time. I even used to put { after function names. I switched largely because of how (stupidly) MSVC auto-formats code. It turns out that if you need to split a branch condition to multiple lines, in MSVC the only way it’s going to get auto-formatted is “if (….\n\t…)” where \t stands for “one level of indent more than where the ‘if’ was indented” and unfortunately if you put { on the last line.. it means the conditional block will get this exact same indent. You could argue that such long branch conditions are “bad taste” but long function prototypes, C++ iterator loops, etc all lead to “wider than 80 characters” pretty fast. Since I like to regularly do “reformat whole file” (which in MSVC is C-a,C-k,C-f), my coding convention is generally constrained by “how can I get the pretties code from the auto-formatter”. If I was still using a more intelligent editor (used to use Vim) then I’d probably still use a more compact convention. So why auto-reindent? Well, if you make sure your code always auto-indents right, then doing a full reindent is a really quick way to check that all the blocks are like they should be. It’s also very helpful when refractoring, cut-pasting large blocks of code around… and as far as source control goes.. every half-decent VCS knows how to ignore white-space anyway. ;) Regarding the function parameter parenthesis.. from reading id’s published source code (eg Quakes), I’d bet they really mean to put spaces on parenthesis WHEN those are function call parenthesis (instead of just grouping). If you think about it that way, it actually makes sense: you can tell which parens are grouping and which ones are function call (in case you ever end up with an editor without parenthesis matching highlight). Hmm. I need to rethink how I organize my code. I’ve only taken a couple of high school java courses and am in a second college programming course, but I’ve already gained a few bad habits, especially in regards to braces. I’ve always visually separated my blocks using the braces, but that’s not the best way to do that – indentation is. (whether with spaces or tabs – tabs is easier for my lazy self, although I don’t yet have a strong opinion on the subject) You should be able to look at a program and just TELL where the different levels of the code are pretty easily without the braces, and as such they do not deserve their own line, since they’re more useful to the computer than to the user. The advantage of putting them on their own line or at the beginning of a line is in pairing them, but a decent compiler/IDE or whatever will tell you if you have too many or not enough and from that point it’s not as hard to check, especially if you’re consistent. Like you’ve said, consistency in programming style almost always trumps whether it’s actually BETTER. *looks at his own code sheepishly* if (a < b) { biggest = b; } else { biggest = a; } Er… It’s a private project and the important thing is that it’s readable to me? … Man, maybe it’s just me but why ‘biggest’ instead of ‘max’? (I think variable names might be the part where my coding style is technically the best) My braces might be worse than yours, though ! For what that code is doing, I don’t think max is a great descriptor myself. Max is okay for things like end conditions for loops, or other “end” indicators, but the example he gave looks more like a sorting condition. Biggest seems to suit it better. Just my opinion of course. :) well, ‘max’ and ‘biggest’ essentially mean the same thing. They’re both absolutes that imply more than 2 (otherwise you could use ‘bigger’). Not to mention I’m used to doing that exact check in a loop checking for a min/max value from a list/array. No they don’t, here greatest does mean what it contains while max would make me believe its the result of a counter of some sort. Not really. The way I look at it, the max and min are boundary conditions. Let’s say we’re talking about, oh, students in a classroom. The room has 40 seats, so the max is 40 and the min is, obviously, 0. But the largest class only has 34 students. So ‘biggest’ is 34. And if the smallest class just has 7 students, than ‘smallest’ is 7. biggest = max(a, b); In MySQL: MAX(), GREATEST(). Sooo annoying… I didn’t really pay much attention to the variable names. I was just copy/pasting what Shamus wrote, then moving the braces to show how I placed them. Variables are a whole ‘nother story, and I’d have to actually go look at my code to remember what I’ve been using. Well it could also be force of habit. For example, max is a built-in function in Python, IIRC. It also could refer to MaxMSP, the visual programming language. Wow. Whitesmiths. I’ve never run into anybody that actually used that. … As an advocate of the One, True Brace Style, I think I’m required to hate you with a passion that burns with the fire of a thousand suns. (Seriously, though, ew. Whatever works for you, man, but I couldn’t stand to read code that way. :) ) There’s a name for it? Oh, wonderful! None of the guides I was following actually used it. They had the opening braces up with the if and else statements, and I just moved them down because it was easier for me with the braces paired up visually on the same vertical line. ETA: It’s possibly because I’m a spacial thinker, and tend to need my visual environment arranged Just So, with all the elements aligning “properly” with each other as my brain dictates. I’ve always used the GNU style and while I’ve seen plenty of code in other formats, It’s always made my brain hurt Wow, I thought only RMS used GNU style. The GNU style is also an abomination unto Nuggan. But, hey, whatever works for you. I don’t even really use the K&R/1TBS; it’s just the closest approximation. Windows tends to favor Allman-style braces (yet another abomination unto Nuggan), which I find preferable to GNU or Whitesmiths, but still unnecessarily drawn out. I once likened it to “inserting dramatic pauses into a reading of a grocery list” (cf. a bad impression of William. Shatner as Captain! Kirk.) biggest = ( (a > b ) ? a : b); Sorry, couldn’t resist ;p (You could make a good case that your way is much easier to understand at first glance, but I think for something this simple I would usually prefer the ternary operator.) biggest = std::max(a,b); fixed :) I personally hate the ternary operator. I always forget if it’s (condition)?true:false; or (condition)?false:true;. So, for me, the ternary operator isn’t worth it because I have to google it every time to remember the order — too much work for ‘future me’ if the project lives. I find it easy to think of as a compressed if-then-else. I also don’t use it very often, because it’s not the easiest thing in the world to read. You know, that helps. For whatever idiot reason, I never noticed or connected that it’s in the same order as any if-else statement. Still, I don’t like it and will probably never use it, just in case future me forgets again. I have learned to love the conditional operator, because a few key compilers love it too, and will do a conditional move operation instead of a branch. Compilers are getting better at doing the same thing for actual ‘if’ statements now, but it doesn’t hurt to help them out a bit. Python’s ternary operator is spelled: expression if condition else expression where “if” and “else” are keywords. I think it does it very well. It’s also a good way to think of C’s. ? = “if” and : = “else”. The ternary operator is fine if the resulting expression is simple, clean, fits nicely on a single line, and doesn’t need to be repeated. Such an instance should be infrequent. In practice, this is not usually the case, and I’ve seen expressions with the ternary operator that ran for more than a hundred characters, were copied and pasted across dozens of lines, contained functions called on the results of functions called on the results of other functions … As near as I can tell, part of the problem is the novice discovers the ternary operator, and quickly goes looking for problems to solve with it, with it soon spread all over the codebase. It’s probably better not used at all than used badly. But, it’s best used sparingly. It should be part of your tool kit. A small part. If it’s any comfort: I use the exact same style. If it’s any comfort, Dinkumware’s implementation of STL (shipped with Microsoft C++ with some modifications) uses that style, or close. Just ignore all the deliberate _Uglyified_identifiers – they’re there to avoid macro clashes. OBJECTION! Edit: For some reason, the comment thing is eating my spaces :/ I write my else statements like this: . if (whatevs) { . stuff; . } . else { . otherStuff; . } The problem I have with putting the else on the same line as the if’s closing bracket is that it gives a false sense of scope; lines of code that are indented the same are generally in the same scope. The ‘else’ here should be indented the same as the ‘if’ because they’re part of the same block of code that is being processed sequentially. If you put the else after the closing brace the line IS indented the same, it’s just not the if and else that are aligned, it’s the if and the closing bracket. A little irritating, but technically true. I have a habit of aligning braces, though, which really wastes space. I also do my if/else blocks this way. I like how it makes the structure of each block consistent: [statement] { } It makes it a bit easier to skim the left-hand margin looking for branches. Of course, part of this is probably because I work with a lot of languages that have kind of wishy-washy block structure. If I can disregard everything after a closing brace, it helps in cases like this: $(foo).each(function() { ….//Do stuff }); Yeah, this is how I do if/else as well. I don’t think anyone would write code that looks like Shamus’ second example–if they’re putting the else block’s open bracket on its own line, they’re going to put the if statement’s bracket on its own line as well; whether people put opening brackets at the end of the statement or on its own line is a separate issue from the if/else thing. But yeah, the } else { bothers me, because I expect the if and else to line up visually. I have always used 2 spaced indents and avoided a tab, even though if you use tab ( you can change your tab for 2 and another developer for 4 and the ide will take care of it for you so a little more flexible developer to developer if everybody is on same page, rarely the case in large enough organization ). Few years ago, another developer altered my 2 spaced indents by changing every line making it difficult after a bug was discovered to find what line had changed because all lines were identified as changed in source safe. A pain in a large program when 2000+ lines of code all come up as changed to find one stupid bug. After that we decided what the teams standard was and checked all code out, modified it to the standard, and checked back in. One thing that drove me crazy about using Microsoft Visual Studio (At least the Web Designer) is that the default formatting for if then elses forced all of the curly-braces onto their own lines. I like having the open brace at the end of the same line that has the if if (x) { // Do stuff } else { // Do different stuff } but the default format for the C# in Visual Web Designer was if (x) { // Do Stuff } else { // Do Stuff } It drove me crazy. Also, a little note on the last part you wrote Shamus. From the wording (and the example given in the document) I’m not sure they intended for all parens to be padded, only the ones encapsulating arithmentic expressions. So it would clean it up a little at least to remove the padding around the function parameter list. num = ( x -( y – ( z * 2 ) + 2 ) * foo(z) ); It looks a little better I guess, but unless I was being paid by the keystroke, my ideal would be more like this: num = (x -(y – (z * 2) + 2) * foo(z)); Oh, fun with coding standards. Earlier this year I worked at a small company that was building some fairly big systems with only one programmer (until they hired me, I was programmer #2). One of the biggest problems I kept running into was that we were working with some legacy code left by a programmer who came before both of us, and his variable (and database entry!) names were rife with spelling errors and inconsistencies. He had some tables where every field was named something like txtFirstName or chkIsMarried, and then other tables where the field was just called “married”, and it was a string field, but a checkbox input on the form. Brutal stuff. Have you ever tried to do a bracketed switch? For some reason Visual Studio insists on doubling the indentation… switch (stuff) { ____case 0: ________{ ____________//The hell? ____________break; ________} ____case 1: ________{ ____________//Somebody stop this! ____________break; ________} ____default: ________{ ____________//Sigh… ________} } The best solution to ugly formatting with switch-statements is to not use them. They very often result in bugs when you forget the “break;”, they are not significantly faster after optimising, and they only work for very few types in most languages. Of course, Scala or C# have more powerful switch statements, but they also don’t bother so much with the pointless brackets to begin with. I’ll be honest, I don’t think I’ve ever seen a switch with brackets in it.. seems unnecessary. I always see them as switch(x){ case 0: …. break; case 1: …. break; default: …. break; } You need brackets if you’re going to declare variables inside of a case. On the compilers I’ve used, declaring a variable right after a case will generate and error or warning the it’s skipped by the switch statement, unless you enclose it. My programming was mostly limited to BASIC on a TI 99/4a with a theoretical 16KB of memory. Theoretical because we learned as we approached that limit that there was about 2KB used up by the system and unavailable for user programs. As our programs got semi-complicated we found ourselves having to eliminate every space, carriage return and meaningful character in variable names to get them to fit in the memory available. Readability was a luxury we could not afford. But then again, I was just a kid messing around. Actually, code size still matters for specific tasks … Minification You can tell your source code control software to run command line programs on files before checking them in – a leading space regularizing app can prevent those nasty “he reformatted the whole program” checkins And some source control systems already have hooks to do it automatically, so that whenever you check in a file it gets formatted to the server standard, and whenever you check out or diff a file it formats the file it sends you to the client standard. Great. Just hook in a brace-fixer with the whitespace fixer, and THE MAGIC OF TECHNOLOGY MAKES *EVERYBODY* HAPPY!!! The end of the style wars should be nigh. Except when you have multiple administration that can’t agree on the standard the server should stick with. If you have two admins who are both so anal that they fight over a storage convention that nobody should ever actually see, that probably isn’t the biggest problem you have… You would probably like Go. There’s a program that ships with the distribution called gofmt that ends all style wars forever. I usually, except when I don’t, have the too much whitespace way of: if(condition) { // stuff } else { // stuff } Though it’s more liking the braces being aligned with each other than anything else. I was going to say this, but you beat me to it. I like my code to be seen and read in easily-understood blocks, and I like to be able to determine which block a particular line is in by how deeply indented the line is, and which braces are supposed to match each other based on the same. This has the biggest impact when doing something that changes indentation levels; it’s easy to see based on indentation where the missing or extra brace is. If the brace is on the same line as the else, it’s harder to find. I have no objection to placing a single statement on the same line as the else, but if it’s a block it should start a new line for the brace. I recognize this has information density implications when displaying. I find that it helps justify the purchase of multiple large, high-resolution displays for use while writing code… Shamus, with the parens example at the end you cheated a little. The coding convetion calls for spaces before and after the parenthesis, but you also spaced operators! num = ( x - ( y - ( z * 2 ) + 2 ) * foo ( z ) ); should instead look like num = ( x- ( y- ( z*2 ) +2 ) *foo ( z ) ); Personally, I like the space before and after, as I don’t just want to isolate the code inside but draw attention to the fact that I am putting a paren there. Of course, too much whitespace can result in trying to draw attention to everything, which much like bolding everything in regular text, results in not drawing attention to anything. I wasn’t trying to cheat. The operator spacing was given in the id guide. Although the guide didn’t explicitly SAY operators HAD to be spaced, I sort of took it as a given. I was trying to show how spaces (or lack thereof) is more useful for visual grouping than “always put spaces around this thing”. What I meant was, if you’re talking about how whitespace affects parenthesis, then an apples to apples comparison should be used. If you’re going to bring up how the different rules interact, that would deserve a heading of it’s own. You can separate parenthesis by either applying whitespace to operators or to the parens themselves, and get different effects depending on how much whitespace you use. Using both rules will result in too much whitespace, as you show, but that’s a different argument. And this is exactly why there is a standard. We could go on for hours, not arguing about the question, but just discussing how the question should be phrased. Good gravy, it’s amazing anything ever gets done. If I wanted to be productive, I wouldn’t be browsing the web ;) …Or arguing about coding style, for that matter. :-P Not that I should talk. I’m sitting here disagreeing with most people in the comments on this post for one reason or another (…at least nobody has started talking about the GNU brace style, where braces are on separate lines, and indented *half* a level, while the code in each block is indented a full level), just trying not to say anything. :-) I’ve got to go for if(condition) { // code – indented! } else { // other code } .. with NO white space around parentheses. And, just to be picky, cinderblocks are quite light … maybe a paving slab? This is how I do it as well. I used to do {} on their own lines, but I’ve since switched to this. But no padding of parentheses after an if or a function/method name. OH LOOK, A LOT OF THE PEOPLE WHO FOLLOW SHAMUS’ BLOG ARE PROGRAMMERS, WEIRD! Augh! Magic numbers! You put that 0.25 in a proper global variable mister! Or here, use this: #define 0.25 0.618 I’m working in C# (which means I omitted a *lot* of unnecessary “object-oriented” boilerplate). #definedoesn’t work that way in C#. (Also, globals are evil, what is wrong with you.) I don’t like to create constants to hold magic numbers unless they’re used in multiple places. A constant that will be used once is a waste of time unless its meaning is obscure without hiding its value behind a descriptive variable name. Since this is (obviously) 25%–wait. Why am I justifying code style in a throwaway gag to somebody I’ve never met, on the Internet? Sometimes, I wish I could do away with braces entirely and just do indent-based enclosures. Such as the shortcut you can do with single line if/else statements, I wish you could do the same for the rest. No more stray squiggly lines, no more forgetting to close a statement somewhere: if (a>b) –bigger = a; –return bigger; else –bigger = b; –return bigger; more code; more code; where “-” is an indent. My preference, of course, is a single tab character, which is 4 spaces wide. I have an utter hatred of braces. But since I cannot do without for most cases, I use the “proper” way (lol) of making braces their own line. if (a>b) { –bigger = a; –return bigger; } else { –bigger = b; –return bigger; } Strange, but I just can’t stand to have a curly brace NOT be on it’s own line. *shrugs* Isn’t that what Python does, using whitespace to delimit blocks of code? That’s how Python works. I’m not crazy about it myself. When you’ve got a lot of different levels of indentation, it can get a little hard to parse without some good ol’ solid {} to look for. My advice is “try not to nest things too deep” which is good advice in any language. Personally I find Python’s syntax really nice 99% of the time, and the 1% of the time you need to do something deeply nested it’s going to be a pain to keep track of regardless. One of the many excellent bits in the Linux CodingStyle: In practice, nesting out to 4 or 5 is too convenient in the short term, and I also find it hard to visually interpret in Python, especially for long blocks when you “close” two or more blocks at once. I’ve got a Legacy VB6 (!) app at my workplace that’s pretty much the bread and butter of an ongoing project which has a colossal nest of if-thens that gets 7 layers deep at points and goes on for far too long. I leave it mostly alone because I just don’t want to have to unsnarl the bloody thing. The other question is: “Why do you use 80 characters to begin with?” If you can’t afford a bigger screen, you must be a really bad programmer, because that means you have not worked for money in a decade. It really annoys me when people start to put their 95 character lines on two lines, just so they fit the outdated 80 char limit. I’m not going to print it! Lack of emphathy, lack of imagination, and a gratuitous insult, all in one package. Some of us like to use our large modern monitors to display multiple columns of code, 2 or 3 wide, maybe even 4 or 5 if you’re a youngun’ with good eyes. When juggling a hairy ball of functionality across multiple files (ah, the joys of legacy code), it’s nice to be able to keep a big chunk of interrelated code on screen at once. It seems silly to have a 16:9/16:10 monitor and have big swaths of the screen be blank except for the occasional long line sticking out looking lost. If you’re working with other people with different fonts and font sizes, or even with yourself in the future who may have a larger monitor, there isn’t a single “correct” width. So if you’re picking an arbitrary number, why not 80, which is at least traditional and fits the default width the terminal window (since the context is Linux). I can’t speak for everyone, or even most languages, but my own experience, the experience of my coworkers, and presumably the experience of Linux kernal hackers, is 80 columns is reasonably comfortable for C and C++. Some lines will need wrapping, but it’s unusual enough to not be a bother. And, like any style rule, it should be applied with common sense and occasional exceptions. >Lack of emphathy, lack of imagination, and a gratuitous insult, all in one package. I’m efficient! There is no need to force everyone to use 80 characters and murder the traitors. Most lines will be shorter than that anyway, or rather, if all your lines are longer than that, then there are bigger problems about. On a 1920×1080 screen (why are we still using those?!) you can easily fit two columns of about 120 characters, which gives everyone 50% more space to work with. Just because *you* like to put four columns on a single screen doesn’t mean that I want to scroll through 5000 lines of code, and to make matters worse, if you enforce such a limit harshly, people will start to use shorter variable names (such as “val”), and then readability suffers very badly. Now, you can just get a second and third monitor, while I cannot make your vertical code any shorter, no matter what I do. So in the end, I ask for something that we can all work with, and you ask for everyone else to follow your personal needs. Who is the egoist now? Asking people to try to keep their lines somewhat short if possible is fine. Enforcing a strict rule is not. Enforcing a strict rule makes it so that I can configure my editor to open to exactly the correct width, and I can open 6 columns worth of editor across my 2 monitors without fidgety resizing. There’s also the readability aspect. Long lines are harder to read. Ever notice how newspapers are really wide, but they never use all that horizontal space? Humans aren’t good at tracking across very long lines. Of course there are exceptions, and sometimes you need long lines for your big table initializations. Nobody shoots style guide traitors these days, as long as they have good reasons. But given that there should be a hard limit on columns for normal code, 80 is a pretty good number. The example of wasting space on the else statement is really odd, mixing keeping the opening bracket on the same line as if, but not on the same line as else. Is that actually a style in use anywhere? I like braces to follow what is done for the actual function itself. So, if I’ve got: void MyFunction() { // do stuff } …then I’m going to do the same with every block inside the function (that is, put braces on separate lines). Above, someone suggested they like to indent their braces — that only looks good if you indent the braces for your function as well. Otherwise when you get to the end of your function, you will have a gap in closing braces that looks like you missed closing a block. That is: void MyFunction() { ….if (something) ……..{ ……..if (something else) …………{ …………//do something …………} ……..} // this line looks like it should have a brace // with this kind of brace-indenting. } I’m not advocating this, but I’ve seen the following rather odd style of saving vertical space: I’m all for saving vertical space, but that style of closing nested blocks in a single line just looks weird. Maybe the people who wrote that were lisp programmers? In lisp, the idiomatic style is to have all your closing parentheses on one line. But LISP, as far as I know, doesn’t have a style illustrated above in which the order of the braces is inverted from the nesting position. The brace that’s indented to look like it closes the outer block syntactically closes the innermost block. With Lisp people usually do what Piflik mentions below. What you’re showing has the advantage that if you’re unsure if you have the correct amount of parentheses, and the editor doesn’t highlight the respective ones, you can just do a vertical comparison if any has been missed. That is, instead of counting them all. And since Lisp has the opening parentheses before the function/whatever they’re easy to spot for comparison. Of course Common Lisp is a bit different from most other languages in that it’s normal to get huge piles of closing parentheses even in small programs. So it makes sense to try to shove them all in a single line. I wouldn’t do it in C or C-like languages though. You should be glad whoever did this at least had tabs between the closing braces…I once saw something like this: if (condition) { for (i = 0; i < n; ++i) { /* do something here */ if (some_result_condition) { /* do something else */ }}} That actually makes more sense because it doesn’t confuse indentation position with brace order; it’s more like nested parentheses in a mathematical expression. Throwing in tab positioning introduces weirdness because the visual position of the brace doesn’t correspond with the block that the brace actually closes. Without any question for me, the very worst indentation policy I know of is Java‘s: * Indent once? 4 spaces. * Indent twice? 8 spaces or a tab that must be set to 8 spaces! Otherwise, about the style of putting elsetake 3 lines, I worked in a company (of 4) where that was the policy, because about half of the team said they could not easily read the style where elsetakes 1 line (and as it was a very new company, screen space was already a non-issue). I do not think that tabs are superior – that is, tabs have the advantage of making indentation flexible for the reader at the cost of the author’s intended indent, which is fine for “regular” indentation, but falls apart in more complex cases. You already mentioned non-leading formatting; I will take the example of a function definition that must otherwise be line-wrapped: Here, you see that by using spaces, I can use my preferred style for long function parameter lists, which I think reads pretty naturally. You can achieve the same effect with tabs, but it means that you have to mix tabs and spaces (and in the right proportion) in order to achieve the same indentation reliably. You can get similar problems e.g. with long array initializers in C++. Ugh, I just told my story about a C dev who did this exact thing. I’ve been working in Java for years, and I’ve never even known they had this as a convention, nor worked anywhere it was actually used. This convention was used at Sun, I saw it only when stepping into the JRE’s source code. As I did not have my tabs set to 8 spaces, it was always extremely painful to read. It also looks like that recent releases of Java source code have cleaned that up to adopt an “all spaces” approach. Why exactly does it matter how far the second half of your function definition is indented? I don’t get that at all. It just has to be obvious that it’s not actually a new line of code. Give it a double indent and move on? Yay, VBScript! No worrying about brackets. It’s like Shamus is doing a whole series on reasons it’s nice to not be a “real” programmer… I “mostly agree” with the id doc, though I strongly disagree on leading tabs – the “ability to change tabstop” can only lead to pain when there are mixed tabs and spaces (see neothoron’s example above about long argument lists) whitespace around operators, ESPECIALLY the ! operator, which has a great habit of hiding when right next to other tokens. Comma operator is an exception: space after, but not before space inside parens – all of them, except when nested and clarity can be improved by tightening groups (though I’d rather recode) space after language operators like if, while and do, but not after function names (the idea being to make it clearer that it’s a function) coddled elses, of course. one thing I’d add is: handle exceptions early and return, rather than indenting your function 500 columns since I don’t do C++, several of the other conventions are meaningless to me. function foo( arg1, arg2 ) { ….if ( condition1 && ! condition2 ) { ……..code ……..return; ….} ….no “else” needed anymore, and thus we save a block level } bar = foo( arg1, arg2 ); Shamus, have you read Code Complete ()? It outlines WHY it’s good to use specific whitespace constraints in a team environment. Also, it’s just a great read. I like my braces on a separate line so I can easily track the start and end on a block of code, and I put whitespace around EVERYTHING in parentheses. I think putting the braces on the same line was so books could publish code and actually have it fit on a page, instead of sprawling over 2 or 3 pages. You only have so much screen. Modern screens are big, but once you have space for your compiler output/run output, and a few editor windows open for the several related pieces of code for your current problem, each individual code window isn’t that big, and the more code you can cram in there, the more you can quickly scan. That’s why I err toward the more dense end. Yes, modern screens are big, sadly they’re big on the wrong axis. I absolutely despise 16:9 screens. I have no idea what I’ll do when this 1280×1024 17-inch LCD dies. :-( I’d probably look for a 1600×1200 replacement, but those were super-expensive as well last time I tried to find one, plus I don’t know if this desk has room for it since they’re quite a bit wider and there are shelves to the left and right of the current LCD. But 5:4 is reasonable, and 4:3 is best. 8:5 (or “16:10” for people who can’t divide) and 16:9 are just annoying. I’ve got a great idea; let’s destroy a bunch of rows of pixels, to make the aspect ratio match theater screens, when that matching isn’t even required (since, first, not all DVDs are “widescreen”, and second, I watch them in a window anyway since I’m usually doing something else too). Also, get off my lawn. :-P 16:9 (16:10 is completely ridiculous) screen ratio has nothing to do with cinema (which is even flatter at 2,39/1), and more with ergonomics. It may be less “common” and it is, of course, annoying for some things (if you read a LOT of A4/letter paperwork on screen it may be easier to flip your screen around), but it fits our field of view (16/9 is 1.78, our field of view is about 1.75). For anyone using any application full screen, or in other ways using their whole screen, 16/9 is far easier on the eyes and neck than 4/3. Coding convention – and our mind’s problems with parsing – come from coding on long thin strips of paper, and reading long lists. Homo Sapiens worked in 3D, Sapiens Sapiens works almost exclusively in 2D (we get confused when we have to do hard 3D stuff), and more and more you see people using one dimension to the near-exclusion of the other one, even at that. Strictly speaking, a child reared to do it that way would be better at parsing text using both dimensions to the fullest, than one who’s used to “our” way would be at parsing “our” code. Anyway, I can nag, but for my work I use 5 displays: 4x 4/3 next ot each other, and a 16/9 on top of the two central screens…. I actually have a 16:10 monitor. (1920 x 1200 native.) I gather it’s sort of exotic, and 16:9 is considered the standard now. I had no idea when I bought it. Now it’s a bit annoying because some software is a bit dumb about adjusting to it. Games never pick my native mode, but usually detect that it’s NOT 16:9 and so assume 4:3. This ends up with many games starting up in some screwy mode where everything is boxed in, stretched, and off-center. RE Braces: I use a separate line for both the opening and closing. For the opening brace, at some point I developed a distaste for shortcuts in coding, and began to prefer visual clarity. I like my code blocks to be distinct from the code around them. As for the closing brace, I don’t mind seeing } else lines, but the else will disappear when I collapse the preceding block so it isn’t an option. Indeed, I find any discussion on braces and code blocks to be incomplete without consideration for the role collapsing plays in reading and writing code. #define CONSTANT int main() { int MainVar; MainVar= input_function(); if (MainVar>= 3) { MainVar= MainVar+ CONSTANT; } else { MainVar= MainVar- CONSTANT; } return 0; } int input_function() { int ReturnValue; scanf(“%i”,ReturnValue); return ReturnValue; } How is this from a readablity perspective? I’m new to most of the stuff where other people actually have to read the code. Not good. Lack of indentation led me to assume that was a single function in there until I read all the way through it. The problem with not indenting is less obvious for smaller blocks of code than more complex. I indented it, but it seems to have been eaten by the internet goat. Use [code] and [/code] (switch [] for ) to keep your whitespace. Two bugs I can see. The first: #define CONSTANT …needs to have a value, otherwise when it’s used later: MainVar= MainVar+ CONSTANT; the preprocessor will turn this into: MainVar= MainVar+ ; and the compiler will barf. If this gets fixed, then you’ll get a warning from gcc, and a crash when the program runs, from: scanf(“%i”,ReturnValue); because scanf needs a pointer to the variable it’s going to read into. ReturnValue can cast to a pointer (in C) since it’s an int, but it has a random value, so you’ll be overwriting a random address based on what scanf parses out of stdin. Should be: scanf(“%i”, &ReturnValue); (modulo whitespace). Sorry, nitpickery done. The formatting is … eh, whatever, it’s not worth discussing all that much I don’t think. :-) Wow, I thought I understood the post, but I can’t understand a lot of what’s in the comments. So, at my previous jobs, one of the developers who had been there longest had the worst indent style ever. See, vim offers, for no sane reason I can think of, a mixed space-and-tab indentation feature. The senior dev used that, and had every tab keypress insert four spaces, and a tab character used every eight. For the non-developers, this means that, for anyone with what is often the default, where every tab character is four spaces, when he would write: ….if (foo == 0) { ……..while (bar) { …………/* something something */ ……..} ….} It would appear to others as: ….if (foo == 0) { ….while (bar) { ……../* something something */ ….} ….} That’s assuming he actually tabbed things consistently, which he frequently did not, so code like: ….if (foo == 0) { ………..while (bar) { …./* something something */ ….} …………} …was equally likely. Thankfully my primary job was java, and I did not have to touch his code often. About else statements: They need to be separated from the close-brace of the if so I can see them; but they must have their own open-brace on the same line as the else, so as not to take up three lines. Why is this obvious and completely correct option not considered in your post? Two lines is the perfect solution; all else is heresy! You set up a false dichotomy between the unreadable one-line and the space-wasting three-line extremes. Clearly the only fair solution is to split the difference and use two! Anyway. That’s the way I like to write them. It’s consistent in that it treats the else and the if the same way, giving both a separate line with an open-brace. It also treats their respective close-braces the same, both on a single line. If I’m scanning a block of code, that single brace jumps out at the eye. Testing: Testing again, but now logged out: Hey! It works! The good new is: The plugin that lets me type raw code into a post also works in the comments, even for logged out users. The bad news is: This plugin runs on each and every comment to the site, which might be what’s slowing down posting. I’m going to have a good think over the implications of this. In the meantime, if you want to enter code of your own, place the code inside of: <pre lang=”c”> Type your code here! </pre> For the “lang” parameter: It supports a LOT of languages. Check here for the full list: Ah. I notice the less than symbols get turned into a literal & lt; That sucks. So, this plugin is running on all comments, but it can’t work right because of the way WordPress itself cleans submitted comments. Testing again, growmap disabled, wp-syntax enabled. Growmap, BB, and dice roller disabled. All plugins disabled. Alas, my testing didn’t pan out. I thought that perhaps the extreme slowdowns was being caused by infighting between plugins, but its not. So that mystery remains. Also, in the 4 minutes that growmap (the anti-spam checkbox) was down, I got 14 spam. Kind of amazing to realize those stupid bots are ALWAYS there, always flinging crap at my site. I suppose I could ban them by IP, but that always feels like an imprecise solution. I have a longish IP banlist (~140 addresses and ranges) of addresses that have tried to hack into my website/messageboard over the years. I’ve always felt that it would be more useful if shared among other website administrators. Of course, the list of bots is always growing. Maybe you could store the IPs that fail the “growmap” test lots of times in a short period and then auto-ban their IPs? Appending the IP to .htaccess is trivial. But of course that kind of thing is open to abuse. Like pulling weeds in a garden, taking care of trolls and spammers is a never-ending task for a webministrator. Good luck! Could it be the nested comment system and the color system that’s slowing things down? I’ve noticed that if I make a comment that it changes everyone else’s post color. Testing again, with plugin disabled. Growmap+bad behavior disabled. wp-syntax enabled. This post is so close to my heart. And my sanity. Testing with Supercache in place. Ah, joy. I too picked up the coding conventions I use today from the last major project I worked on. It was basically K&R style – braces on their own line for namespaces, classes, and methods, trailing braces for control flow constructs. Omitting the braces was strongly discouraged, though, so I guess (after reading the Wikipedia entry) it was closer to the so called “One True Brace Style”? Before that I did a lot of Java, so naturally gravitated to the Allman style. But I think K&R is better now, it’s a decent balance of whitespace where it matters – the “big stuff” – with more compact lines for the body. As for tabs… I’m a bit strange and use tab characters that indent 3 spaces. 2 is too little, 4 is too many. So, when I started reading your blog, I was a non-coder. You kinda got me into it. Now, I am suddenly studying something with computer science in it, typing in code as homework… Well, right now we are learning Haskell, were the formatting isn’t quite as important, seeing as most programs we have to do are two dozen lines at most. I just wanted to say, Shamus, you really explain programmer-stuff well to non-programmers. No style guild is going to be perfect for everybody. I have found Google C++ style guide to be a reasonable compromise. With comments to explain how a decision was made. It also offers cpplint program to check for correct style! cpplint, and those rules, have saved my bacon a whole bunch of times. I didn’t know we had made those public. Neat. :-) There appears to be this tool called “astyle” that can mangle your code into whatever configuration you find desirable. It seems a bit unnecessary to harange one another about whitespace when you can press Ctrl-Shift-U (or whatever) and have everything formatted just the way you like it regardless of what it looked like five seconds ago. For that matter, why don’t we have compilers that do this automatically when you load up a file? This whole thing feels like it should be a solved problem from a practical standpoint. You mean instead of storing the source code as raw text files use some sort of markup language similar to XML that tells the IDE how things are supposed to be organised? Sounds more trouble than it’s worth, assuming it would end up being useful. Or do you mean have the IDE, on-the-fly, interpret the whole code and show it in different formatting exactly the way the programmer wants it? Sounds like it would put too much strain on the hardware to make it practical. If the re-formatting happens when the file is opened it would likely take quite some time. And this is assuming the re-formatting code can actually make sensible decisions, which I find doubtful. Or literally changing the formatting when loading the files, thus either causing every single formatted line to be reported as “changed” or complicating the comparison software enough so it can tell when the changes shouldn’t do anything? Because that sounds like trouble in the long run. Either you’re a pain on all the other programmers because it takes ages to go through your files for changes or there’s a very real risk the code doing the comparison deciding a change doesn’t do anything when it does and also caused a bug. Any comments from programming gurus would help, but I think this problem is like the common cold. Something that feels like it should have been solved long time ago, but is in fact non-trivial to solve. There are two big problems with just using a tool to automatically reformat code to match your preferences: 1. Your preferences aren’t easily expressed with a tool. Real world code is complex and tends to have the occasional case where deviating from your preferred style is clearer than strict adherance. A tool can’t tell this and will merrily clobber the well formatted code. You might add additional markup to indicate “This is clever, don’t change,” but now you’ve added an additional detail to remember. 2. You absolutely need to ensure that code is in some sort of “canonical” format before adding it to a revision control system. If you don’t, you’ll make a commit that looks like it changed every line of your file, which makes tracking down changes very hard. Some revision control systems can do the formatting automatically, or at least can refuse commits not in the canonical form, but not all can. In particular, modern distributed revision control systems like Git tend to require everyone to be configured correctly to get that sort of behavior, and it’s easy to make a mistake. The reality is that style is nowhere near as important as programmers make it out to be, we just like arguing about it. I’m sure there are some genuinely terrible style guides out there, but most are Good Enough, and when you join a team you suck it up and adapt. I found there are also huge differences between languages. C++ is impossible to auto-format due to its complex templates, macros and general difficulty in parsing, whereas Java can be auto-formatted (Eclipse comes with a shortcut defined) without any issues. You can also export the settings for the formatter, and put it into the SVN. It’s a great feature. I’d just like to throw into the ring what F# does: it does not have braces nor begin/end. Blocks in conditional clauses or loops are defined by intendation, and F# only cares that a given block uses consistent intendation only inside that block (and only spaces are allowed): let someFunction x = if x > 0 then anotherFunction x weirdFunction x else strangeFunction x hilariusFunction x How many characters have been used on this page in order to argue spacing? Oh the irony. I really prefer to have braces on their own line, so that I can see them line up. // Easy To Read if(MightBeTrue) { Do Stuff; } // Harder to Read if(MightBeTrue){ Do Stuff; } In the second example, it’s not obvious at a glance what that closing brace is connected to. In the second example you have to actually READ the code to understand its structure, which drives me crazy. I think else statements deserve to be an exception to this rule, however. //This is just as clear as the alternative. if(MightBeTrue) { DoStuff; }else{ DontDoStuff; } You can still see the structure at a glance, and you don’t waste three lines for nothing. *giggles manically* So I have looked at quite a bit of HTML code, and I didn’t realise what the blank space was for. On some sites that I worked on there would be so much blank space that it would bloat the pages by quite a lot. In these I would do a find and replace, replacing all the double spaces ” ” with one space ” “. In some cases this would reduce the size of the file quite a lot. if (a==b) { do stuff; } else { do something else; } I have never liked extra padding around braces, it doesn’t make sense. Doesn’t make it more readable, it just needlessly bloats all your code, stretching your lines out. And like you, I see code in braces as being encapsulated, like a dingle object. I guess it all depends on how you visualize it I s’pose. I prefer my if’s and else’s on their own line, I don’t understand the }else{ stuff, it looks just wrong. I find it faster to locate an else when it is on it’s own line, rather than being a part of the initial if() statement. Something else that bothers me is needlessly using braces. Code like: if (a==b) c=1; makes more sense than if (a==b) { c=1; } …I just don’t see the sense in putting in the braces. If you can’t figure out at a glance that there are no other lines beyond the c=1; part due the lack of braces than you need help. Having a simple bit of code like that just looks cleaner in my opinion. In the end, what ever style is the most comfortable for you, stick to it, but do try and stick with one style at least and be consistent. P.S: You might want to inform people they can use “& nbsp;” for adding in extra spaces (no space between “&” and “n” though). That is what I did here, I used 3 of them. ;)
http://www.shamusyoung.com/twentysidedtale/?p=18383
CC-MAIN-2017-39
refinedweb
11,956
69.01
Search this blog PhpStorm Links Categories Tags2016 2018: 2019.1 Reworked Imports & Code Cleanup In 2019.1 release, we’ve reworked the inspections and intention actions related to importing namespaces and using FQN. We’ve also added some PHP-specific intentions to the Code Cleanup tool, which will help you automatically run safe transformations on the whole project … Posted in Cool Feature Tagged 2019.1, code cleanup, import, PHP CS Fixer, PHP_CodeSniffer, use Leave a comment Locating Dead Code When facing legacy code, probably the first thing you want to do is clean it up. PhpStorm 2019.1 can help you with this, particularly by finding and removing dead code with the new Unused Declaration inspection. It will carefully analyze … Posted in Cool Feature Tagged 2019.1, inspections 2 Comments … Posted in Cool Feature Tagged 2019.1, refactoring Leave a comment PhpStorm 2019.1.1 Preview 191.6707.42 We … Posted in Cool Feature Tagged 2019.1 6 Comments
https://blog.jetbrains.com/phpstorm/tag/2019-1/
CC-MAIN-2019-22
refinedweb
159
59.19
06 January 2009 20:15 [Source: ICIS news] By Ryan Hickman HOUSTON (ICIS news)--The environmental aspirations of US President-elect Barack Obama may help push up natural gas demand in 2009 even though a weak economy and slowing industrial appetite will weigh on gas prices, sources said. "Demand is down; people are being careful with their money and energy use," said R. Skip Horvath, president of the Natural Gas Supply Association (NGSA). In addition to residential energy users, industrial consumers of natgas – including chemical companies - are also reducing gas consumption because of the tightened lending environment and the ailing economy. "Until that gets resolved we will see that downward pressure on prices," Horvath said. Front-month NYMEX natural gas futures were around $5.85/m Btu as 2008 drew to close, down by around 18% from the end of 2007. The NGSA forecasts that in the next few months natural gas prices will stay flat compared with the 2006-2007 northern hemisphere winter. The poor economy, relatively mild winter weather and a 7.9% rise in gas production throughout the winter season will extend the slide in natural gas prices, the NGSA said. The ?xml:namespace> In December, 3M and Dow Chemical together closed more than 20 plants permanently and temporarily shut down about 180 units. The US Energy Information Administration (EIA) predicts that the Henry Hub spot price will average $6.25/1,000 cubic feet (mcf) in 2009, compared with $9.17/mcf in 2008 and $7.17/mcf in 2007. The administration expects consumption in the residential, commercial and electric power sectors will grow slightly in 2009 while industrial demand will weaken. The worldwide economic downturn should account for shrinking That outlook is in line with the NGSA's prediction that electricity generators' consumption of natural gas – which is relatively cleaner burning than its major competitor, coal – will continue to outpace industrial users, a shift in "That's going to be true this year and there's no turning back," Horvath said. "Natural gas is the choice fuel for power generation now." Horvath said need for natural gas could get an upward jolt if energy legislation pushed by the new Obama administration requires reduced carbon emissions. "Carbon reduction will look to us as an increase in demand," Horvath said. One possibility from the Obama White House and Democrat-controlled Congress is cap-and-trade legislation that would aim to force US greenhouse gas emissions to 80% below 1990 levels by 2050. A move to a cap-and-trade system would encourage electric utilities to switch from coal to natural gas. The electricity industry has gas-fuelled plants that function only during high-demand times of the year. Horvath said such "intermediate peaking units" can be quickly utilised to provide cleaner and faster power than coal or nuclear to the industrial sector. "Electric generation will turn to those intermediate peaking units because it's the only way to lower their carbon footprint," he said. "When we turn to the need for a quick carbon fix...they are sitting there not being used right now." Other gas-favourable measures could be a proliferation of renewable energy sources such as wind and solar power, and requiring public buildings to be more energy efficient. Such alternative sources of electricity would need natural gas as a back-up when there is insufficient sunlight or wind. "To provide a reliable electric delivery system with solar or wind you have to have a back up for that," said John Guy, deputy executive director of the National Petroleum Council, an advisory committee to the Secretary of Energy. "That typically is natural gas. I can't think of another fuel that will be available on that sort of interruptible basis other than natural gas," he said. Obama has named Nobel Prize-winning scientist Steven Chu, an advocate of renewables, as his choice to replace Samuel Bodman as energy secretary. On the supply front, the EIA projects domestic natural gas production to rise 0.9% in 2009, on top of an increase of 5.4% in 2008. Imports of natural gas could also potentially rise, although US prices are making investments in the liquefied natural gas (LNG) sector less attractive. There are only eight US LNG receiving terminals currently operating, but eight more North American terminals have been approved by the US Federal Energy Regulatory Commission and are under construction. Another 21 The EIA predicts that US imports of LNG in 2008 could total about 360bn cubic feet (bcf) and slightly more than 400 bcf in 2009. In the longer term, domestic production could also get a political boost. In October last year, a 27-year-old congressional ban on drilling in 85% of US offshore territories expired. Obama has said expansion of offshore drilling would be considered in his energy plan. "We don't think the Democrats will reinstate the full ban," Horvath said. Most of US natural gas imports come by pipeline from The most intriguing North American pipeline project moved another step forward in August when The 2,744km pipeline is expected to meet political opposition in both countries as the initiative moves forward in
http://www.icis.com/Articles/2009/01/06/9182094/outlook-09-us-carbon-politics-may-lift-natural-gas-demand.html
CC-MAIN-2013-48
refinedweb
862
52.49
Question Bharathsimha Re... · Sep 27, 2021 How to find a global's original namespace ? Potentially mapped from a different namespace go to post Thank you.. this works go to post Thank you for the details.. we have only one global that has node specific mapping while most of them are whole globals and the solution suggested above works. For the single node since our data part varies across namespace it is evident of being sourced locally. Thank you so much. go to post Hi, Not sure if this could help - Just adding here to see if this was tried - Hope you have the SQL Gateway connections setup from Management portal. In STUDIO -> Create a class and then try executing the following inside a Class Method / method ClassMethod DataWarehouseFetch() As %Status { Set Status=$$$OK Set SQL= 7 Set SQL(1)= "SELECT pt.Column1, pt.Column2, pt.Column3, pt.Column4, pt.Column5, pt.Column6, pt.Column7, pt.Column8, pt.Column9, pt.Column10, pt.Column11, tr.Column12, tr.Column13, tr.Column14, te.Column15, te.Column16, te.Column17, te.Column18, te.Column19, te.Column20, rs.Column21, rs.Column22, rs.Column23, re.Column24, re.Column25, re.Column26, tr.Column27, tr.Column28, re.Column29 " Set SQL(2)= "FROM Database1.Table1 tr " Set SQL(3)= "LEFT JOIN Database1.Table2 te on te.Column16 = tr.Column13" Set SQL(4)= "LEFT JOIN Database1.Table3 rs on rs.Column23 = tr.Column28" Set SQL(5)= "LEFT JOIN Database1.Table4 re on re.Column25 = rs.Column22" Set SQL(6)= "LEFT JOIN Database1.Table5 pt on pt.Column6 = re.Column26" Set SQL(7)= "WHERE pt.Column10 = '2018-10-30'" Set Status= Statement.%Prepare(.SQL) If $$$ISERR(Status) $$$ThrowOnError(" PREPARE issue") Set tResults=Statement.%Execute() While (tResults.%Next()){ //w tResults.Column1,! //w tResults.%Get("Column2"),! //w tResults.Get("Column2"),! } Quit Status }
https://community.intersystems.com/user/bharathsimha-reddy-jakka
CC-MAIN-2022-40
refinedweb
297
61.22
A queue/jobs system based on redis-limpyd, a redis orm (sort of) in python Project description redis-limpyd-jobs A queue/jobs system based on redis-limpyd (redis orm (sort of) in python) Where to find it: - Github repository: - Pypi package: - Documentation: Install: Python versions 2.7, and 3.5 to 3.8 are supported (CPython and PyPy). Redis-server versions >= 3 are supported. Redis-py versions >= 3 are supported. Redis-limpyd versions >= 2 are supported. You can still use limpyd-extensions versions < 2 if you need something older than the above requirements. pip install redis-limpyd-jobs Note that you actually need the redis-limpyd-extensions (min v1.0) in addition to redis-limpyd (min v1.2) (both are automatically installed via pypi) How it works redis-limpyd-jobs provides three limpyd models (Queue, Job, Error), and a Worker class. These models implement the minimum stuff you need to run jobs asynchronously: - Use the Job model to store things to do - The Queue model will store a list of jobs, with a priority system - The Error model will store all errors - The Worker class is to be used in a process to go through a queue and run jobs Simple example from limpyd_jobs import STATUSES, Queue, Job, Worker # The function to run when a job is called by the worker def do_stuff(job, queue): # here do stuff with your job pass # Create a first job, name 'job:1', in a queue named 'myqueue', with a # priority of 1. The higher the priority, the sooner the job will run job1 = Job.add_job(identifier='job:1', queue_name='myqueue', priority=1) # Add another job in the same queue, with a higher priority, and a different # identifier (if the same was used, no new job would be added, but the # existing job's priority would have been updated) job2 = Job.add_job(identifier='job:2', queue_name='myqueue', priority=2) # Create a worker for the queue used previously, asking to call the # "do_stuff" function for each job, and to stop after 2 jobs worker = Worker(queues='myqueue', callback=do_stuff, max_loops=2) # Now really run the jobs worker.run() # Here our jobs are done, our queue is empty queue1 = Queue.get_queue('myqueue', priority=1) queue2 = Queue.get_queue('myqueue', priority=2) # nothing waiting print queue1.waiting.lmembers(), queue2.waiting.lmembers() >> [] [] # two jobs in success (show PKs of jobs) print queue1.success.lmembers(), queue2.success.lmembers() >> ['limpyd_jobs.models.Job:1', 'limpyd_jobs.models.Job:2'] # Check our jobs statuses print job1.status.hget() == STATUSES.SUCCESS >> True print job2.status.hget() == STATUSES.SUCCESS >> True You notice how it works: - Job.add_job to create a job - Worker() to create a worker, with callback argument to set which function to run for each job - worker.run to launch a worker. Notice that you can run as much workers as you want, even on the same queue name. Internally, we use the blpop redis command to get jobs atomically. But you can also run only one worker, having only one queue, doing different stuff in the callback depending on the idenfitier attribute of the job. Workers are able to catch SIGINT/SIGTERM signals, finishing executing the current job before exiting. Useful if used, for example, with supervisord. If you want to store more information in a job, queue or error, or want to have a different behavior in a worker, it’s easy because you can create subclasses of everything in limpyd-jobs, the limpyd models or the Worker class. Models Job A Job stores all needed informations about a task to run. Note: If you want to subclass the Job model to add your own fields, run method, or whatever, note that the class must be at the first level of a python module (ie not in a parent class or function) to work. Job fields identifier A string (InstanceHashField, indexed) to identify the job. When using the (recommended) add_job class method, you can’t have many jobs with the same identifier in a waiting queue. If you create a new job with an identifier while an other with the same is still in the same waiting queue, what is done depends on the priority of the two jobs: - if the new job has a lower (or equal) priority, it’s discarded - if the new job has a higher priority, the priority of the existing job is updated to the higher. In both cases the add_job class method returns the existing job, discarding the new one. A common way of using the identifier is to, at least, store a way to identify the object on which we want the task to apply: - you can have one or more queue for a unique task, and store only the id of an object on the identifier field - you can have one or more queue each doing many tasks, then you may want to store the task too in the identifier field: “task:id” Note that by subclassing the Job model, you are able to add new fields to a Job to store the task and other needed parameters, as arguments (size for a photo to resize, a message to send…) status A string (InstanceHashField, indexed) to store the actual status of the job. It’s a single letter but we provide a class to help using it verbosely: STATUSES from limpyd_jobs import STATUSES print STATUSES.SUCCESS >> "s" When a job is created via the add_job class method, its status is set to STATUSES.WAITING, or STATUSES.DELAYED if it’is delayed by setting delayed_until. When it selected by the worker to execute it, the status passes to STATUSES.RUNNING. When finished, it’s one of STATUSES.SUCCESS or STATUSES.ERROR. An other available status is STATUSES.CANCELED, useful if you want to cancel a job without removing it from its queue. You can also display the full string of a status: print STATUSES.by_value(my_job.status.hget()) >> "SUCCESS" priority A string (InstanceHashField, indexed, default = 0) to store the priority of the job. The priority of a job determines in which Queue object it will be stored. A worker listen for all queues with some names and different priorities, but respecting the priority (reverse) order: the higher the priority, the sooner the job will be executed. We choose to use the “`”higher priority is better” way of doing things to give the possibility to always add a job in a higher priority than any other ones. Directly updating the priority of a job will not change the queue in which it’s stored. But when you add a job via the (recommended) add_job class method, if a job with the same identifier exists, its priority will be updated (only if the new one is higher) and the job will be moved to the higher priority queue. added A string (InstanceHashField) to store the date and time (a string representation of datetime.utcnow()) of the time the job was added to its queue. It’s useful in combination of the end field to calculate the job duration. start A string (InstanceHashField) to store the date and time (a string representation of datetime.utcnow()) of the time the job was fetched from the queue, just before the callback is called. It’s useful in combination of the end field to calculate the job duration. end A string (InstanceHashField) to store the date and time (a string representation of datetime.utcnow()) of the moment the job was set as finished or in error, just after the has finished. It’s useful in combination of the start field to calculate the job duration. tries A integer saved as a string (InstanceHashField) to store the number of times the job was executed. It can be more than one if it was requeued after an error. delayed_until The string representation (InstanceHashField) of a datetime object until when the job may be in the delayed list (a redis sorted-set) of the queue. It can be set when calling add_job by passing either a delayed_until argument, which must be a datetime, or a delayed_for argument, which must be a number of seconds (int or float) or a timedelta object. The delayed_for argument will be added to the current time (datetime.utcnow()) to compute delayed_until. If a job is in error after its execution and if the worker has a positive requeue_delay_delta attribute, the delayed_until field will be set accordingly, useful to retry a erroneous job after a certain delay. queued This field is set to '1' when it’s currently managed by a queue: waiting, delayed, running. This flag is set when calling enqueue_or_delay, and removed by the worker when the job is canceled, is finished with success, or finished with error and not requeued. It’s this field that is checked to test if the same job already exists when add_job is called. cancel_on_error You must be set this field to a True value (don’t forget that Redis stores Strings, so 0 will be saved as "0" so it will be True… so don’t set it to False or 0 if you want a False value: yo can let it empty) if you don’t want the job to be requeued in case of error. Note that if you want to do this for all jobs a a class, you may want to set to True the always_cancel_on_error attribute of this class. Job attributes queue_model When adding jobs via the add_job method, the model defined in this attribute will be used to get or create a queue. It’s set by default to Queue but if you want to update it to your own model, you must subclass the Job model too, and update this attribute. queue_name None by default, can be set when overriding the Job class to avoid passing the queue_name argument to the job’s methods (especially add_job) Note that if you don’t subclass the Job model, you can pass the queue_model argument to the add_job method. always_cancel_on_error Set this attribute to True if you want all your jobs of this class not be be requeued in case of error. If you let it to its default value of False, you can still do it job by job by setting their field cancel_on_error to a True value. Job properties and methods ident (property) The ident property is a string representation of the model + the primary key of the job, saved in queues, allowing the retrieval of the Job. must_be_cancelled_on_error (property) The must_be_cancelled_on_error property returns a Boolean indicating if, in case of error during its execution, the job must NOT be requeued. By default it will be False, but there is to way to change this behavior: - setting the always_cancel_on_error of your job’s class to True. - setting the cancel_on_error field of your job to a True value duration (property) The duration property simply returns the time used to compute the job. The return value is a datetime.timedelta object if the start and end fields are set, or None on the other case. run (method) It’s the main method of the job, the only one you must override, to do some tuff when the job is executed by the worker. The return value of this method will be passed to the job_success of the worker, then, if defined, to the on_success method of the job. By default a NotImplemented error is raised. Arguments: - queue: The queue from which the job was fetched. requeue (method) The requeue method allow a job to be put back in the waiting (or delayed) queue when its execution failed. Arguments: - queue_name=None The queue name in which to save the job. If not defined, will use the job’s class one. If both are undefined, an exception is raised. - priority=None The new priority of the new job. If not defined, the job will keep its actual priority. - delayed_until=None Set this to a datetime object to set the date on which the job will be really requeued. The real delayed_until can also be set by passing the delayed_for argument. - delayed_for=None A number of seconds (as a int, float or a timedelta object) to wait before the job will be really requeued. It will compute the delayed_until field of the job. - queue_model=None The model to use to store queues. By default, it’s set to Queue, defined in the queue_model attribute of the Job model. If the argument is not set, the attribute will be used. Be careful to set it as attribute in your subclass, or as argument in requeue or the default Queue model will be used and jobs won’t be saved in the expected queue model. enqueue_or_delay (method) It’s the method, called in add_job and requeue that will either put the job in the waiting or delayed queue, depending of delayed_until. If this argument is defined and in the future, the job is delayed, else it’s simply queued. This method also set the queued flag of the job to '1'. Arguments: - queue_name=None The queue name in which to save the job. If not defined, will use the job’s class one. If both are undefined, an exception is raised. - priority=None The new priority of the new job. Use the job’s actual one if not defined. - delayed_until=None The date (must be either a datetime object of the string representation of one) until when the job will remain in the delayed queue. It will not be processed until this date. - prepend=False Set to True to add the job at the start of the waiting list, to be the first to be executed (only if not delayed) - queue_model=None The model to use to store queues. See add_job and requeue. on_started (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called when the job is fetched by the worker and about to be executed (“waiting” status) Arguments: - queue: The queue from which the job was fetched. on_success (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called by the worker when the job’s execution was a success (it did not raise any exception). Arguments: - queue: The queue from which the job was fetched. - result The data returned by the execute method of the worker, which call and return the result of the run method of the job (or the callback provided to the worker) on_error (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called by the worker when the job’s execution failed (an exception was raised) Arguments: - queue: The queue from which the job was fetched. - exception: The exception that was raised during the execution. - traceback: The traceback at the time of the exception, if the save_tracebacks attribute of the worker was set to True on_skipped (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called when the job, just fetched by the worker, could not be executed because of its status, not “waiting”. Another possible reason is that the job was canceled during its execution (by settings its status to STATUSES.CANCELED) - queue: The queue from which the job was fetched. on_requeued (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called by the worker when the job failed and has been requeued by the worker. - queue: The queue from which the job was fetched. on_delayed (ghost method) This method, if defined on your job model (it’s not there by default, ie “ghost”) is called by the worker when the job was delayed (by settings its status to STATUSES.DELAYED) during its execution (note that you may also want to set the delayed_until of the job value to a correct one datetime (a string represetation of an utc datetime), or the worker will delay it for 60 seconds). It can also be called if the job’s status was set to STATUSES.DELAYED while still in the waiting list of the queue. - queue: The queue from which the job was fetched. Job class methods add_job The add_job class method is the main (and recommended) way to create a job. It will check if a job with the same identifier already exists in a queue (not finished) and if one is found, update its priority (and move it in the correct queue). If no existing job is found, a new one will be created and added to a queue. Arguments: - identifier The value for the identifier field. - queue_name=None The queue name in which to save the job. If not defined, will use the class one. If both are undefined, an exception is raised. - priority=0 The priority of the new job, or the new priority of an already existing job, if this priority is higher of the existing one. - queue_model The model to use to store queues. By default, it’s set to Queue, defined in the queue_model attribute of the Job model. If the argument is not set, the attribute will be used. Be careful to set it as attribute in your subclass, or as argument in add_job or the default Queue model will be used and jobs won’t be saved in the expected queue model. - prepend=False By default, all new jobs are added at the end of the waiting list (and taken from the start, it’s a fifo list), but you can force jobs to be added at the beginning of the waiting list to be the first to be executed, simply by setting the prepend argument to True. If the job already exists, it will be moved at the beginning of the list. - delayed_until=None Set this to a datetime object to set the job to be executed in the future. If defined and in the future, the job will be added to the delayed list (a redis sorted-set) instead of the waiting one. The real delayed_until can also be set by passing the delayed_for argument. - delayed_for=None A number of seconds (as a int, float or a timedelta object) to wait before adding the job to the waiting list. It will compute the delayed_until field of the job. If you use a subclass of the Job model, you can pass additional arguments to the add_job method simply by passing them as named arguments, they will be save if a new job is created (but not if an existing job is found in a waiting queue) get_model_repr Returns the string representation of the model, used to compute the ident property of a job. get_from_ident Returns a job from a string previously got via the ident property of a job. Arguments: - ident A string including the modele representation of a job and it’s primary key, as returned by the ident property. Queue A Queue stores a list of waiting jobs with a given priority, and keep a list of successful jobs and ones on error. Queue fields name A string (InstanceHashField, indexed), used by the add_job method to find the queue in which to store it. Many queues can have the same names, but different priorities. This name is also used by a worker to find which queues it needs to wait for. priority A string (InstanceHashField, indexed, default = 0), to store the priority of a queue’s jobs. All jobs in a queue are considered having this priority. It’s why, as said for the property fields of the Job model, changing the property of a job doesn’t change its real property. But adding (via the add_job class method of the Job model) a new job with the same identifier for the same queue’s name can update the job’s priority by moving it to another queue with the correct priority. As already said, the higher the priority, the sooner the jobs in a queue will be executed. If a queue has a priority of 2, and another queue of the same name has a priority of 0, or 1, all jobs in the one with the priority of 2 will be executed (at least fetched) before the others, regardless of the number of workers. waiting A list (ListField) to store the primary keys of job in the waiting status. It’s a fifo list: jobs are appended to the right (via rpush), and fetched from the left (via blpop) When fetched, a job from this list is executed, then pushed in the success or error list, depending if the callback raised an exception or not. If a job in this waiting list is not in the waiting status, it will be skipped by the worker. success A list (ListField) to store the primary keys of jobs fetched from the waiting list and successfully executed. error A list (ListField) to store the primary keys of jobs fetched from the waiting list for which the execution failed. delayed A sorted set (SortedSetField) to store delayed jobs, ones having a delayed_until datetime in the future. The timestamp representation of the delayed_until field is used as the score for this sorted-set, to ease the retrieval of jobs that are now ready. Queue attributes The Queue model has no specific attributes. Queue properties and methods first_delayed (property) Returns a tuple representing the first job to be ready in the delayed queue. It’s a tuple with the job’s pk and the timestamp representation of it’s delayed_until value (it’s the score of the sorted_set). Returns None if the delayed queue is empty. first_delayed_time (property) Return the timestamp representation of the first delayed job to be ready, or None if the delayed queue is empty. delay_job (method) Put a job in the delayed queue. Arguments: - job The job to delay. - delayed_until A datetime object specifying when the job should be put back in the waiting queue. It will be converted into a timestamp used as the score of the delayed list, which is a redis sorted-set. enqueue_job (method) Put a job in the waiting list. Arguments: - job The job to enqueue. - prepend=False Set to True to add the job at the start of the waiting list, to be the first to be executed. requeue_delayed_jobs (method) This method will check for all jobs in the delayed queue that are now ready to be executed and put them back in the waiting list. This method will return the list of failures, each failure being a tuple with the value returned by the ident property of a job, and the message of the raised exception causing the failure. Not that the status of the jobs is changed only if their status was STATUSES.DELAYED. It allows to cancel a delayed job before. Queue class methods get_queue The get_queue class method is the recommended way to get a Queue object. Given a name and a priority, it will return the found queue or create a queue if no matching one exist. Arguments: - name The name of the queue to get or create. - priority The priority of the queue to get or create. If you use a subclass of the Queue model, you can pass additional arguments to the get_queue method simply by passing them as named arguments, they will be saved if a new queue is created (but not if an existing queue is found) get_waiting_keys The get_waiting_keys class method returns all the existing (waiting) queues with the given names, sorted by priority (reverse order: the highest priorities come first), then by names. The returned value is a list of redis keys for each waiting lists of matching queues. It’s used internally by the workers as argument to the blpop redis command. Arguments: - names The names of the queues to take into accounts (can be a string if a single name, or a list of strings) count_waiting_jobs The count_waiting_jobs class method returns the number of jobs still waiting for the given queue names, combining all priorities. Arguments: - names The names of the queues to take into accounts (can be a string if a single name, or a list of strings) count_delayed_jobs The count_delayed_jobs class method returns the number of jobs still delayed for the given queue names, combining all priorities. Arguments: - names The names of the queues to take into accounts (can be a string if a single name, or a list of strings) get_all The get_all class method returns a list of queues for the given names. Arguments: - name The names of the queues to take into accounts (can be a string if a single name, or a list of strings) get_all_by_priority The get_all_by_priority class method returns a list of queues for the given names, ordered by priorities (the highest priority first), then names. Arguments: - name The names of the queues to take into accounts (can be a string if a single name, or a list of strings) Error The Error model is used to store errors from the jobs that are not successfully executed by a worker. Its main purpose is to be able to filter errors, by queue name, job model, job identifier, date, exception class name or code. You can use your own subclass of the Error model and then store additional fields, and filter on them. Error fields job_model_repr A string (InstanceHashField, indexed) to store the string representation of the job’s model. job_pk A string (InstanceHashField, indexed) to store the primary key of the job which generated the error. idenfitier A string (InstanceHashField, indexed) to store the identifier of the job that failed. queue_name A string (InstanceHashField, indexed) to store the name of the queue the job was in when it failed. date_time A string (InstanceHashField, indexed with SimpleDateTimeIndex) to store the date and time (to the second) of the error (a string representation of datetime.utcnow()). This field is indexed so you can filter errors by date and time (string mode, not by parts of date and time, ie date_time__gt='2017-01-01'), useful to graph errors. date DEPRECATED: this is replaced by date_time but kept for now for compatibility A string (InstanceHashField, indexed) to store the date (only the date, not the time) of the error (a string representation of datetime.utcnow().date()). This field is indexed so you can filter errors by date, useful to graph errors. time DEPRECATED: this is replaced by date_time but kept for now for compatibility A string (InstanceHashField) to store the time (only the time, not the date) of the error (a string representation of datetime.utcnow().time()). type A string (InstanceHashField, indexed) to store the type of error. It’s the class’ name of the originally raised exception. code A string (InstanceHashField, indexed) to store the value of the code attribute of the originally raised exception. Nothing is stored here if there is no such attribute. A string (InstanceHashField) to store the string representation of the originally raised exception. traceback A string (InstanceHashField) to store the string representation of the traceback of the originally raised exception (the worker may not have filled it) Error properties and methods datetime This property returns a datetime object based on the content of the date_time field of an Error object. Error class methods add_error The add_error class method is the main (and recommended) way to add an entry on the Error model, by accepting simple arguments that will be break down (job becomes identifier and job_pk, when becomes date and time, error becomes code and message) Arguments: queue_name The name of the queue the job came from. job The job which generated the error, from which we’ll extract job_pk and identifier error An exception from which we’ll extract the code and the message. when=None A datetime object from which we’ll extract the date and time. If not filled, datetime.utcnow() will be used. trace=None The traceback, stringyfied, to store. If you use a subclass of the Error model, you can pass additional arguments to the add_error method simply by passing them as named arguments, they will be save in the object to be created. collection_for_job The collection_for_job is a helper to retrieve the errors assiated with a given job, more precisely for all the instances of this job with the same identifier. The result is a limpyd collection, to you can use filter, instances… on it. Arguments: - job The job for which we want errors The worker(s) The Worker class The Worker class does all the logic, working with Queue and Job models. The main behavior is: - reading queue keys for the given names - waiting for a job available in the queues - executing the job - manage success or error - exit after a defined number of jobs or a maximum duration (if defined), or when a SIGINT/SIGTERM signal is caught The class is split in many short methods so that you can subclass it to change/add/remove whatever you want. Constructor arguments and worker’s attributes Each of the following worker’s attributes can be set by an argument in the constructor, using the exact same name. It’s why the two are described here together. queues Names of the queues to work with. It can be a list/tuple of strings, or a string with names separated by a comma (no spaces), or without comma for a single queue. Note that all queues must be from the same queue_model. Default to None, but if not set and not defined in a subclass, will raise an LimpydJobsException. queue_model The model to use for queues. By default it’s the Queue model included in limpyd_jobs, but you can use a subclass of the default model to add fields, methods… error_model The model to use for saving errors. By default it’s the Error model included in limpyd_jobs, but you can use a subclass of the default model to add fields, methods… logger_name limpyd_jobs uses the python logging module, so this is the name to use for the logger created for the worker. The default value is LOGGER_NAME, with LOGGER_NAME defined in limpyd_jobs.workers with a value of “limpyd-jobs”. logger_level It’s the level set for the logger created with the name defined in logger_name, default to logging.INFO. save_errors A boolean, default to True, to indicate if we have to save errors in the Error model (or the one defined in error_model) when the execution of the job is not successful. save_tracebacks A boolean, default to True, to indicate if we have to save the tracebacks of exceptions in the Error model (or the one defined in error_model) when the execution of the job is not successful (and only if save_errors is True) max_loops The max number of loops (fetching + executing a job) to do in the worker lifetime, default to 1000. Note that after this number of loop, the worker ends (the run method cannot be executed again) The aim is to avoid memory leaks become too important. max_duration If defined, the worker will end when its run method was called for at least this number of seconds. By default it’s set to None, saying there is no maximum duration. terminate_gracefully To avoid interrupting the execution of a job, if terminate_gracefully is set to True (the default), the SIGINT and SIGTERM signals are caught, asking the worker to exit when the current jog is done. callback The callback is the function to run when a job is fetched. By default it’s the execute method of the worker (which calls the run method of jobs, which, if not overridden, raises a NotImplemented error) , but you can pass any function that accept a job and a queue as argument. Using the queue’s name, and the job’s identifier+model (via job.ident), you can manage many actions depending on the queue if needed. If this callback (or the execute method) raises an exception, the job is considered in error. In the other case, it’s considered successful and the return value is passed to the job_success method, to let you do what you want with it. timeout The timeout is used as parameter to the blpop redis command we use to fetch jobs from waiting lists. It’s 30 seconds by default but you can change it to any positive number (in seconds). You can set it to 0 if you don’t want any timeout be applied to the blpop command. It’s better to always set a timeout, to reenter the main loop and call the must_stop method to see if the worker must exit. Note that the number of loops is not updated in the case of the timeout occurred, so a little timeout won’t alter the number of loops defined by max_loops. fetch_priorities_delay The fetch_priorities_delay is the delay between two fetches of the list of priorities for the current worker. If a job was added with a priority that did not exist when the worker run was started, it will not be taken into account until this delay expires. Note that if this delay is, say, 5 seconds (it’s 25 by default), and the timeout parameter is 30, you may wait 30 seconds before the new priority fetch because if there is no jobs in the priority queues actually managed by the worker, the time is in the redis hands. fetch_delayed_delay The fetch_delayed_delay is the delay between two fetches of the delayed jobs that are now ready in the queues managed by the worker. Note that if this delay is, say, 5 seconds (it’s 25 by default), and the timeout parameter is 30, you may wait 30 seconds before the new delayed fetch because if there is no jobs in the priority queues actually managed by the worker, the time is in the redis hands. requeue_times It’s the number of times a job will be requeued when its execution results in a failure. It will then be put back in the same queue. This attribute is 0 by default so by default a job won’t be requeued. requeue_priority_delta This number will be added to the current priority of the job that will be requeued. By default it’s set to -1 to decrease the priority at each requeue. requeue_delay_delta It’s a number of seconds to wait before adding back an erroneous job in the waiting queue, set by default to 30: when a job failed to execute, it’s put in the delayed queue for 30 seconds then it’ll be put back in the waiting queue (depending on the fetch_delayed_delay attribute) Other worker’s attributes In case on subclassing, you can need these attributes, created and defined during the use of the worker: keys A list of keys of queues waiting lists, which are listened by the worker for new jobs. Filled by the update_keys method. status The current status of the worker. None by default until the run method is called, after what it’s set to "starting" while getting for an available queue. Then it’s set to "waiting" while the worker waits for new jobs. When a job is fetched, the status is set to "running". And finally, when the loop is over, it’s set to "terminated". If the status is not None, the run method cannot be called. logger The logger (from the logging python module) defined by the set_logger method. num_loops The number of loops done by the worker, incremented each time a job is fetched from a waiting list, even if the job is skipped (bad status…), or in error. When this number equals the max_loops attribute, the worker ends. end_forced When True, ask for the worker to terminate itself after executing the current job. It can be set to True manually, or when a SIGINT/SIGTERM signal is caught. end_signal_caught This boolean is set to True when a SIGINT/SIGTERM is caught (only if the terminate_gracefully is True) start_date None by default, set to datetime.utcnow() when the run method starts. end_date None by default, set to datetime.utcnow() when the run method ends. wanted_end_date None by default, it’s computed to know when the worker must stop based on the start_date and max_duration. It will always be None if no max_duration is defined. connection It’s a property, not an attribute, to get the current connection to the redis server. parameters It’s a tuple holding all parameters accepted by the worker’s constructor parameters = ('queues', 'callback', 'queue_model', 'error_model', 'logger_name', 'logger_level', 'save_errors', 'save_tracebacks', 'max_loops', 'max_duration', 'terminate_gracefuly', 'timeout', 'fetch_priorities_delay', 'fetch_delayed_delay', 'requeue_times', 'requeue_priority_delta', 'requeue_delay_delta') Worker’s methods As said before, the Worker class in spit in many little methods, to ease subclassing. Here is the list of public methods: __init__ Signature: def __init__(self, queues=None, **kwargs): Returns nothing. It’s the constructor (you guessed it ;) ) of the Worker class, expecting all arguments (defined in parameters) that can also be defined as class attributes. It validates these arguments, prepares the logging and initializes other attributes. You can override it to add, validate, initialize other arguments or attributes. handle_end_signal Signature: def handle_end_signal(self): Returns nothing. It’s called in the constructor if terminate_gracefully is True. It plugs the SIGINT and SIGTERM signal to the catch_end_signal method. You can override it to catch more signals or do some checked before plugging them to the catch_end_signal method. stop_handling_end_signal Signature: def stop_handling_end_signal(self): Returns nothing. It’s called at the end of the run method, as we don’t need to catch the SIGINT and SIGTERM signals anymore. It’s useful when launching a worker in a python shell to finally let the shell handle these signals. Useless in a script because the script is finished when the run method exits. set_logger Signature: def set_logger(self): Returns nothing. It’s called in the constructor to initialize the logger, using logger_name and logger_level, saving it in self.logger. must_stop Signature: def must_stop(self): Returns boolean. It’s called on the main loop, to exit it on some conditions: an end signal was caught, the max_loops number was reached, or end_forced was set to True. wait_for_job Signature: def wait_for_job(self): Returns a tuple with a queue and a job This method is called during the loop, to wait for an available job in the waiting lists. When one job is fetched, returns the queue (an instance of the model defined by queue_model) on which the job was found, and the job itself. get_job Signature: def get_job(self, job_ident): Returns a job. Called during wait_for_job to get a real job object based on the job’s ident (model + pk) fetched from the waiting lists. get_queue Signature: def get_queue(self, queue_redis_key): Returns a Queue. Called during wait_for_job to get a real queue object (an instance of the model defined by queue_model) based on the key returned by redis telling us in which list the job was found. This key is not the primary key of the queue, but the redis key of it’s waiting field. catch_end_signal Signature: def catch_end_signal(self, signum, frame): Returns nothing. It’s called when a SIGINT/SIGTERM signal is caught. It’s simply set end_signal_caught and end_forced to True, to tell the worker to terminate as soon as possible. execute Signature: def execute(self, job, queue): Returns nothing by default. This method is called if no callback argument is provided when initiating the worker and call the run method of the job, which raises a NotImplementedError by default. If the execution is successful, no return value is attended, but if any, it will be passed to the job_success method. And if an error occurred, an exception must be raised, which will be passed to the job_error method. update_keys Signature: def update_keys(self): Returns nothing. Calling this method updates the internal keys attributes, which contains redis keys of the waiting lists of all queues listened by the worker. It’s actually called at the beginning of the run method, and at intervals depending on fetch_priorities_delay. Note that if a queue with a specific priority doesn’t exist when this method is called, but later, by adding a job with add_job, the worker will ignore it unless this update_keys method was called again (programmatically or by waiting at least fetch_priorities_delay seconds) run Signature: def run(self): Returns nothing. It’s the main method of the worker, with all the logic: while we don’t have to stop (result of the must_stop method), fetch a job from redis, and if this job is really in waiting state, execute it, and do something depending of the status of the execution (success, error…). In addition to the methods that do real stuff (update_keys, wait_for_job), some other methods are called during the execution: run_started, run_ended, about the run, and job_skipped, job_started, job_success and job_error about jobs. You can override these methods in subclasses to adapt the behavior depending on your needs. run_started Signature: def run_started(self): Returns nothing. This method is called in the run method after the keys are computed using update_keys, just before starting the loop. By default it does nothing but a log.info. run_ended Signature: def run_ended(self): Returns nothing. This method is called just before exiting the run method. By default it does nothing but a log.info. job_skipped Signature: def job_skipped(self, job, queue): Returns nothing. When a job is fetched in the run method, its status is checked. If it’s not STATUSES.WAITING, this job_skipped method is called, with two main arguments: the job and the queue in which it was found. This method is also called when the job is canceled during its execution (ie if, when the execution is done, the job’s status is STATUSES.CANCELED). This method remove the queued flag of the job, logs the message returned by the job_skipped_message method, then call, if defined, the on_skipped method of the job. job_skipped_message Signature: def job_skipped_message(self, job, queue): Returns a string to be logged in job_skipped. job_started Signature: def job_started(self, job, queue): Returns nothing. When the job is fetched and its status verified (it must be STATUSES.WAITING), the job_started method is called, just before the callback (or the execute method if no callback is defined), with the job and the queue in which it was found. This method updates the start and status fields of the job, then log the message returned by job_started_message and finally call, if defined, the on_started method of the job. job_started_message Signature: def job_started_message(self, job, queue): Returns a string to be logged in job_started. job_success Signature: def job_success(self, job, queue, job_result): Returns nothing. When the callback (or the execute method) is finished, without having raised any exception, the job is considered successful, and the job_success method is called, with the job and the queue in which it was found, and the return value of the callback method. Note that this method is not called, and so the job not considered a “success” if, when the execution is done, the status of the job is either STATUS.CANCELED or STATUS.DELAYED. In these cases, the methods job_skipped and job_delayed are called respectively. This method remove the queued flag of the job, updates its end and status fields, moves the job into the success list of the queue, then log the message returned by job_success_message and finally call, if defined, the on_success method of the job. job_success_message Signature: def job_success_message(self, job, queue, job_result): Returns a string to be logged in job_success. job_delayed Signature: def job_delayed(self, job, queue): Returns nothing. When the callback (or the execute method) is finished, without having raised an exception, and the status of the job at this moment is STATUSES.DELAYED, the job is not successful but not in error: it will be delayed. Another way to have this method called if its a job is in the waiting queue but its status was set to STATUSES.DELAYED. In this cas, the job is not executed, but delayed by calling this method. This method check if the job has a delayed_until value, and if not, or if an invalid one, it is set to 60 seconds in the future. You may want to explicitly set this value, or at least clear the field because if the job was initially delayed, the value may be set, but in the past, and the job will be delayed to this date, so, not delayed but just queued. With this value, the method enqueue_or_delay of the queue is called, to really delay the job. Then, log the message returned by job_delayed_message and finally call, if defined, the on_delayed method of the job. job_delayed_message Signature: def job_delayed_message(self, job, queue): Returns a string to be logged in job_delayed. job_error Signature: def job_error(self, job, queue, exception, trace=None): Returns nothing. When the callback (or the execute method) is terminated by raising an exception, the job_error method is called, with the job and the queue in which it was found, and the raised exception and, if save_tracebacks is True, the traceback. This method remove the queued flag of the job if it is no to be requeued, updates its end and status fields, moves the job into the error list of the queue, adds a new error object (if save_errors is True), then log the message returned by job_error_message and call the on_error method of the job is called, if defined. And finally, if the must_be_cancelled_on_error property of the job is False, and the requeue_times worker attribute allows it (considering the tries attribute of the job, too), the requeue_job method is called. job_error_message Signature: def job_error_message(self, job, queue, to_be_requeued_exception, trace=None): Returns a string to be logged in job_error. job_requeue_message Signature: def job_requeue_message(self, job, queue): Returns a string to be logged in job_error when the job was requeued. additional_error_fields Signature: def additional_error_fields(self, job, queue, exception, trace=None): Returns a dictionary of fields to add to the error object, empty by default. This method is called by job_error to let you define a dictionary of fields/values to add to the error object which will be created, if you use a subclass of the Error model, defined in error_model. To pass these additional fields to the error object, you have to override this method in your own subclass. requeue_job def requeue_job(self, job, queue, priority, delayed_for=None): Returns nothing. This method is called to requeue the job when its execution failed, and will call the requeue method of the job, then its requeued one, and finally will log the message returned by job_requeue_message. id It’s a property returning a string identifying the current worker, used in logging to distinct log entries for each worker. elapsed It’s a property returning, when running the time elapsed since when the run started. When the run method ends, it’s the time between start_date and end_date. If the run method is not called, it will be set to None. log Signature: def log(self, message, level='info'): Returns nothing. log is a simple wrapper around self.logger, which automatically add the id of the worker at the beginning. It can accepts a level argument which is info by default. set_status Signature: def set_status(self, status): Returns nothing. set_status simply update the worker’s status field. count_waiting_jobs Signature: def count_waiting_jobs(self): Returns the number of jobs in waiting state that can be run by this worker. count_delayed_jobs Signature: def count_delayed_jobs(self): Returns the number of jobs in the delayed queues managed by this worker. The worker.py script To help using limpyd_jobs, an executable python script is provided: scripts/worker.py (usable as limpyd-jobs-worker, in your path, when installed from the package) This script is highly configurable to help you launching workers without having to write a script or customize the one included. With this script you don’t have to write a custom worker too, because all arguments attended by a worker can be passed as arguments to the script. The script is based on a WorkerConfig class defined in limpyd_jobs.workers, that you can customize by subclassing it, and you can tell the script to use your class instead of the default one. You can even pass one or many python paths to add to sys.path. This script is designed to ease you as much as possible. Instead of explaining all arguments, see below the result of the --help command for this script: $ limpyd-jobs-worker --help Usage: worker.py [options] Run a worker using redis-limpyd-jobs Options: --pythonpath=PYTHONPATH A directory to add to the Python path, e.g. --pythonpath=/my/module --worker-config=WORKER_CONFIG The worker config class to use, e.g. --worker- config=my.module.MyWorkerConfig, default to limpyd_jobs.workers.WorkerConfig --print-options Print options used by the worker, e.g. --print-options --dry-run Won't execute any job, just starts the worker and finish it immediatly, e.g. --dry-run --queues=QUEUES Name of the Queues to handle, comma separated e.g. --queues=queue1,queue2 --queue-model=QUEUE_MODEL Name of the Queue model to use, e.g. --queue- model=my.module.QueueModel --error-model=ERROR_MODEL Name of the Error model to use, e.g. --queue- model=my.module.ErrorModel --worker-class=WORKER_CLASS Name of the Worker class to use, e.g. --worker- class=my.module.WorkerClass --callback=CALLBACK The callback to call for each job, e.g. --worker- class=my.module.callback --logger-name=LOGGER_NAME The base name to use for logging, e.g. --logger-base- name="limpyd-jobs.%s" --logger-level=LOGGER_LEVEL The level to use for logging, e.g. --worker-class=ERROR --save-errors Save job errors in the Error model, e.g. --save-errors --no-save-errors Do not save job errors in the Error model, e.g. --no- save-errors --save-tracebacks Save exception tracebacks on job error in the Error model, e.g. --save-tracebacks --no-save-tracebacks Do not save exception tracebacks on job error in the Error model, e.g. --no-save-tracebacks --max-loops=MAX_LOOPS Max number of jobs to run, e.g. --max-loops=100 --max-duration=MAX_DURATION Max duration of the worker, in seconds (None by default), e.g. --max-duration=3600 --terminate-gracefuly Intercept SIGTERM and SIGINT signals to stop gracefuly, e.g. --terminate-gracefuly --no-terminate-gracefuly Do NOT intercept SIGTERM and SIGINT signals, so don't stop gracefuly, e.g. --no-terminate-gracefuly --timeout=TIMEOUT Max delay (seconds) to wait for a redis BLPOP call (0 for no timeout), e.g. --timeout=30 --fetch-priorities-delay=FETCH_PRIORITIES_DELAY Min delay (seconds) to wait before fetching new priority queues, e.g. --fetch-priorities-delay=20 --fetch-delayed-delay=FETCH_DELAYED_DELAY Min delay (seconds) to wait before updating delayed jobs, e.g. --fetch-delayed-delay=20 --requeue-times=REQUEUE_TIMES Number of time to requeue a failing job (default to 0), e.g. --requeue-times=5 --requeue-priority-delta=REQUEUE_PRIORITY_DELTA Delta to add to the actual priority of a failing job to be requeued (default to -1, ie one level lower), e.g. --requeue-priority-delta=-2 --requeue-delay-delta=REQUEUE_DELAY_DELTA How much time (seconds) to delay a job to be requeued (default to 30), e.g. --requeue-delay-delta=15 --database=DATABASE Redis database to use (host:port:db), e.g. --database=localhost:6379:15 --no-title Do not update the title of the worker's process, e.g. --no-title --version show program's version number and exit -h, --help show this help message and exit Except for --pythonpath, --worker-config, --print-options,--dry-run, --worker-class and --no-title, all options will be passed to the worker. So, if you use the default models, the default worker with its default options, and to launch a worker to work on the queue “queue-name”, all you need to do is: limpyd-jobs-worker --queues=queue-name --callback=python.path.to.callback We use the setproctitle module to display useful informations in the process name, to have stuff like this: limpyd-jobs-worker#1566090 [init] queues=foo,bar limpyd-jobs-worker#1566090 [starting] queues=foo,bar loop=0/1000 waiting=10 delayed=0 limpyd-jobs-worker#1566090 [running] queues=foo,bar loop=1/1000 waiting=9 delayed=2 duration=0:00:15 limpyd-jobs-worker#1566090 [terminated] queues=foo,bar loop=10/1000 waiting=0 delayed=0 duration=0:12:27 You can disable it by passing the --no-title argument. Note that if no logging handler is set for the logger-name, a StreamHandler formatter will be automatically added by the script, given logs like: [19122] 2013-10-02 00:51:24,158 (limpyd-jobs) WARNING [038480] [test|job:1] job skipped (current status: SUCCESS) (the format used is "[%(process)d] %(asctime)s (%(name)s) %(levelname)-8s %(message)s") Executing code before loading worker class Sometimes you may want to do some initialization work before even loading the Worker class, for example, using django, to add django.setup() For this, simple override the WorkerConfig class: import django from limpyd_jobs.workers import WorkerConfig class MyWorkerConfig(WorkerConfig): def __init__(self, argv=None): django.setup() super(MyWorkerConfig, self).__init__(argv) And pass the python path to this class using the --worker-config option to the limpyd-jobs-worker script. Tests The redis-limpyd-jobs package is fully tested (coverage: 100%). To run the tests, which are not installed via the setup.py file, you can do: $ python run_tests.py [...] Ran 136 tests in 19.353s OK Or if you have nosetests installed: $ nosetests [...] Ran 136 tests in 20.471s OK The nosetests configuration is provided in the setup.cfg file and include the coverage, if nose-cov is installed. Final words you can see a full example in example.py (in the source, not in the installed package) to use limpyd_jobs models on your own redis database instead of the default one (localhost:6379:db=0), simply use the use_database method of the main model: from limpyd.contrib.database import PipelineDatabase from limpyd_jobs.models import BaseJobsModel database = PipelineDatabase(host='localhost', port=6379, db=15) BaseJobsModel.use_database(database) or simply change the connection settings: from limpyd_jobs.models import BaseJobsModel BaseJobsModel.database.connect(host='localhost', port=6379, db=15) The end. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/redis-limpyd-jobs/
CC-MAIN-2021-10
refinedweb
9,078
60.35
Developer Docs Style Guide This guide serves as an example and quick reference for the syntax and structure of this site. Below are examples of nearly all the available syntax using Markdown, Kramdown (a superset of Markdown), the table-of-contents UI widget, etc. Some portions of this page were adapted from the Kramdown Quick Reference. Conventions Site file naming The naming convention for files - guides, samples, etc - is lowercase with - used as spaces. This leads to more consistent and legible URLs. In addition, Google recommends one construct compound URL names with - and not underbars ( _). For example, consider the name of this guide: “Developer Docs Style Guide”. The file name for this guide is developer-docs-style-guide. Google treats a hyphen as a word separator, but does not treat an underscore that way. Google treats and underscore as a word joiner — so “red_sneakers” is the same as “redsneakers.” In general, when considering new file names for guides, please imagine you are saying “Guide to _____”. This often leads to verbs ending in “-ing”, the progressive or continuous verb tense. Obviously, this is not a hard-and-fast rule, but rather a convention. In general, when considering new file names for samples, please imagine you are saying “_____ sample”. As with guides, this is not a hard-and-fast rule, but rather a general convention. Division of content # Title become H1 headers and are reserved for the title of the page only. ## Header become H2 headers and are reserved for major sections within the page. ### Sub Header become H3 headers and are reserved for sub-sections within a major section. Fonts On Windows, this site attempts to use Segoe UI (font size: 16 px, font weight: 400, line height: 1.6) and falls back to Frutiger Linotype, Dejavu Sans, Helvetica Neue, Helvetica, Arial, in that order. On macOS, the site will (almost) certainly use Helvetica Neue or Helvetica (font size: 16 px, font weight: 300, line height: 1.6). The operating system-specific font weight is set in the footer using javascript. Paths & Filenames *Italics* are used to denote filenames, paths, and file extensions. For example: Navigate to C:\Program Files\Rhinoceros 5 (64-bit)\Plug-ins. Bold **Bold** (strong emphasis) is used in instructions to highlight critical instructions that are very important. Bold should be used sparingly as it is often present in headers as natural division of content. Spelling & Case The following spelling and case conventions are adopted on this site: - “Plugins” is not hyphenated unless it refers to a place in the Rhino UI where it is hyphenated. - “openNURBS” (not OpenNURBS, nor opennurbs, nor oPeNnURBs) unless it refers to a namespace in code where it is capitalized, or a path where it is not. Images & Screenshots When feasible, it is best to use the .svg vector format for images, especially for diagrams. When using bitmap images, the preferred format is .png, but any browser-friendly bitmap format will work. When capturing screenshots, consider that many people have high-DPI (aka: “Retina”) displays. Please capture all screenshots on a high-DPI display. See the Text Modifiers > Images section of this guide for more information on inserting images. Headers Headers demarcate major sections of the page, guide, etc. Headers are created like this: ## Headers The example above is an H2 header. Creating a header automatically creates an #anchor tag in the generated html. For headers with multiple words, Kramdown lowercases all the words and adds dashes for spaces. For example, if we had a header like this: ## All Your Base Are Belong to Us the resulting html anchor tag would be: #all-your-base-are-belong-to-us Sub Headers Sub Headers demarcate sub-sections a major section underneath a header. Sub Headers are created like this: ### Sub Header The example above is an H3 header, which we are calling a “Sub Header.” Just like with H3 headers, H3 headers also create an #anchor tag in the generated html. Table of Contents The UI-widget to the left of this column is a the Table of Contents (TOC) for this page. If you are authoring a page that requires a TOC, you can generate one automatically by using a TOC-enabled layouts (see How This Site Works for more information). To get a TOC-enabled templates to generate the TOC automatically from the H1, H2 and H3 headers. For example, to get the main title to show up in the TOC, you would type this: # The Title To get a Header to show up in the TOC, you would type this: ## Cool Header To get a Sub Header to show up in the TOC, you would type this: ### Sweet Sub Header Note: TOCs are only generated from H1, H2, and H3 headers…H4 (and smaller) headers are ignored by the TOC-enabled templates. Structural Elements Paragraphs Consecutive lines of text are considered to be one paragraph. You must add a blank line between paragraphs. Block Quotes A blockquote is started using the > marker followed by an optional space; all following lines that are also started with the blockquote marker belong to the blockquote. You can use any block-level elements inside a blockquote: > This is a sample block quote > > >Nested blockquotes are also possible. Yields: This is a sample block quote Nested blockquotes are also possible. Code Blocks To create a code block, surround the code with three back-ticks, followed by a language abbreviation. For example: ```cs …followed by the code…; } …and finally closed by three back-ticks. The abbreviation after the first set of back-ticks is the language code for syntax highlighting. We are using a syntax highlighting plugin called highlight.js. Many languages are supported. The most common language abbreviations used on this site are: csis C# vbnetis Visual Basic pythonis Python cppis C/C++ A complete list of language aliases can be found in the individual source files for highlight.js. Line numbering is also available for code blocks. Simply add the {: .line-numbers} tag after the codeblock terminator ```; } Horizontal Rules Horizontal rules (lines) are created by using three dashes: --- You can an example of one of these right here… Lists You can create ordered lists and unordered lists. Ordered Lists Ordered lists are created by typing 1. at the start of a line, like this: This is an ordered list: 1. Item one. 1. Item two. 1. Item three. yields: This is an ordered list: - Item one. - Item two. - Item three. Nested ordered lists are also possible. For example: This is a nested ordered list: 1. Do item one. 1. Item one subtask one. 1. Item one subtask two. 1. Do item two. 1. Do item three. yields: This is a nested ordered list: - Do item one. - Item one subtask one. - Item one subtask two. - Do item two. - Do item three. Unordered Lists Unordered lists (bullet lists) are created using the dash (-) symbol at the beginning of a line: This is a bullet list: - Item one - Item two - Item three yields: This is a bullet list: - Item one - Item two - Item three. Here is the syntax for a simple table: | A simple | table | | with multiple | lines| yields: More complex tables can be added like this: | Header1 | Header2 | Header3 | |:--------|:-------:|--------:| | cell1 | cell2 | cell3 | | cell4 | cell5 | cell6 | |---- | cell1 | cell2 | cell3 | | cell4 | cell5 | cell6 | |===== | Foot1 | Foot2 | Foot3 {: rules="groups"} yields: HTML Elements Kramdown allows you to use block-level HTML tags ( div, p, pre, etc). Here is an example of using HTML elements: <div style="float: right"> Something that stays right and is not wrapped in a para. </div> {::options parse_block_html="true" /} <div> This is wrapped in a para. </div> <p> This can contain only *span* level elements. </p> yields: This is wrapped in a para. This can contain only span level elements.: > A nice blockquote {: title="Blockquote title"} yields: A nice blockquote Block attributes are used to generate the classes for the TOC. Warnings Warnings are used in text to call out major traps, gotchas, or caveats in guides. HTML is required to create warnings. For example: <div class="bs-callout bs-callout-danger"> <h4>WARNING</h4> <p><b>Early-adopters</b>: the following steps will <b>NOT</b> work with the currently released Rhinoceros (5.x.x). You will need to use WIP version of Rhinoceros.</p> </div> yields: WARNING Early-adopters: the following steps will NOT work with the currently released Rhinoceros (5.x.x). You will need to use WIP version of Rhinoceros. Text Modifiers Emphasis Emphasis (bold and italic) can be added to text by surrounding the text with asterisks: For example: I like *my* coffee **bold**. yields: I like my coffee bold. Links Simple Links A simple link can be created by surrounding the text with square brackets and the link URL with parentheses: This is a [link]() to the Rhino 3D homepage. yields: This is a link to the Rhino 3D homepage. You can also add title information to the link: A [link]( "Rhino 3D homepage") to the homepage. yields: There is another way to create links which does not interrupt the text flow. The URL and title are defined using a reference name and this reference name is then used in square brackets instead of the link URL: A [link][rhino3d homepage] to the homepage. [rhino3d homepage]: "Modeling tools for designers" yields: If the link text itself is the reference name, the second set of square brackets can be omitted: A link to the [Rhino3D homepage]. [Rhino3D homepage]: "Modeling tools for designers" yields: A link to the Rhino3D homepage. Anchor Links As discussed above, Headers and Sub Headers automatically create anchors in the resulting rendered html output. You can link to any anchor within a page using the hash # symbol in a normal link. For example: [Sub Headers](#sub-headers) automatically create anchors in the resulting rendered html output yields the sentence fragment shown above. To create new anchors within the site, you can use html inline. For example: <a id="top"></a> was added to the top of this page. Internal Links If you’re linking to another part of the Devloper Docs then make sure you use the {{ site.baseurl }} tag. For example: [Guides]({{ site.baseurl }}/guides/) Images Images can be created in a similar way to links: just use an exclamation mark before the square brackets. The link text will become the alternative text of the image and the link URL specifies the image source: yields: Note: Use the site.baseurl macro. See the source of this page for this section for an example. Inline Code Text phrases can be easily marked up as code by surrounding them with back-ticks: To write a line to the command line use the `Rhino.RhinoApp.WriteLine` method. yields: To write a line to the command line use the Rhino.RhinoApp.WriteLine method. Footnotes Footnotes can easily be used in Kramdown. Just set a footnote marker (consists of square brackets with a caret and the footnote name inside) in the text and somewhere else the footnote definition (which basically looks like a reference link definition): This is a text with a footnote[^1]. [^1]: This is an example of a footnote. yields: This is a text with a footnote1. Abbreviations Abbreviations will work once you add an abbreviation definition. So you can just write the text and add the definitions later on. For example: For optimal code reuse, use the MVC paradigm. *[MVC]: Model View Controller yields: For optimal code reuse, use the MVC paradigm. Abbreviations are case-sensitive. HTML Elements HTML is not only supported on the block-level but also on the span-level: This is <span style="color: red">written in red</span>. yields: This is written in red.: This is *red*{: style="color: red"}. yields: This is red. MathJax & LaTeX Kramdown has support for LaTeX to PNG rendering via MathJax. For example: $$y = {\sqrt{x^2+(x-1)} \over x-3} + \left| 2x \over x^{0.5x} \right|$$ yields: See the MathJax basic tutorial and quick reference on StackExchange.
http://developer.rhino3d.com/guides/general/developer-docs-style-guide/
CC-MAIN-2017-26
refinedweb
2,005
64.51
Unit Testing Framework The Unit Testing Framework supports unit testing in Visual Studio. Use the classes and members in the Microsoft.VisualStudio.TestTools.UnitTesting namespace when you are coding unit tests. You can use them when you have written the unit test from scratch or are refining a unit test that was generated from code you are testing.. Attributes Used to Establish a Calling Order A code element decorated with one of the following attributes is called at the moment you specify. For more information, see Anatomy of a Unit Test. For assemblies AssemblyInitialize and AssemblyCleanup are called right after your assembly is loaded and right before your assembly is unloaded. For classes ClassInitialize and ClassCleanup are called right after your class is loaded and right before your class is unloaded. For test methods Attributes Used to Identify Test Classes and Methods Every test class must have the TestClass attribute, and every test method must have the TestMethod attribute. For more information, see Anatomy of a Unit Test. The following attributes and the values assigned to them appear in the Visual Studio Properties window for a particular test method. These attributes are not meant to be accessed through the code of the unit test. Instead, they affect the ways the unit test is used or run, either by you through the IDE of Visual Studio, or by the Team System test engine. For example, some of these attributes appear as columns in the Test List Editor and the Test Results window, which means you can use them to group and sort tests and test results. One such attribute is TestPropertyAttribute, which you use to add arbitrary metadata to unit tests. For example, you could use it to store the name of a test pass that this test covers, by marking the unit test with [TestProperty("TestPass", "Accessibility")]. Or, you can generate a unit test for a private method. This generation creates a private accessor class, which instantiates an object of the PrivateObject class. The PrivateObject class is a wrapper class that uses reflection as part of the private accessor process. The PrivateType class is similar, but is used for calling private static methods instead of calling private instance methods.
http://msdn.microsoft.com/en-us/library/ms243147
CC-MAIN-2014-42
refinedweb
369
61.36
The .Net Compact Framework 3.5 Power Toys include a new utility called the NetCF Configuration Tool. The Configuration Tool is a diagnostic tool. You won't typically use it in the course of everyday development, but it will come in handy for tasks like diagnosing failures related to device configuration and authoring configuration files. While the tool isn't targeted at your application's end users, they will be able to use it with your help should you need them to provide you with information needed to debug a problem remotely. For this reason, the Configuration Tool runs directly on the mobile device instead of on a desktop machine connected to a device. After describing how to install the tool, I'll discuss its four main functional areas: The Configuration Tool isn't automatically installed on the device when you install the Power Toys. Instead, you must copy it to your device manually. Fortunately there's only one file to copy: NetCFcfg.exe. This file is OS and processor-specific so you'll find it under the appropriate WindowsCE directory in the .Net Compact Framework SDK. I'm running a WindowsMobile 5.0 device, so the Configuration Tool executable for my device is in: C:\Program Files\Microsoft.NET\SDK\CompactFramework\v3.5\WindowsCE\wce500\armv4i on my desktop machine. It doesn't matter which directory you put the file in on your device. I usually put it in the \windows directory but you can deploy the file to any location you please. The "About tab" describes which versions of NetCF are installed on a device. This is the tab that is displayed when you first launch the Configuration Tool: It's not unusual to have more than one version of NetCF installed on a device. Most all Windows Mobile and WindowsCE devices come with a version of NetCF in ROM. Another version is often installed in RAM to support an application that requires a newer version of NetCF than the version that came with the device. When multiple versions of NetCF are installed, it's not always obvious which version is being used to run your application. I've seen several cases where confusion results because an application is running with a different version than expected (see "promoting an application" for a general description of the rules used to determine which version of NetCF will be used to run an application). For example, remote tools like the CLRProfiler and Remote Performance Monitor launch an application on device in order to profile it or gather other performance statistics. If your application is launched with a version of NetCF other than the one the tool is expecting, the application may start but the tool won't be able to gather the diagnostic information it is looking for. Knowing which versions of NetCF are installed on the device, which version was used to build your application, and the rules used to determine which version will be used to run your application can help you determine whether any unexpected behavior you are seeing results from the "wrong version of NetCF" problem. The GAC tab lists the assemblies installed in the Global Assembly Cache: It helps to know the contents of the GAC when diagnosing assembly load failures. It may be you think an assembly is in the GAC when it isn't, or NetCF may load an assembly from the GAC when you don't expect it to. If you encounter a failure to load an assembly, use Loader Logging to find out why, then check the contents of the GAC using the Configuration Tool if the failure looks related to the GAC. In a previous post I described how to use device.config to cause all applications on a device to run with a given version of NetCF. The Configuration Tool enables you to edit device.config using a GUI rather than having to modify the XML by hand. The Device Policy tab let's you choose a version of NetCF to run all "unconfigured" applications on the device. By "unconfigured" I mean all applications that do not have an application configuration file that contains a supportedRuntime element. In addition to the installed versions of NetCF, you can also select the value "Default". Doing so will remove all supportedRuntime elements from device.config causing the device to revert to its default behavior for selecting a runtime. Keep in mind that not all values you select on this tab are "valid" for all applications. For example, if I chose 1.0.4292 in the example above, no applications built with later versions of NetCF would run (unless they had configurations files of their own that overrode the device-wide setting). In addition to specifying device-wide policy using the Device Policy tab, you can also specify which version of NetCF should be used to run a specific application by using the Application Policy tab. The top combo box on the Application Policy tab is pre-populated with all NetCF applications that are installed on the device. After selecting an application, use the lower combo box to select a version of NetCF to run the application. Under the covers this dialog is adding (or removing) supportedRuntime elements from your application's configuration file. Any value chosen using the Application Policy tab overrides values chosen using the Device Policy tab. Thanks, Steven This posting is provided "AS IS" with no warranties, and confers no rights. I've completed all that I had planned to write (at least for now) about how to use the CLRProfiler with NetCF. Here's a brief explanation and a link to each post the series: If. By. This. We... Steven This posting is provided "AS IS" with no warranties, and confers no rights. I was helping a customer use PInvoke to call EnumServices today and got stuck a few times so I thought it may be helpful to post the solution in case anyone else runs into this someday. EnumServices returns a buffer containing a number of structures of type ServiceEnumInfo that describe basic information about the services on a device. Each ServiceEnumInfo structure contains an embedded character array that represents the service's prefix and a pointer to a string that represents the name of the dll that implements the service. The dll names corresponding to the structures are laid out in memory just after the structures themselves. So the contents of the buffer you get back from calling EnumServices looks like this (this example is from a device with 3 services): Here's some sample code that calls EnumServices and loops the buffer pulling out both the prefix name and the dll name for each service: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Runtime.InteropServices; using System.Diagnostics; namespace EnumServices { public partial class Form1 : Form { // Managed definition of the native ServiceEnumInfo structure. Here's the corresponding native definition: // // typedef struct_ServiceEnumInfo { // WCHAR szPrefix[6]; // WCHAR szDllName; // HANDLE hServiceHandle; // DWORD dwServiceState; // } ServiceEnumInfo; // struct ServiceEnumInfo { [MarshalAs(UnmanagedType.ByValTStr, SizeConst=6)] public String prefixName; public IntPtr pDllName; // this value is a pointer to the dll name - not the dll name itself. public IntPtr hServiceHandle; public int dwServiceState; } [DllImport("coredll.dll")] private static extern int EnumServices(IntPtr pBuffer, ref int numEntries, ref int cbBuf); public Form1() { InitializeComponent(); } private void btnServices_Click(object sender, EventArgs e) { int numEntries = 0; int cbSize = 0; int structSize = Marshal.SizeOf(typeof(ServiceEnumInfo)); // call once to get required buffer size int result = EnumServices(IntPtr.Zero, ref numEntries, ref cbSize); // alloc a buffer of the correct size IntPtr pBuffer = Marshal.AllocHGlobal(cbSize); // call again to get the real stuff result = EnumServices(pBuffer, ref numEntries, ref cbSize); // loop through the structure pulling out the prefix and the dll name for (int i = 0; i < numEntries; i++) { // move a pointer along to point to the "current" structure each time through the loop IntPtr pStruct = new IntPtr(pBuffer.ToInt32()+ (i * structSize)); // "translate" the pointer into an actual structure ServiceEnumInfo sei = (ServiceEnumInfo)Marshal.PtrToStructure(pStruct, typeof(ServiceEnumInfo)); string prefix = sei.prefixName; string dllName = Marshal.PtrToStringUni(sei.pDllName); // use the prefix and dllName as needed.... Debug.WriteLine(prefix); Debug.WriteLine(dllName); } // remember to free the buffer that we allocated Marshal.FreeHGlobal(pBuffer); } } } Next week is the annual Øredev conference in Malmö, Sweden. We're very fortunate to have 3 detailed .Net Compact Framework sessions there. Doug Boling will be doing sessions on performance and on how to enable GPS in your applications and I'll be doing a session on how to use the .Net Compact Framework diagnostic tools to solve tough issues like performance problems or memory leaks. I'll also be presenting a session on Silverlight 1.1. If you're in that part of the world next week it's a great chance to get access to some .Net Compact Framework content. Doug Boling knows a ton about WindowsCE and is a great resource. The. Yesterday I started a series of posts on how the use the CLRProfiler for the .Net Compact Framework. The first post contained the basic information you need to get started. I described how to install the profiler, launch an application on the device, and collect profiling data. In order to direct the discussion, I've written a sample application that exhibits a performance problem that is surprisingly easy to fall into. Throughout these posts I'll show you how to use the profiler to diagnose the problem. To refresh your memory, the sample application is a basic game and the performance problem is that the main windows paints way too slowly. After I stopped profiling the game in the first post, following summary page was displayed. In this post I'll use some of the histograms to begin diagnosing our performance problem. The first thing that stands out at me when looking at the summary form is the amount of managed data I'm creating. While profiling the painting portion of my application I generated over 6MB of managed objects. That's clearly way too much for a relatively simple operation like painting my main window. My first step in determining what's going on is to get some basic statistics about the objects my application is using. For example, I'm interested in which objects I'm creating, how many of them there are and how long they live. This data can be obtained by looking at some of the histograms the profiler offers. I can choose to view a histogram for all objects created as my application ran or only for those objects that were in the GC heap when my application exited. In my scenario I need to look at all objects. If I were to only look at the objects alive at the end of the run I may miss some important trend that occurred earlier on. Clicking the "Histogram" button next to the "Allocated Bytes" value displays the following graph: The histogram form has two panes. The pane on the right describes how many instances of each type of object were created and the total size of those instances. The pane on the left graphs type instances by size. The color coding next to the types in the right pane matches the bars in the left pane which show the relative amounts of objects created. A quick glance at this form helps narrow my suspicions about what's causing my performance issue. As you can see, about 97% of the objects I created were of type Box.Block as indicated by the red box on the right hand pane and the red bar in the left hand pane. I can also see that each instance of Box.Block is relatively small at an average size of 136 bytes (see the right hand pane). Now that I know the majority of my objects are instances of Box.Block, I'd like to see where in my application those instances are getting created. To determine the source of my allocations I can right-click on the bar that represents Box.Block in the histogram and select "Show Who Allocated" (the bar turns black when selected): Doing so brings up a window referred to as an Allocation Graph: The Allocation Graph traces the flow of every call that allocated an instance of Box.Block. I typically interpret this graph starting with the rightmost node. This node represents all instances of Box.Block in the system. Stepping back one level to the left we see two nodes representing methods that created instances of Box.Block: Form1.RotateGameBlocks and Form1.InitializeGameBlocks. The data in these nodes tell us that 75% of the Blocks were created in RotateGameBlocks and 25% were created in InitializeGameBlocks. Notice that the width of the lines connecting the nodes represents the percentage of instances that call created. Now that I know where my objects are coming from I can dig into my code to see what's going on. In some scenarios, the information we've learned so far may be all that we need to fix the problem. However, there are a few more pieces of data that may be required in some cases. For example, it may be useful to know the times at which Blocks were created and destroyed. Also, if RotateGameBlocks and InitializeGameBlocks are long, complicated methods, we may need to know the exact calls within those methods that caused the allocations. I'll describe how to get this information in future posts. snapshot of the GC heap to a file is easy: Given an open view, just select the "Save" option from the "File" menu: You'll also be prompted to save any unsaved views when you close them. GC heap dump files are text files that include information for every object present in the GC heap at the time the heap dump was generated using RPM. Each line in the text file contains one record. There are 5 different types of records:: Example: a 2 NHLSchedule.exe 444d20df Type records identify types in the GC heap. Each type record contains 3 elements: t a1 Western.Pacific.SanJoseSharks Note: Object records describe specific instances of types in the GC heap. Object records have a variable number of elements. The first 4 elements are required: In addition to these required elements, object records will have a variable number of additional elements describing the instances that the object references. An example with referenced object ids: o 1c15d2 3 1c 1c15db 1c15d8 1c15d5 Notes: Root records identify GC roots. Each root record contains 4 required elements:: Here's a root record with the additional root container element: r b2753 4 0 13c End AppDomain records close the section initiated by a corresponding AppDomain record. End AppDomain records have three elements: If you’ve installed the Beta2 version of Orcas you may have noticed that the NetCF diagnostic tools (RPM, CLRProfler, ...) are missing. Don’t worry, these haven’t been cut from Orcas, they will just be distributed via the web in a separate “power toys” pack. A CTP of these tools is now available at: . This CTP works with Orcas Beta2. We intend to distribute the final Power Toys release at the same time that Orcas ships. This posting is provided "AS IS" with no warranties, and confers no rights. Last: Disclaimer(s):This posting is provided "AS IS" with no warranties, and confers no rights.
http://blogs.msdn.com/stevenpr/default.aspx
crawl-001
refinedweb
2,582
54.32
24 November 2009 05:48 [Source: ICIS news] By Peh Soo Hwee SINGAPORE (ICIS news)--Asian ethylene buyers will have to brace for tight supply next year due to a heavy turnaround schedule at steam crackers, but a possible increase in exports from the Middle East could help ease the strain on prices, industry players said on Tuesday. An estimated 22 crackers would be shut for maintenance in 2010 compared with 15 facilities that were taken off line in 2009 (see table below). Regional cracker operators expect ethylene prices to rise as spot cargoes would be scarce, industry sources said. “The first half of next year seems to be healthy but after that, it is very hard to forecast,” said Hun-Soo Lee, a company official from Yeochun Naphtha Cracking Centre (YNCC), South Korea’s largest naphtha cracker operator. Ethylene spot prices were hovering at their highest levels this year at $1,030-1,070/tonne (€690-717/tonne) CFR (cost and freight) northeast ?xml:namespace> The price uptrend was underpinned by limited spot supplies and firm feedstock naphtha values at above $700/tonne CFR Japan, according to global chemical market intelligence service ICIS pricing. “Eight Japanese crackers will be having turnarounds next year so there will be some support for prices. Operating rates in the country could remain at 90-95%,”said a Japanese olefins producer. Naphtha cracker operating rates in The polymer sector had maintained a healthy spread of $200-300/tonne with ethylene for most of this year, and prices of the monomer had generally outperformed market expectations in 2009, market sources said. Poor production rates in “The key tipping point is the The start-up of three new crackers in southeast Asia by 2010 could lessen the impact from the heavy turnaround schedule, industry sources said. For instance, Shell Chemicals’ 800,000 tonne/year unit in “The start-up of southeast Asian crackers will push down polymer and olefins prices and we expect the (tight) balance to ease in the second half of the year,” said a Japanese olefins trader. Other market participants, however, said that the performance of the PE sector would ultimately determine how well ethylene prices would fare in 2010. “We haven’t really seen the impact from the new polymer capacities in the Out of the 5.8m nameplate PE capacities from the At the heart of the concern was whether “If crude prices remain at high levels, there is some support for PE. On the other hand, competition will also intensify due to more PE capacities domestically and from the Middle East,” said a source close to state-owned energy giant Sinopec in Mandarin. PE producers – particularly in “We are already feeling the effects on demand,” said a Thai olefins and polyolefins producer. “Some buyers are waiting until January to make their PE purchases.” Cracker Turnaround Schedule 2010 * LG Chem will debottleneck the cracker to 1 m tonnes/year. ** BASF-YPC’s cracker capacity is expected to be increased to 740,000 tonnes/year. *** CNOOC-Shell plans to expand the cracker capacity to 1 m tonnes
http://www.icis.com/Articles/2009/11/24/9266503/Asia-faces-heavy-cracker-turnaround-schedule-in-2010.html
CC-MAIN-2015-18
refinedweb
515
55.27
Hi Guys, I've been struggling building an API for an iOS App to communicate with my Rails Web App. For authentication, I'm using Devise. How do I create, update and delete a user through the API? I have create the route: namespace :api, defaults: {format: :json} do namespace :v1 do devise_scope :user do post "/", :to => 'sessions#create' delete "/logout", :to => 'session#destroy' end end end This is where I'm stuck I think you can just simply make a POST to the same url as the sign up form normally which is /users. Same with PUT and DELETE but for logging in and out, you want to POST and DELETE to /sessions because that's a separate controller. Join 24,647+ developers who get early access to new screencasts, articles, guides, updates, and more.
https://gorails.com/forum/build-rest-api-with-devise
CC-MAIN-2019-47
refinedweb
136
66.57
On Wed, Mar 05, 2014 at 07:39:27AM +1100, Dave Chinner wrote: > > +_supported_fs generic > > XFS only. Hmm. I remember fixing this up, but for some reason it didn't make it into the final patch. > > +# test creating a r/w tmpfile, do I/O and link it into the namespace > > +$XFS_IO_PROG -x -T \ > > + -c "pwrite 0 4096" \ > > + -c "pread 0 4096" \ > > + -c "freeze" \ > > + -c "thaw" \ > > + -c "shutdown" \ > > + ${SCRATCH_MNT} | _filter_xfs_io > > $testfile? No, O_TMPFILE doesn't take an actual file name but a "virtual" parent directory. It's a reall creative abuse of the open ABI.. > Also, I don't see the file being linked into the namespace, so the > comment is probably wrong. Also, please add a comment as to > why the freeze/thaw is necessary. Yes, we don't want to link it so that we have it on the unlinke dinode list. The freeze/thaw is to make sure the log has been cleaned, I'll add a comment explaining it. > There it is, but is moving it to lost+found the right thing to do, > given that it was on the unlinked list and should have had a zero > link count? i.e. aren't we supposed to free unlinked inodes with a > zero link count, not recover them to lost+found? > Yeah, that seems like the wrong behaviour to have for an anonymous > O_TMPFILE file - it's making it visible because we moved it to > lost+found in phase 6.... Good question. I thought about this a little and decided that it wasn't worth special casing O_TMPFILE inodes in repair, but thinking about it a bit more this also happens for normal unlinked but open files. I can look into this if you want, and would create another test for that case. > > Also, I don't see any icount mismatch, so the comment above in the > test is probably wrong. We do have an icount mismatch, but _filter_repair filters it away.
http://oss.sgi.com/archives/xfs/2014-03/msg00062.html
CC-MAIN-2017-43
refinedweb
326
77.47
I? Looking this up in the net, I've found these two discussions to be the most relevant: The second link mentions a library called FFCALL which can be used to pass parameters to variadics dynamically, and this probably is the ideal way of doing things. I may. Here's the code: #include <stdio.h> #include <stdarg.h> #include <stdlib.h> #include <string.h> void accepter(char *fmt, char *ptr, ...); void forwarder(char *fmt, ...); void forwarder(char *fmt, ...) { double d = 9.1; char *buf = (char*) malloc(10*sizeof(double)); memcpy(buf, (void*)((unsigned int)(&fmt)+sizeof(char*)), 80); FILE *o = fopen("tmp", "wb"); fwrite(buf, 10, sizeof(double), o); fclose(o); // call function with 10 double arguments to open up stack space accepter(fmt, buf, d, d, d, d, d, d, d, d, d, d); free(buf); } void accepter(char *fmt, char *ptr, ...) { memcpy((void*)((unsigned int)(&ptr)+sizeof(char*)), ptr, 10*sizeof(double)); va_list ap; va_start(ap, ptr); vprintf(fmt, ap); va_end(ap); } int main() { printf("Testing...\n"); double d = 65.98; forwarder("%d %d %d ermm %s and more params! %x %f %x %x \n", 1, 2, 3, "hello world", 199, d , 0xdeadbeef, 0xbeefdead); return 0; } Another way of solving this could be by wrapping your marshalled arguments as libffi arguments and do the call using libffi (disclaimer: I've never worked with libffi so this is all just based on its manual)..
http://maltanar.blogspot.ro/2010/07/forwarding-invocation-of-variadic.html
CC-MAIN-2015-40
refinedweb
235
65.83
Response - Change Status Code¶ You probably read before that you can set a default Response Status Code. But in some cases you need to return a different status code than the default. Use case¶ For example, imagine that you want to return an HTTP status code of "OK" 200 by default. But if the data didn't exist, you want to create it, and return an HTTP status code of "CREATED" 201. But you still want to be able to filter and convert the data you return with a response_model. For those cases, you can use a Response parameter. Use a Response parameter¶ You can declare a parameter of type Response in your path operation function (as you can do for cookies and headers). And then you can set the status_code in that temporal response object. from fastapi import FastAPI, Response, status app = FastAPI() tasks = {"foo": "Listen to the Bar Fighters"} @app.put("/get-or-create-task/{task_id}", status_code=200) def get_or_create_task(task_id: str, response: Response): if task_id not in tasks: tasks[task_id] = "This didn't exist before" response.status_code = status.HTTP_201_CREATED return tasks[task_id] And then you can return any object you need, as you normally would (a dict, a database model, etc). And if you declared a response_model, it will still be used to filter and convert the object you returned. FastAPI will use that temporal response to extract the status code (also cookies and headers), and will put them in the final response that contains the value you returned, filtered by any response_model. You can also declare the Response parameter in dependencies, and set the status code in them. But have in mind that the last one to be set will win.
https://fastapi.tiangolo.com/advanced/response-change-status-code/
CC-MAIN-2020-45
refinedweb
285
61.56
I've just updated to Allegro 4.1.16 and had the same problem as in this earlier thread (which was upgrading to 4.1.14):[url] Once I changed all the clear_to_colors to rectfills it worked fine. From reading the original thread again it seems that only Evert has also experienced this crashing bug. However maybe it is worth someone looking at? Cheers!Rich. Chaos Groove Development BlogFree Logging System Code & Blog Does it crash when: - clear_to_coloring a memory bitmap?- clear_to_coloring the screen with the GDI driver? If the answer is no to both, I suspect the problem to lie in the DirectX driver. --"Either help out or stop whining" - Evert It crashes with clearing a memory bitmap, I'm not sure about the screen. I think it does, but I've not tried with a GDI driver. Oh, with a memory bitmap - then it should be driver independent. I'll try it here in linux. EDIT: Just to be sure, does the below code crash for you? #include <allegro.h> int main(void) { BITMAP *bmp; allegro_init (); set_color_depth (32); bmp = create_bitmap (100, 100); clear_to_color (bmp, 0); return 0; } END_OF_MAIN() From reading the original thread again it seems that only Evert has also experienced this crashing bug. Yeah, but I have since changed my code to using clear_to_color() and clear_bitmap() again, and it works fine for me now. This is something we should definitely fix though. Well, first we should pinpoint down this a bit.. right now it can be anywhere. If the above does crash for Richard - it could be the asm code. If the above does not crash, it may be limited to a different color depth, or, ... well, no point speculating, just let's wait for the result. It crashes.:'( Does it crash with clear_bitmap() too?It shouldn't make a difference, but it's good to know for sure EDIT: just on the off chance that this changes anything, can you try calling set_gfx_mode() directly after allegro_init() and see if it still crashes? No. It only crashes with clear_to_color. WTF?! From the Allegro source code: /* clear_bitmap: * Clears the bitmap to color 0. */ void clear_bitmap(BITMAP *bitmap) { clear_to_color(bitmap, 0); } In other words... I don't understand this at all... Oh, and see my edit above. May I suggest recompiling Allergo? -----sig:“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs"Political Correctness is fascism disguised as manners" --George Carlin Heh, WTF is all I can think as well.. I'd say, either something really weird is going on, or you mixed up the DLL version somehow. When you compile without any -O option, it doesn't crash, right? I've just upgraded from 4.0.3 to 4.1.16. I previously did the same with 4.1.13 and had the same problem, even after a couple of reinstalls. Evert: Still crashes. Is it possible that some 4.0.3 code is still in there? I've not had any other problems so far though.. Elias: Yes, any form of optimisation causes a crash, but it works fine without it. You've removed all the old DLLs and libs, right? I deleted the old allegro folder. Built the new lib. copied the dll into the source directory and wrote the Allegro version number to a logfile. That said 4.1.16. Have you made a clean recompile of your project? When I change allegro versions, I always clean all object files for a project before recompiling. This is nescessary because WIP releases aren't API compatible. Otherwise, can you reproduce the problems with the test program (allegro/tests/test)? The test program works ok. I did delete all object files, and your example was built from scratch Evert. If the allegro library was built ok to be used by the test program, then is the error in linking with two different versions? Possibly. Did you run a make uninstall from the previous version before installing the new one? No. I've just done that now, and rebuilt allegro 4.1.16. The crash example in one directory now seems to work ok. But my breakout game doesn't if I still use clear_to_color. I'm wondering why it works without -O. The difference between clear_bitmap and clear_to_color is that with the former, inlining is done within the function in the DLL, while with the latter, inlining to bmp->vtable->clear_to_color is done when you compile. So another question: Are you sure the headers in %MINGDIR%\include are from 4.1.16? The vtable struct may have changed, therefore bmp->vtable->clear_to_color points to some random memory and it crashes, but only when inlined. Or, actually, when you executed "make install", did the MINGDIR variable point to the same directory which actually is used by mingw? I.e. if you have mingw in C:\mingw, and in C:\devcpp\mingw.. make sure MINGDIR points to the right one when doing "make install".. Wouldn't make install copy all the relevant headers? If you tell me which ones I need to copy manually I'll try that. See edit. To manually copy, do this inside the allegro dir: cp -r include/* MINGDIR/include/ (Sorry, don't know the windows way. But just copy everything in include into the mingdir include..) The MINGDIR var is correct as far as I can tell. I copied all the include files from the allegro include dir into the mingdir include dir. Recompiled all the project (deleting any old o files) and nothing has changed I'm afraid.. That makes me think... shouldn't the definition of clear_to_color() look like AL_INLINE(void, clear_to_color, (BITMAP *bitmap, int color), { ASSERT(bitmap && bitmap->vtable && bitmap->vtable->clear_to_color); bitmap->vtable->clear_to_color(bitmap, color); }) rather than with only ASSERT(bitmap) as it is now? Richard: could you try building a debug version of the library (DEBUGMODE=2) and see if you get assertion failures from that anywhere? Hm. Still, the idea that wrong headers are included somehow seems to make sense to me, it would explain everything. How are you compiling (complete command-line)? Can't think of anything else. Maybe, what does the below command say?gcc --print-search-dirs [EDIT]Evert: I'd say, any non-NULL bitmap can be assumed to have a valid vtable. And any bitmap vtable should have no NULL entries.. but not so sure about that last point.
https://www.allegro.cc/forums/thread/425487/0
CC-MAIN-2021-39
refinedweb
1,080
76.01
Google Test Page Contents Read The Docs! Installing To download the source files and install [Ref]: sudo apt-get install libgtest-dev cd /usr/src/gtest # Might also be /usr/src/googletest/googletest sudo mkdir build cd build sudo cmake -DCMAKE_BUILD_TYPE=RELEASE .. sudo make sudo cp libg* /usr/lib cd .. sudo rm -fr build Skeleton main() int main(int argc, char **argv) { ::testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } Basic Test Structure Simple Tests When there is nothing to do to prepare for your tests, i.e., no setup or tear down, just use the TEST() macro to define the tests: TEST(TestCaseName, ThisTestName) { ... } Test Fixtures If you have some preparation for each test, i.e., you require some test setup and tear down, then use the TEST_F() macro. To use this macro you need to create a test-fixture object that inherits from public testing::Test: // Class is created for each test and destroyed after class MyTestFixture : public testing::Test { protected: MyTestFixture() { // Can do setup here too } ~MyTestFixture() { // Can do tear down here too } // Called before each test is run virtual void SetUp() override { ... } // Called after each test. Don't neeed to define if no tear down required. virtual void TearDown() override { ... } int m_dummy; }; ... TEST_F(MyTestFixture, ThisTestName) { // You can refer to all the members and functions in the class MyTestFixture here directly. // For example: std::cout << "Dummy is " << m_dummy << "\n"; } Parameterised Tests Replace for loops that repeat a test over a data range with parameterised tests. Loops stop each test case being independent as when one fails the loop stops so that latter cases are not tested. INSTANTIATE_TEST_CASE_P( // OR INSTANTIATE_TEST_SUITE_P?? MyTests, MyParameterizedTestFixture, ::testing::Values( 1, 2, 3, 4, 5,... ) ); class MyParameterizedTestFixture : public ::testing::TestWithParam<int> { ... }; TEST_P(MyParameterizedTestFixture, SomeTestCase) { int test_value = GetParam(); ... ASSERT_TRUE(some_predicate(..., test_value, ...)); ... } To pass multiple values just used std::make_tuple for the data items and then use std::get<n>(GetParam()), where n is the index into the parameter tuple. Basic Test Assertions The test macros are ASSERT_* or EXPECT_*. The former is a fatal error and returns from the current function and the latter prints out a warning, continues in the current function and the error is logged only after it completes. All the macros return a stream-like object that you can stream a message to that will be printed out on error. For example: ASSERT_GT(1,2) << "1 is not greater than 2"; Also note that when test macros fail, they only abort the current function, not the entire test, so if you call functions from your TEST[_F] macro-function and the sub function errors, your main test function will still continue. See Google's advanced guide for more info. Probably the easiest solution is to wrap subroutine calls with ASSERT_NO_FATAL_FAILURE(). Here are some common test macros: Passing Arguments To Your Test Program When you compile your Google Test application it includes code that parses the command line options to enable things like filtering tests etc. So, how then, do you pass command line options through to your test code? The answer is that after parsing all the Google Test relevant command line options, they are removed from argv and argc is updated appropriately so that in your main() function you can just parse argv. The remaining arguments will be all those that Google Test didn't recognise. To parse command line options there are several options including Boost, getopt(), or to roll-your-own parser [Ref]. Some examples, taken from the reference and modified slightly: static std::string getCmdOption(char **begin, char **end, const std::string &option) { char **itr = std::find(begin, end, option); if (itr != end && ++itr != end) { return std::string(*itr); } return std::string(); } static bool cmdOptionExists(char** begin, char** end, const std::string& option) { return std::find(begin, end, option) != end; } Often, if all or many tests need to query your own test options it can be a nice idea to base all of your tests off a base class that inherits from ::testing::Test and contains functions to get any options. E.g. base your test classes on this: #include <gtest/gtest.h> ... class TestBase : public ::testing::Test { public: static void Initialise(int exOpt1, const std::string ∓exOpt2) { exampleCommandLineOption1 = exOpt1; exampleCommandLineOption2 = exOpt2; } static int GetExampleCommandLineOption1(void) { return exampleCommandLineOption1; } ... private: static int exampleCommandLineOption1; static std::string exampleCommandLineOption2; }; Just call the Initialise() function before you call RUN_ALL_TESTS() and all your test classes will have access to the command line options you parsed. Filtering Tests To list your tests run with --gtest_list_tests. Just run your test executable with the --gtest_filter=test-name(s) option. The filter accepts file-globbing syntax so you can use, for example, --gtest_filter=my_new_test* to run all tests who's name is prefixed with "my_new_test". Scoped Traces GTest has a useful macro SCOPED_TRACE(streamable). It is useful because it creates a stack of messages that will automatically printed out with any error trace. This can be handy to a) see the path through the test that failed and b) see loop iteration numbers that fail. One useful little macro follows. SCIPED_TRACE accepts a streamable object but sometimes you may want to build up a string, so the following can be used. #define SCOPED_TRACE_STR(x) \ SCOPED_TRACE( \ static_cast<std::stringstream &>( \ (std::stringstream().flush() << x) \ ).str())
https://jehtech.com/c_and_cpp/google-test.html
CC-MAIN-2021-25
refinedweb
875
53.41
public class Deque<Item> implements Iterable<Item> { private int N; // size of the list private Node first; private Node last; public Deque() { private class Node { private Item item; private Node next; private Node prev; } } } You can define classes in blocks (like methods and constructors), they are called Local classes. However, just like you can't declare a local variable as private, protected or public, you also can't declare a local class with an access modifier - because it doesn't make sense to. A local class is only visible inside the defining method. If you really intend to declare a local class, remove the access modifier. But since you are declaring fields in the top-level class of type Node, you can't declare it as a local class: just move it outside the constructor. Just picking up on your comment about "I don't think static is needed": I don't think static is not needed. Adding static to nested classes should be your default action, unless you actually need to refer to the containing Deque instance from the Node instances. The thing is that each of the Node classes will have a hidden reference to the Deque, in order that you can access the Deque instance via Deque.this. If you don't need this reference, you can cut down on the memory used by not using it, which you do by making the nested class static.
https://codedump.io/share/N5O4fAagPbvu/1/illegal-modifier-for-the-nested-class
CC-MAIN-2017-04
refinedweb
239
62.72
Code Style¶ Python imports¶ isort enforces the following Python global import order: from __future__ import ... - Python standard library - Third party modules - Local project imports (absolutely specified) - Local project imports (relative path, eg: from .models import Credentials) In addition: Each group should be separated by a blank line. Within each group, all import ...statements should be before from ... import .... After that, sort alphabetically by module name. When importing multiple items from one module, use this style: from django.db import (models, transaction) The quickest way to correct import style locally is to let isort make the changes for you - see running the tests. Note: It’s not possible to disable isort wrapping style checking, so for now we’ve chosen the most deterministic wrapping mode to reduce the line length guess-work when adding imports, even though it’s not the most concise.
http://treeherder.readthedocs.io/code_style.html
CC-MAIN-2017-26
refinedweb
142
56.05
: \ Use this program like this: ! 23: \ include it, then the program you want to check ! 24: \ e.g., start it with ! 25: \ gforth depth-changes.fs myprog.fs ! 26: ! 27: \ By default this will report stack depth changes at every empty line ! 28: \ in interpret state. You can vary this by using ! 29: ! 30: \ gforth depth-changes.fs -e "' <word> IS depth-changes-filter" myprog.fs ! 31: ! 32: \ with the following values for <word>: ! 33: ! 34: \ <word> meaning ! 35: \ all-lines every line in interpret state ! 36: \ most-lines every line in interpret state not ending with "\" ! 37: ! 38: 2variable last-depths ! 39: ! 40: defer depth-changes-filter ( -- f ) ! 41: \G true if the line should be checked for depth changes ! 42: ! 43: : all-lines ( -- f ) ! 44: state @ 0= ; ! 45: ! 46: : empty-lines ( -- f ) ! 47: source (parse-white) nip 0= all-lines and ; ! 48: ! 49: : most-lines ( -- f ) ! 50: source dup if ! 51: 1- chars + c@ '\ <> ! 52: else ! 53: 2drop true ! 54: endif ! 55: all-lines and ; ! 56: ! 57: ' empty-lines is depth-changes-filter ! 58: ! 59: : check-line ( -- ) ! 60: depth-changes-filter if ! 61: sp@ fp@ last-depths 2@ ! 62: 2over last-depths 2! ! 63: d<> if ! 64: ['] ~~ execute ! 65: endif ! 66: endif ; ! 67: ! 68: sp@ fp@ last-depths 2! ! 69: ! 70: ' check-line is line-end-hook
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/depth-changes.fs?annotate=1.1;sortby=log;only_with_tag=MAIN
CC-MAIN-2021-31
refinedweb
219
90.97
In this article you will learn about static and sealed lass in C#. What is static class A static class is very similar to a non-static class, however there's one difference: a static class can’t be instantiated. In different words, you cannot use the new keyword to make a variable of that class type. As a result, there's no instance variable, you access the static class members by using class name. For example, we have the following static class which has a static method that adds two number. This is just an example, we already have a Math class in System namespace which framework has provided that has all the commonly used Math functions available: What is sealed class A sealed class cannot be inherited (means it cannot be used as a base class). It stops / restricts other classes from inheriting it. Yes, when a class is marked sealed no other classes could inherit from it. Consider the following example in which class SealedClass inherited from class BaseClass but as we have marked SealedClass sealed using sealed modifier, it cannot be used as a base class by other classes. Consider the following example: View All
https://www.c-sharpcorner.com/UploadFile/f1047f/static-and-sealed-class-in-C-Sharp/
CC-MAIN-2019-26
refinedweb
199
60.14
Finance Problems Finance 1 Assume a firm has pursued a goal of maximizing accounting profits in a purely literal sense and, as a result, has had positive, as well as growing profits since their inception. Which of the following statements is true? The firm ..... a. is pursuing the primary goal of the organization b. is acting in the best interests of all stakeholders c. is, by definition, also maximizing shareholders' wealth d. may possibly be heading toward insolvency (i.e. bankruptcy) (due to insufficient cash flow) e. None of the above or insufficient information 2 If market interest rates are currently 15% and your investment provides you this 15% return, does that imply that you are 15% more wealthy (after vs before this investment return)? Assume wealth is defined as the ability to consume (purchase) goods or services. a. Yes, because with inflation you do indeed have 15% more money b. Yes, because with inflation 15% more money does imply 15% more wealth. c. No, because with inflation 15% more money does not imply 15% more wealth. d. No, because with inflation 15% more money means your wealth increased more than 15%. e. None of the above or insufficient information 3 The primary goal of a publicly-owned firm interested in serving its stockholders should be to a. Maximize expected total corporate profit. B. Maximize expected EPS. C. Minimize the chances of losses. D. Maximize the stock price per share, I.e. owners' wealth. e. Maximize expected net income. 4 Which of the following would generally be considered the most risky (with respect to volatility of returns)? a. An investment in a portfolio of common stocks b. An investment in a single common stock randomly selected c. An investment in a single corporate bond randomly selected d. an investment in a portfolio of corporate bonds e. do not know 5%   6 You have just taken out a 30-year mortgage on your new home for $120,000. This mortgage is to be repaid in 360 equal monthly installments., If the stated (nominal) annual interest rate is 14.75 percent, what is the amount of each of the monthly installments? a. $1,515.00 b. $1,472.38 c. $1,493.37 d. $1,522.85 e. $1,440.92 7 As bond market interest rates increase, the value (i.e., price) of a fixed coupon interest rate bond (i.e., a typical corporate bond) a. does not change b. increases c. decreases d. insufficient information to answer this question e. None of the above or insufficient information 8 9 On average, the market compensates investors for taking a. Nondiversifiable, aka market risk b diversifiable risk c. Firm-specific risk d. None of the above 10 11 Since approximately 1900, historical evidence suggests that investing in common stocks has resulted in relatively high average annual returns with a. Relatively little annual variation, i.e. low risk b. Relatively high annual variation, i.e. high risk c. approximately the same annual variation as bonds d. no annual variation e. none of the above are true 13 Which of the following assets would be most suitable to financing with relatively larger amounts of long-term debt? a. Current assets, such as inventory b. Specialized long term assets c. Intangible long term assets d. Tangible (physical), standardized, and widely tradable fixed assets e. This question is irrelevant because firms should avoid using debt whenever possible. 14 Which of the following financial assets would be most susceptible (vulnerable) to a decline in value if interest rates increased? a. a short term fixed income financial asset (ex. short term bond) b. a long term fixed income financial asset (ex. long term bond) c. a long term variable interest rate income financial asset d. they would all be approximately equally susceptible to a decline in value. e. None of the above or insufficient information 15 Assume that you can buy a bond for $555 today. The bond will pay you $75 in annual coupon payments (i.e. interest payments) at the end of each of the next 12 years, plus repay the original $1000 par value of the bond at the end of the 12th year. What annual rate of return would you expect to earn on the investment (i.e., what is the bond's YTM?)? (Hint: use your basic TVM keys) a. 15.7 % b. 16.1 % c. 17.6 % d. 16.5 % e. None of the above or insufficient information 16 All of the following are advantages a corporation may have over a partnership or proprietorship, except which one? A. Limited liability. B. Ease of transfer of ownership interest. C. Unlimited life. D. Elimination of double taxation. E. Ability to raise capital. 17 If a firm's current ratio is 4, the firm could liquidate its current assets at only ______ percent of their book value and just have enough (nothing extra from current assets) to still pay off the current liabilities in full. a. insufficient information to answer; need the inventory amount b. insufficient information to answer; need the dollar amounts of CA and CL c. 40% d. 25% e. A current ratio has nothing to do with the question being asked 18 Which of the following would least likely be considered as signaling a potential problem regarding the "quality of earnings" for a firm? a. the firm has experienced a significant increase in earnings relative to the industry overall b. the firm's accounts receivable account is increasing at a rate faster than the firm's increase in sales. c. the firm has announced a delay in their release of financial statements due to a change in auditors d. the firm's accounts receivable account is increasing, but at a rate slower than the firm's increase in sales. e. all of the above would be considered signals of potential problems regarding he firms' quality of earnings 19 The extended Du Pont equation, a. k. a. the 3 components ROE decomposition equation, (i.e.ROE = (profit margin)x(total asset turnover)x(equity multiplier) is used to a. compute the firm's ROE, as the equation states. b. decompose the firm's ROE into sub-components, for a better understanding of the firm's financial health. c. determine if the firms is liquid d. compute the firm's ROA, as the equation states. 20 Bank A charges 16% APR on auto loans with monthly compounding. What is the Effective Annual Interest Rate (EAR)? a. 16%, since EAR = APR for monthly compounding b. 13.3% c. 1.33% d. 17.23% e. 18.12% f. insufficient information to answer this question (if using bubble sheet just write in "f" to select f). 24 The total economic (true, i.e. true financial value) value of the firm is (Please READ ALL alternatives before answering): a. Found on the balance sheet b. Equal to the total market value of the stockholders' equity c. equal to the total market value of the all of the firm's assets d. Equal to the total market value of the owners and creditors' claim on the firm, by the balance sheet equation( aka accounting equation) e. both c and d are correct 25 Stockholders possess several devices which help align management goals with the stockholders' goals. Included among these are all of the following except: a. The right to sell their stock back to the company (I.e. the right to demand the company to repurchase their stock). b. incentive compensation plans for management c. the right to elect directors d. the right to replace or fire managers via the board of directors 26 Because of the limited diversification potential of human capital, managers have an incentive to seek: a. Higher risk projects because they offer the potential for higher returns (payoffs) for the managers b. higher return projects because they are less risky c. lower risk projects, because these projects are in the best interest of all stakeholders d. lower risk projects, because these projects are in the best interest of stockholders e. lower risk projects, because these projects reduce the probability of the firm going bankrupt. 27 (if using bubble sheet just write in "f" to select f) 28 Your subscription to Jogger's World is about to run out and you have the choice of renewing it by sending in the $10 a year regular rate at the end of each year or of getting a lifetime subscription to the magazine by paying $100 today. Your cost of capital is 7 percent. How many years would you have to live to make the lifetime subscription the better buy? Assume payments for the regular subscription are made at the end of each year. (Round up if necessary to obtain a whole number of years.) a. 10 years b. 15 years c. 18 years d. 20 years e. 28 years 29 Managerial stock options are an incentive for managers to act in the best interest of: a. stockholders b. bondholders c. employees d. government leaders e. the public f. banker 30 Suppose you deposit $2000 into an account at the end of each of the next 10 years. If the account earns 12%, how much will be in the account at the end of 30 years? A. 35,097 b. 192,926 c. 338,560 d. 482,665 e. Insufficient information to compute 31 The simple corporation has an outstanding debt obligation (total of principle and interest due) to the Complex Corporation of $250 (Assume this is Simple's only debt.). It is year end and the total cash flow of Simple from all sources is $325. The contingent payoff to the debt and equity holders of Simple Corporation is: a. $250; $75 b. $250; $325 c. $75; $250 d. $325; $250 e. none of the above 32 According to the financial perspective, value creation is based on (Please READ ALL alternatives before answering): a. The cash flows of the firm b. the timing & risk of the cash flows c. the net income, aka profits, of the firm d. both a & c e. both a & b 33 Financial markets are generally recognized as being semi-strong form efficient, which means: a. All publicly available information is reflected in current prices b. All available information, both public and private is reflected in current prices c. only all past price information is reflected in current prices d. there is no opportunity for consistently earning returns on investments 34 Double taxation refers to: a. dividends being paid after corporate taxes are paid, and thus being taxed twice at the corporate level b. dividends being paid after corporate taxes are paid, and thus being taxed once at the corporate level and again at the personal level when an investor receives the dividend c. interest being paid after corporate taxes are paid, and thus being taxed twice at the corporate level d. interest being paid after corporate taxes are paid, and thus being taxed once at the corporate level and again at the personal level when an investor receives the interest e. None of the above 35 30 year corporate bonds are an example of a a. money market security b. capital market security c. mutual fund d. marketable option 36 The government has issued a bond that will pay $1000 in 25 years. The bond will pay no interim coupon payment. What is the present value of the bond if the discount rate is 10% a. 92.30 b. 1000 c. 9077 d. 9169 e. 9230 37 Assume that you invested $5000 in an account that is expected to average 10% return per year for the next 30 years. How much do you expect to have in the account at the end of the 30 years? A. 5000 b. 87,247 c. 150,000 d. 822,470 e. insufficient information to compute 38 Assuming the current ratio is currently 2.0, which of the following actions will increase the ratio? a. purchasing inventory with cash b. purchasing inventory with short-term credit c. paying off a short-term bank loan with a long-term debt d. a customer paying an overdue bill e. all of the above will increase the current ratio 39 Assume the following for your corporation: sales (aka revenue) $250 Cost of goods sold 160 depreciation 35 Interest Expense 20 tax rate = 34% What is the corporation's total after tax net income? a. 23.10 b. 11.90 c. 35.00 d. 46.20 e. 36.30 40 In a reasonably efficient market, at the time of an announcement, market prices react to; a. The announcement of new information that was unanticipated b. The announcement of new information that was previously fully anticipated c. Both, i. e., both the announcement of new information that was previously fully anticipated, as well as information that was unanticipated d. neither, because market price movements are random Questions 41-43 address a stock valuation problem and are related, though there are assumptions made in sequential questions to avoid an initial error causing all subsequent responses to be in error. 41 Worldwide Inc., a large conglomerate, has decided to acquire another firm by purchasing the firm's outstanding stock from the stockholders. Analysts of the firm to be purchased are forecasting a period of 2 years of extraordinary growth (20 percent), followed by 1 year of unusual growth (10 percent), and finally a normal (sustainable) growth rate of 6.5 percent annually indefinitely. The last dividend was D0= $1.00 per share and the required return is 8.6 percent. What is D4 (i.e., the dividend expected at end of period 4)? a. 1.0000 b. 1.286 c. 1.584 d. 1.687 e. 1.440 42 Assuming D4 ( I.e., the dividend at end of period 4) is expected to be $2.00 (regardless of your answer above), what is P3 (i.e., expected price at the end of period 3)? A. 0.95 b. 2.00 c. 23.26 d. 30.77 e. 95.24 f. Insufficient information to compute (if using bubble sheet, just write "f" in to select f) 44 Assume the following regarding a growing annuity valuation problem: Your salary at the end of the last year that you work is $90,000. You would like your income stream to begin at the end of your first year of retirement with a payment equal to 70% of your last working year's salary. (Assume all amounts are "end-of-year" payments.) You plan to be retired for 25 years. You would like your retirement income will grow at a constant rate equal to a 3.5% (to compensate for expected inflation). Using a discount rate of 8%, what is the present value at the beginning of your first year of retirement, (i.e. one period prior to the first retirement payment) of your projected 25 year retirement income stream? a. 960,730 b. 916,893 c. 672,511 d. 211,573 e. 3,308,543 f. 483,107 (if using a bubble answer sheet, just write "f" in to select f) 45 ** ASSUME that in 25 years you will need $500,000 for your retirement (i.e. retirement is actually 25 years away, and you want to have saved $500,000). How much money would you have to put into a bank today to accumulate this if your money will earn 8% per year (assume annual compounding)?. a. 73,009 b. 166,365 c. 211, 573 d. 676,001 e. insufficient information to compute 46 ** ASSUME that in 25 years you will need $500,000 for your retirement (i.e. retirement is actually 25 years away, and you want to have saved $500,000). If you will make equal MONTHLY payments at the end of each MONTH for the next 25 years to fund your retirement, what is the amount of the MONTHLY payments required to fund your retirement? Assume the 8% APR discount rate with monthly compounding for this question only. a. 3859 b. 3903 c. 570 d. 526 e. insufficient information to compute 47 Consider three investors in Stock A: Mr. Single invests for 1 year Ms Double invests for 2 years Mrs. Triple invests for 3 years Which of the following statements are true? (Hint: Recall what the basic stock valuation principle is.) a. Mrs. Triple would place the highest value on Stock A because she is investing for the longest time. b. Ms. Double would place the second highest value on Stock A because she is investing for the second longest time. c. Mr. Single would place the lowest value on Stock A because he is investing for the shortest time. d. all of the above are true e. Mr. Single, Ms. Double, and Mrs. Triple would all place the same value on Stock A f. none of the above are true (if using bubble sheet, just write "f" in to select f) 48 A practical and prevalent problem in financial management regarding potential conflicts of interests of various parties is known as a. ethics b. marginal conflicts c. limited liability d. the options principle e. agency problems 49 Fill in the Blank: The tax deductibility of expenses ______ their after tax cost (assume a tax paying firm). a. increases b. decreases c. has no effect on d. has an undetermined effect on e. either increases or decreases, depending on whether it is for a sole proprietorship or a corporation 50 The ______ is a measure of liquidity which excludes ______, generally the least liquid asset. A. current ratio; accounts receivable B. quick ratio; accounts receivable C. current ratio; inventory D. quick ratio; inventory E. none of the above 51 Which of the following would be classified as a use of cash? a. An increase in accounts payable. b. A decrease in inventories. c. A decrease in accounts receivable. d. An increase in retained earnings. e. An increase in inventories. 52 Santa's shippers Inc. Dividends per share are expected to grow indefinitely by 3 percent a year. Next year's dividend is $4.50 and the required rate of return (i.e. equity holder's opportunity cost of capital) is 8 percent. Assuming this is the best information available regarding the future of this firm, what would be the most economically rational value of the stock today (i.e. today's "price")? a. 56.25 b. 150.00 c. 90.00 d. 92.70 e. 45.00. 54 According to the payoff diagram that illustrates the payoff to bondholders and stockholders as a function of the value of the firm, a. bondholders would prefer more risk than stockholders, because their payoff is flat as the firm value increases beyond the debt level b. stockholders would prefer more risk than bondholders, because their payoff is flat as the firm value increases beyond the debt level c. stockholders would prefer more risk than bondholders, because their payoff increases dollar for dollar as the firm value increases beyond the debt level d. managers would prefer more risk than stockholders, because their payoff increases dollar for dollar as the firm value increases beyond the debt level 55 You currently earn $35,000 per year. If your salary grows at an assumed 3.5% average inflation rate, how much will your annual salary be in 25 years? a. $82,713 b. $79,916 c. $1,363,245 d. $1,445,960 e. Insufficient information to compute 57 You are considering the purchase of an investment that would pay your that you would be willing to pay for this investment? a. $15,819.27 b. $21,937.26 c. $32,415.85 d. $38,000.00 e. $52,815.71 58 When evaluating whether to proceed with a project, the firm should consider all of the following factors except which one? (i.e., Which is a "not relevant" versus "relevant" cash flow?) a. Changes in working capital attributable to the project b. previous expenditures associated with a market test to determine the feasibility of the project. c. the current market value of any equipment to be sold and replaced. d. the resulting difference in depreciation if the project involves a replacement decision. e. all of the above should be considered. 59 While doing a capital budgeting analysis you realized that the project would require an increase in inventory of $8,000. You should a. Ignore the inventory requirement because it is not an operating cash flow. b. Record the $8,000 at time zero as an additional benefit of taking the project. c. remember to depreciate the $8,000 over the depreciable life of the project. d. record the $8,000 at time zero as an additional cost of taking the project. e. none of the above are accurate. 60 Normal (a.k.a. conventional cash flow, i.e. costs followed by cash inflows) Projects Q and R have the same NPV when the discount rate is zero. However, Project Q has larger early cash flows that R. Therefore, we know that at all discount rates greater than zero Project Q will have a _________ NPV than R. (Hint: With larger early CFs, Q is effectively shorter term than R., Which is more sensitive to changes in interest rated in an NPV profile?) a. greater. b. smaller. c. equal, since they have the same NPV when the discount rate is zero. d. you need to know the interest rate to answer this question. e. you need to know the actual cash flows to answer this question. 61 A stock repurchase may be a signal that a. A firm's stock is overvalued. b. A firm's stock is undervalued c. A firm is short on funds. d. A firm's bonds are overvalued. e. none of the above are accurate. 62 A primary advantage associated with holding a diversified portfolio of financial assets is the reduction of risk. The relevant (aka important) risk a particular stock would contribute to a well diversified portfolio is the stock's : a. Total risk, as measured by the stock's beta. b. nondiversifiable, aka market risk, as measured by the stock's beta. c. nondiversifiable, aka market risk, as measured by the stock's standard deviation. d. unique risk, as measured by the stock's standard deviation. e. unique risk, as measured by the stock's beta. 63 Which of the following measures how 2 random variables (e.g.. Stock returns) move relative to each other? a. Standard deviation b. Variance c. Expected value d. Covariance e. None of the above 64 Which of the following would tend to make a financial market more efficient? a. Increase in taxes b. Increase in asymmetrical information c. Decrease in asymmetrical information d. Higher transaction costs e. Fewer competitors (participants) 65 The pecking order view of capital structure suggests that for financing new projects, firms prefer a. borrowing (debt) over issuing more equity. b. Internally generated funds over borrowing. c. Equity over debt. d. Paying out all of the firm's earnings as dividends to existing shareholders to maximize shareholders' wealth. e. Both a & b. 66 Efficient portfolios all have a. no risk b. Equal risk c. The highest return for a given risk d. The lowest risk for a given return e. Both c & d For questions 67-68, consider the following information for the BU Scholarship Investment Fund. The total investment in the fund is $1 million. STOCK INVESTMENT BETA EXPECTED RETURN A $200,000 1.5 25% B $300,000 -0.5 4% C $500,000 1.25 15% 67 Based on the allocation of dollars among the three stocks and their expected return, calculate the expected rate of return for the BU Scholarship Investment Fund. a. 14.67% b. 18.8 c. 13.7 d. 44.0 e. Insufficient information to compute 68 Based on the allocation of dollars among the three stocks and their respective betas, calculate the beta for the BU Scholarship Investment Fund. a. 3.25 b. 1.08 c. 2.25 d. 0.75 e. 0.775 f. Insufficient information to compute. (if using bubble sheet, just write "f" in to select f) 69 The existence of a risk-less security in the risk & return trade-off a. Does NOT influence investors preferences regarding which risky portfolio to hold. b. Results in investors all holding different portfolios of risky assets, depending on their individual risk preferences. c. Results in investors all holding the same portfolio of risky assets, which corresponds to the tangency point of the efficient portfolio frontier of risky assets and a line through the risk-less asset's return. d. None of the above. e. none of the above. 70 Assume the risk free rate is 4.5% and the expected return on the market is 14% . Based on the CAPM, what should be the rate of return for security having a beta of 1.25? A. 11.88% b. 16.38 c. 18.5 d. 17.5 e. 22 71 How many different portfolios could be formed with only 2 assets? a. 1 b. 2 c. 4 d. 16 e. An infinite number Questions 72-76 address a capital budgeting problem and are related, though there are assumptions made in sequential questions to avoid an initial error causing all subsequent responses to be in error. Consider the following for questions 72-76: A new product is being considered by Stanton Corp. An outlay of $40,000 is required for equipment and an additional net working capital investment of $1000 is required. The project is expected to have a 4 year life and the equipment will be depreciated on a straight line basis (equal annual amount) to a $4,000 book value. Producing the new product will reduce current manufacturing expenses by $5,000 annually and increase earnings (revenue) before depreciation and taxes by $6,000 annually. Stanton's marginal tax rate is 40 percent. Stanton expects the equipment will have a market salvage value of $10,000 at the end of 4 years. 72 What is the total cost at time zero of accepting this project? a. 40,000 b. 41,000 c. 30,000 d. 31,000 e. Insufficient information to answer 73 What is the depreciation each year over the machine's 4 year life? a. 9,000 b. 9,250 c. 10,000 d. 10,250 e. Insufficient information to answer. 74 Regardless of your answer to number 73 above, ASSUME DEPRECIATION = $8,000 per year. What is the project's after-tax operating cash flow during years 1-4 from the machine? A. 11,000 b. 3,000 c. 6,600 d. 14,600 e. 9,800 75 Assuming the equipment is sold for the expected $10,000 market salvage value at the end of its 4 year life, compute the after tax salvage value of the equipment. Note: this question addresses ONLY the after-tax salvage value, i.e., the after-tax cash flow from the sale of the equipment. This question does NOT address any other terminal year cash flows. a. 4,000 b. 6,000 c. 7,600 d. 10,000 e. none of the above 76 Regardless of your answer to number 74 & 75 above, ASSUME the project's after-tax operating cash flow during years 1-4 from the machine = $8,000 and the after tax salvage value = $7,000. What is the TOTAL cash flow expected from this project in the terminal year, including any initial investment amounts assumed to be recovered? Include all terminal year flows as well as the terminal year operating cash flow of 8,000 assumed. a. 7,000 b. 8,000 c. 15,000 d. 16,000 e. none of the above For questions 77-78, assume the following for a project under evaluation: ** The project's life is 4 years. ** The total time zero, initial cost of $55,000. ** The total net operating cash flow each year is $15,000. ** In addition to the terminal year operating cash flow, there is a non-operating, terminal year cash flow of $8,000. 77 If the cost of capital for a project of this risk is 7%, what is the project's NPV? Accept or reject the project? A. 123,000; accept b. 13,000; accept c. -56,911; reject d. 1,911; accept e. 13,355; accept 78 What is the project's IRR? Accept or reject the project? Again, assume the cost of capital for a project of this risk is 7%. A 7%; indifferent to accept or reject b. 8.4%; reject c. 8.4%; accept d. 15.75%, reject e. 15.75%: accept 79 If 2 projects are not mutually exclusive, then they are, by definition, independent projects. a. True b. False 80 All of the following are advantages of debt financing except which one? a. Interest is a tax-deductible expense b. It allows for the use of "other people's money" in financing a business c. It results in loss of ownership control of the business. d. The cost of debt financing is generally cheaper than equity financing. e. Owners do not have to share the potential gains of the business, since debt only requires repayment of the amount owed. 81 In 10 years you will begin receiving $155 dollars per year in perpetuity from your grandparent's family trust fund (first payment is exactly 10 years e. 1550 82 The semi-annual interest payments that corporate bonds in the U.S. typically pay are conventionally referred to as a. yield payments b. coupon payments c. call payments d. premium payments e. dividends 83 In which of the following situations would you get the largest benefit from diversifying your investment across two stocks? a. there is perfect positive correlation. b there is perfect negative correlation. c. there is modest positive correlation. d. there is modest negative correlation. e. there is not correlation. 84 While the covariance can very between vary large positive and negative numbers, the correlation coefficient varies only between a. -1 and 0 b. 0 and +1 c. -1and +1 d. 0 and 10 e. None of the above 85 Regarding expensing an asset's cost immediately versus capitalizing the cost and depreciating it over time: All else the same, given a choice, a tax paying firm would generally prefer to a. Capitalize the cost and depreciate, because this will make the current year's net income higher. b. capitalize the cost and depreciate, because it is best to extend the depreciation tax shield into the future as much as possible. c. expense the asset to capture the tax shield immediately and thereby maximize the present value of the tax shield. d. expense the asset to maximize the current year's net income. e. none of the above. 86 If a tax paying firm pays $100,000 in interest what is the after tax interest cost for firm assuming they are in a 40% tax bracket? a. 100,00 since interest is paid after taxes are paid and thus there is no tax shield b. 40,000 since there is a tax shield on interest c. 60,000 since there is a tax shield on interest d. insufficient information to compute 87 A capital budgeting decision tool, such as NPV, IRR, etc. should consider (use) a. all of the relevant cash flows b. only some of the relevant cash flows c. only the opportunity costs d. only the operating cash flows 88 One needs an interest rate to be able to calculate an IRR. A. true b. false 89 One needs an interest rate to be able to calculate an NPV. A. true b. false 90 If an investment has an NPV =0, then a. this means the investor earned no money b. this means the investor earned more than the required rate of return (i.e., cost of capital) c. this means the investor earned less than the required rate of return (i.e., cost of capital) d. this means the investor earned a return just equal to the required rate of return (i.e. the cost of capital rate at which the NPV was calculated)      View Full Posting Details
https://brainmass.com/economics/bonds/finance-problems-328703
CC-MAIN-2018-51
refinedweb
5,381
66.64
Palm::MaTirelire::SavedPreferences - Handler for Palm system preferences use Palm::MaTirelire::SavedPreferences; The MaTirelire::SavedPreferences PRC handler is a helper class for the Palm::PDB package. It parses Palm system saved preferences resources database, ignore (don't modify) all preferences except Ma Tirelire v1 and v2 ones.. To be done XXX... This module have to be reworked to be more generic and each application, that use system preferences, should attach to it like it does for PRC or PDB handler. It would be put in Palm:: namespace instead. Maxime Soulé, <max@Ma-Tirelire.net> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.5 or, at your option, any later version of Perl 5 you may have available.
http://search.cpan.org/~maxs/Palm-MaTirelire-1.12/lib/Palm/MaTirelire/SavedPreferences.pm
CC-MAIN-2013-20
refinedweb
134
56.15
Functional Reactive Programming with Elm: An Introduction This article was peer reviewed by Moritz Kröger, Mark Brown and Dan Prince. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! Elm is a functional programming language that has been attracting quite a bit of interest lately. This article explores what it is and why should you care. Elm’s current main focus is making front-end development simpler and more robust. Elm compiles to JavaScript so it can be used for building applications for any modern browser. Elm is statically typed language with type inference. Type inference means that we don’t need to declare all the types ourselves, we can let the compiler infer many of the types for us. For example by writing one = 1, the compiler knows that one is an integer. Elm is an almost pure functional programming language. Elm builds on top of many functional pattern like pure views, referential transparency, immutable data and controlled side effects. It is closely related to other ML languages like Haskell and Ocaml. Elm is reactive. Everything in Elm flows through signals. A signal in Elm carries messages over time. For example clicking on a button would send a message over a signal. You can think of signals to be similar to events in JavaScript, but unlike events, signals are first class citizens in Elm that can be passed around, transformed, filtered and combined. Elm Syntax Elm syntax resembles Haskell, as both are ML family languages. greeting : String -> String greeting name = "Hello" ++ name This is a function that takes a String and returns another String. Why Use Elm? To understand why should you care about Elm, let’s talk about some front-end programming trends in the last couple of years: Describe State Instead of Transforming the DOM Not long ago we were building applications by mutating the DOM manually (e.g. using jQuery). As our application grows we introduce more states. Having to code the transformations between all of them exponentially grows the complexity of our application making it harder to maintain. Instead of doing this, libraries like React have popularised the notion of focusing on describing a particular DOM state and then let the library handle the DOM transformations for us. We only focus on describing the discreet DOM states and not how we get there. This leads to substantially less code to write and maintain. Events and Data Transformation When it comes to application state, the common thing to do was to mutate the state ourselves e.g. adding comments to an array. Instead of doing this we can only describe how the application state needs to change based on events, and let something else apply those transformations for us. In JavaScript, Redux has made popular this way of building applications. The benefit of doing this is that we can write ‘pure’ functions to describe these transformations. These functions are easier to understand and test. An added benefit is that we can control where our application state is changed, thus making our applications more maintainable. Another benefit is that our views don’t need to know how to mutate state, they only need to know what events to dispatch. Unidirectional Data Flow Another interesting trend is having all our application events flow in a unidirectional way. Instead of allowing any component talk to any other component, we send message through a central message pipeline. This centralized pipeline applies the transformations we want and broadcasts the changes to all the parts of our application. Flux is an example of this. By doing this we gain more visibility of all interactions that happen in our application. Immutable Data Mutable data makes it very hard to restrict where it can be changed, as any component with access to it could add or remove something. This leads to unpredictability, as state could change anywhere. By using immutable data we can avoid this, by tightly controlling where application state is changed. Combining immutable data with functions that describe the transformations gives us a very robust workflow, and immutable data helps us enforce the unidirectional flow by not letting us change state in unexpected places. Centralized State Another trend in front-end development is the use of a centralized ‘atom’ for keeping all state. Meaning that we put all state in one big tree instead of having it scattered across components. In a typical application we usually have global application state (e.g. a collection of users) and component specific state (e.g. the visibility state of a particular component). It is controversial whether storing both kinds of state in one place is beneficial or not. But at least keeping all application state in one place has a big benefit, which is providing a consistent state across all components in our application. Pure Components Yet another trend is the use of pure components. What this means is that given the same inputs a component will always render the same output. There are no side effects happening inside these components. This makes understanding and testing our components far easier than before, as they are more predictable. Back to Elm These are all great patterns that make an application more robust, predictable, and maintainable. However to use them correctly in JavaScript we need to be diligent to avoid doing some things in the wrong places (e.g. mutating state inside a component). Elm is a programming language that has been created from the beginning with many of these patterns in mind. It makes it very natural to embrace and use them, without worrying about doing the wrong things. In Elm we build applications by using: - Immutable data - Pure views that describe the DOM - Unidirectional data flow - Centralized state - Centralized place where mutations to data are described - Contained side effects Safety Other big gain of Elm is the safety that it provides. By completely avoiding the possibility of values being null, it forces us to handle all alternative pathways in an application. For example, in JavaScript (and many other languages) you can get run time errors by doing something like: var list = [] list[1] * 2 This will return NaN in JavaScript, which you need to handle to avoid a runtime error. If you try something similar in Elm: list = [] (List.head list) * 2 The compiler will reject this, telling you that List.head list returns a Maybe type. A Maybe type may or may not contain a value, we must handle the case where the value is Nothing. (Maybe.withDefault 1 (List.head list)) * 2 This provides us with a lot of confidence in our applications. It is very rare to see runtime errors in Elm applications. Sample Application To get a clearer picture of the Elm language and how applications are built with it, let’s develop a tiny application that shows an HTML element moving across a page. You can try this application by going to and pasting the code there. import Html import Html.Attributes exposing (style) import Time name : Html.Html name = Html.text "Hello" nameAtPosition : Int -> Html.Html nameAtPosition position = Html.div [ style [("margin-left", toString position ++ "px")] ] [ name ] clockSignal : Signal Float clockSignal = Time.fps 20 modelSignal : Signal Int modelSignal = Signal.foldp update 0 clockSignal update : Float -> Int -> Int update _ model = if model > 100 then 0 else model + 1 main : Signal Html.Html main = Signal.map nameAtPosition modelSignal Let’s go over it piece by piece: import Html import Html.Attributes exposing (style) import Time First we import the modules we will need in the application. name : Html.Html name = Html.text "Hello" name is a function that returns an Html element containing the text Hello. nameAtPosition : Int -> Html.Html nameAtPosition position = Html.div [ style [("margin-left", toString position ++ "px")] ] [ name ] nameAtPosition wraps name in a div tag. Html.div is a function that returns a div element. This function take an integer position as a unique parameter. The first parameter of Html.div is a list of HTML attributes. The second parameter is a list of children HTML elements. An empty div tag would be Html.div [] []. style [("margin-left", toString position ++ "px")] creates a style HTML attribute, which contains margin-left with the given position. This will end as style="margin-left: 11px;" when called with position 11. So in summary nameAtPosition renders Hello with a margin on the left. clockSignal : Signal Float clockSignal = Time.fps 20 Here we create a signal that streams a message 20 times per second. This is a signal of floats. We will use this as a heartbeat for refreshing the animation. modelSignal : Signal Int modelSignal = Signal.foldp update 0 clockSignal clockSignal gives us a heartbeat, but the messages it sends through the signal are not useful, the payload of clockSignal is just the delta between each message. What we really want is a counter (i.e 1, 2, 3, etc). To do this we need to keep state in our application. That is take the last count we have and increase it every time clockSignal triggers. Signal.foldp is how you keep state in Elm applications. You can think of foldp in a similar way to Array.prototype.reduce in JavaScript, foldp takes an accumulation function, an initial value and a source signal. Each time the source signal streams an event, foldp calls the accumulation function with the previous value and holds onto the returned value. So in this case, each time clockSignal streams a message, our application calls update with the last count. 0 is the initial value. update : Float -> Int -> Int update _ model = if model > 100 then 0 else model + 1 update is the accumulation function. It takes a Float which is the delta coming from clockSignal as first parameter. An integer which is the previous value of the counter as second parameter. And returns another integer which is the new value of the counter. If the model (previous value of the counter) is more than 100 we reset it to 0, otherwise just increase it by 1. main : Signal Html.Html main = Signal.map nameAtPosition modelSignal Finally, every application in Elm start from the main function. In this case we map the modelSignal we created above through the nameAtPosition function. That is, each time modelSignal streams a value we re-render the view. nameAtPosition will receive the payload from modelSignal as first parameter, effectively changing the margin-left style of the div twenty times per second, so we can see the text moving across the page. The application we just built above demonstrates: - HTML in Elm - Using signals - Keeping state the functional way - Pure views If you have used Redux you will note that there are several parallels between Elm and Redux. For example update in Elm is quite similar to reducers in Redux. This is because Redux was heavily inspired by the Elm architecture. Conclusion Elm is an exciting programming language which embraces great patterns for building solid applications. It has a terse syntax, with a lot of safety built-in which avoids runtime errors. It also has a great static type system that helps a lot during refactoring and doesn’t get in the way because it uses type inference. The learning curve on how to structure an Elm application is not trivial, as applications using functional reactive programming are different to what we are used to, but is well worth it. Additional Resources - When building large applications in Elm it is a good practice to use the Elm architecture. See this tutorial for more information. - The Elm Slack community is an excellent place to ask for help and advice. - The Pragmatic Studio videos on Elm are an excellent resource to get you started. - Elm-tutorial is a guide I’m working on to teach how to build web applications with Elm.
https://www.sitepoint.com/functional-reactive-programming-elm-introduction/
CC-MAIN-2022-21
refinedweb
1,974
65.52
Ok I have an assignment due.. and I need help tackling this assignment! Here is the info I have.. Im just really confused on how to start this project... For these two labs you are going to construct functionality to create a simple address book. Conceptually the address book uses a structure to hold information about a person and an array of structures to hold multiple persons (people). Visually think of the address book like so: When you add a person to the Address Book you add a structure with the information about the person to the end of the array: When you get a person, you get the first person in the address book. With each successive call to get a person, you get the next person in the array. For instance the first call to get a person you will get "Joe Smith" when you make the second call to get a person you would get "Jane Doe" so on and so forth. After you get the last person from the array the next call to get a person will start over at the beginning ("Joe Smith" in this case). Details: 1.) Create a project called addressBook 2.) Add a header file and cpp file for your project 3.) All of your definitions should go in the header file. 4.) In the header file create the definition for your structure. Call it PERSON. 5.) Your structure should have fields for first name, last name, address, and optionally a phone number. 6.) Inside the cpp file you will create the functionality for your address book 7.) Inside the cpp file declare a global array of 10 PERSONS to hold all of the records in your address book call it people. Use a const called MAXPEOPLE to set the size of the array. Put the const in the header file. 8.) You are probably going to want to declare an integer variable to keep track of where you are at in the array. 9.) Create functions addPerson, getPerson 10.) These functions should take as arguments a reference to a PERSON structure. 11.) The addPerson method should copy the structure passed to it to the end of the array 12.) the getPerson should start at array element 0 and with each successive call return the next person in the array. 13.) Create overloaded findPerson functions. One function should take only the persons last name 14.) The other function should take both the persons last and first names. 15.) All code for the functions should be in the cpp file 16.) From main write functionality that will test your address book code All of the functions that are part of the address book should take a reference to a PERSON structure as one of its arguments. This is not necessarily the only argument for each function but should be one of them. Structures and Arrays Page 1 of 1 Need help! 7 Replies - 3725 Views - Last Post: 07 October 2008 - 05:56 PM #1 Structures and Arrays Posted 06 October 2008 - 08:29 PM Replies To: Structures and Arrays #2 Re: Structures and Arrays Posted 06 October 2008 - 08:35 PM The site's policy is to show some effort before you're helped, but I'll give you an idea of how to proceed. Hope that helps! class AddressBook { //What exactly is an address book? //it is a listing of people's names, phone # // and addresses //So we're looking at a three dimensional array, yuck right? //STL Maps would be ideal here since you can associate second // tier information with the person's name as the key of the Map //*******ALTERNATE IDEA******* //Define a struct or another class called Person, it will //contain person's name, address, etc... see below } struct Person //you can make an array of these { char* name; // or string, depends on what you feel like int phoneNum; string address; // or a char pointer/array, again up to you } Hope that helps! #3 Re: Structures and Arrays Posted 06 October 2008 - 08:36 PM killakev, on 6 Oct, 2008 - 11:29 PM, said: Ok I have an assignment due.. and I need help tackling this assignment! Here is the info I have.. Im just really confused on how to start this project... Do you know how to make a struct? I would suggest that you start there & if you need help with making one, I used one in my store code snippet. If you have any questions or errors, please ask! #4 Re: Structures and Arrays Posted 07 October 2008 - 10:38 AM #include <iostream> using namespace std; bool addPerson (PERSON p); bool getPerson (PERSON &p); bool findPerson (char *frame, PERSON &p); bool findPerson (char *frame, char *lName, PERSON &p); void printBook(); struct PERSON { char fName[100]; char lName[100]; } const int MAXPEOPLE 3 PERSON people[MAXPEOPLE]; int head = 0; int tail = 0; bool addPerson (PERSON p) { if (head) = MAXPEOPLE) { return false; } people [head] = p; head ++; return true; } bool getPerson(PERSON &p) { if (head == 0) return false; if(tail) = MAXPEOPLE) { tail = 0; } P= people[tail]; tail ++; return true; } bool findPerson(char *lName, PERSON &p) { for (int i=0; i<=head; i++) { if(!stricmp(lName, people[i].lName) { p = people[]; return true; } } return false; } void printBook() { for (int i=0; i<=head; i++) { cout<<people[i].fName<<endl; cout<<people[i].lName<<endl; } } int main() { PERSON peeps[]= {{"Joe", "Blow"}, {"Sam", "Smith"}, {"Bill","Jones"}}; bool status = addPerson(peeps[0]); PERSON p; bool status= getPerson(p); if (status == true) cout<<p.lName<<" "<<p.fName; else; } ok heres what I have done so far.. I tried the code and its not working can you go over for it me thnx #5 Re: Structures and Arrays Posted 07 October 2008 - 11:55.. This post has been edited by Sadaiy: 07 October 2008 - 11:57 AM #6 Re: Structures and Arrays Posted 07 October 2008 - 01:35 PM I mentioned 3D arrays for a reason. Let's say the assignment does not specify what type of data structure to use. Let's say the OP only had dealt with arrays of primitives. He or she may decide that the best way to go is with a 3D array to store information. It is an example of the planning process that helps produce better code. #7 Re: Structures and Arrays Posted 07 October 2008 - 05:50 PM Sadaiy, on 7 Oct, 2008 - 11:55 AM,.. It only requires a header file... Yes i am doing this for c++ and I would like some input on my code and why it returns so many errors #8 Re: Structures and Arrays Posted 07 October 2008 - 05:56 PM Line 2 says it wants header and a CPP file... you can't put int main() in a header file it must be in the cpp, so put your code for your header also so i can see all the code... you didn't put up your class/ head file code.. also i think you should put the struct in the header.. Page 1 of 1
http://www.dreamincode.net/forums/topic/66528-structures-and-arrays/
CC-MAIN-2016-50
refinedweb
1,178
70.02
Now, let’s begin the fun part of this journey -- setting up our first masterpage so that we can begin to experiment (play) with HTML5 and CSS3. The first step is to create a copy of your preferred masterpage. Whether you prefer to start with “minimal.master”, v4.master, or an already branded masterpage of your own, you will want to create a copy to which you will be making some modifications. I perform most of my masterpage edits in SharePoint Designer 2010 so that I have access to the SharePoint Server and the results are at least reasonably rendered in the split-screen mode of the editor. Because of the limitations of that editor (not understanding the HTML5 schema and removing potentially risky code – as in all your HTML5 and CSS3 code) I set the default editor for CSS inside that editor as Expression Web 4 with SP1. This provides me the intellisense when working with the new CSS3 schema in my style sheets. In the illustration above, you can see one of the wonderful changes that came with HTML5. The DOCTYPE declaration is easy to remember. Browse to the DOCTYPE declaration in your new masterpage and change the line to read “<!DOCTYPE HTML>” (no quotes). That’s really al we need to alert the browser that we are interested in using the new HTML5 schema when rendering our page – EXCEPT for one more tiny “gotcha” . If you scroll down into the <Head> tag of your page, you should find this line –’ <meta http-’. This line instructs the browser to use IE8 mode when rendering your content. As mentioned previously, IE9 or higher is required to take advantage of our HTML5 markup. What we need to do is change that “IE8” to “IE9” or “IE10” depending on the browser you will be targeting with your page. I chose “IE9” for the purposes of this post and the result looks like this illustration. This is where the more advanced SharePoint Developers may choose to diverge a little from the path, but the concepts remain the same whether you are deploying these resources through the UI, SharePoint Developer, or Visual Studio. What we will be doing next is creating our folders for JavaScript libraries and files, CSS Style sheets, and any images that we need. I have chosen to NOT deploy these resources to the SharePoint Root (14) folder as a farm solution. My reasoning was to maintain compatibility with SharePoint Online and allow “tenant” solution activation and use of the masterpage solution from anywhere in the Site Collection. As a result, I used Visual Studio and provisioned my folders under "the “Site Assets” library. Regardless of your method of deployment, if you want to maintain compatibility with SharePoint Online, and/or reap the benefits of SharePoint monitoring and controlling your scripts that is available only for “sandboxed” solutions, then your resources should be deployed as a site scoped solution and stored in the content database. Using your preferred method (UI,SPD, VS2010), Deploy your content resources to the newly created folders. This will allow us to point our links and references to these assets from within our masterpage. You will have a different set of resources, but the concept remains the same. There is one file that you may find particularly interesting – test.js. I created this file to allow a way to determine if I was successfully accessing these resources from my masterpage. Because browsers generally fail “gracefully”, we want a way to determine whether our scripts have errors or whether they are perhaps not being included at all in the rendered content. I chose a very simple JavaScript alert function which I can call from my pages as a test. The simple JavaScript is below: function doit(){ alert("Hi there ... this loaded from the SiteAssets/js Folder"); }We’ll get into the details later, but, by including this link and calling my function, I can test whether my function was loaded successfully by SharePoint <SharePoint:ScriptLink Now we need to change our masterpage to include all those resources that we uploaded to our folders. If you followed my folder structures, you can use the examples from my masterpage as a guide in defining the paths to your content. Custom CSS Registration First we’ll reference our CSS files. By including <SharePoint:CssRegistration within the head tag of our masterpage, we are instructing SharePoint to insert our custom CSS file, pTech.css , after the default coreV4.css. This allows us to override the default styles with our definitions. Certain values may need an “!important” attribute to override the same attribute in coreV4.css. Linking JavaScript Files and Libraries The SharePoint namespace provides us a method to link our custom JavaScript to SharePoint – SharePoint:ScriptLink. The syntax is as shown below. Remember, we are storing these assets in the content database, so the “~site/” placeholder communicates to SharePoint to retrieve these assets from there. Linking images In most cases, you will probably be linking your images from your custom CSS file. In my case, that was the pTech.css file. In my experience, the most reliable way to access these images is similar to the approach below, which is the style that sets the pattern behind the s4-Title area. .s4-titletext { background-image:url('../../SiteAssets/Images/pattern3.png'); } STEP6 –Activating & setting your new default masterpage. If you used Visual Studio and did not activate the solution, you will need to activate it now from the Site Solutions Gallery. And if you did not set it as the Site Default Masterpage in the Web Solution Package, you will need to do that now. The easiest way to set and reset the Default Masterpage is through SharePoint Designer 2010. Select Master Pages from the left navigator, and in the right pane select the row by clicking anywhere except the file link. Once the row is selected, right-click the row and chose Select as Default Master Page from the drop down. if you browse to the home page in your browser and refresh the display, your new masterpage should be active and any new styles should be applied. If you have any difficulties, check to make sure the solution (if you deployed it as a solution) is activated. Then verify that you paths to your script files are being accessed. You can use the Test.js doIt() to test this. Finally, using your favorite browser tools, such as IE Developer Tools, verify that your styles are being applied and not overridden by the defaults. If time allows, I’ll package up modified solutions and WSP files of both the v4.Master and minimal.Master masterpages to make the effort a little easier for those folks facing challenges.
http://geekswithblogs.net/KunaalKapoor/archive/2012/04/19/changing-sharepoint-with---css3-html5-and-jquery.aspx
CC-MAIN-2014-52
refinedweb
1,126
61.46