JavaScript Closures Demystified

Closures are a somewhat advanced, and often misunderstood, feature of the JavaScript language. Simply put, closures are objects that contain a function and a reference to the environment in which the function was created. However, in order to fully understand closures, two other features of the JavaScript language must first be understood: first-class functions and inner functions.

First-Class Functions

In programming languages, functions are considered first-class citizens if they can be manipulated like any other data type. For example, first-class functions can be constructed at runtime and assigned to variables. They can also be passed to, and returned by, other functions. In addition to meeting these criteria, JavaScript functions also have their own properties and methods.

The following example shows some of the capabilities of first-class functions. In the example, two functions are created and assigned to the variables "foo" and "bar". The function stored in "foo" displays a dialog box, while "bar" simply returns whatever argument is passed to it. The last line of the example does several things. First, the function stored in "bar" is called with "foo" as its argument. "bar" then returns the "foo" function reference. Finally, the returned "foo" reference is called, causing "Hello World!" to be displayed.

var foo = function() {
  alert("Hello World!");
};

var bar = function(arg) {
  return arg;
};

bar(foo)();

Inner Functions

Inner functions, also referred to as nested functions, are functions that are defined inside another function (referred to as the outer function). Each time the outer function is called, an instance of the inner function is created. The following example shows how inner functions are used. In this case, add() is the outer function. Inside of add(), the doAdd() inner function is defined and called.
function add(value1, value2) {
  function doAdd(operand1, operand2) {
    return operand1 + operand2;
  }

  return doAdd(value1, value2);
}

var foo = add(1, 2);  // foo equals 3

One important characteristic of inner functions is that they have implicit access to the outer function's scope. This means that the inner function can use the variables, arguments, and so on, of the outer function. In the previous example, the "value1" and "value2" arguments of add() were passed to doAdd() as the "operand1" and "operand2" arguments. However, this is unnecessary because doAdd() has direct access to "value1" and "value2". The previous example has been rewritten below to show how doAdd() can use "value1" and "value2".

function add(value1, value2) {
  function doAdd() {
    return value1 + value2;
  }

  return doAdd();
}

var foo = add(1, 2);  // foo equals 3

Creating Closures

A closure is created when an inner function is made accessible from outside of the function that created it. This typically occurs when an outer function returns an inner function. When this happens, the inner function maintains a reference to the environment in which it was created. This means that it remembers all of the variables (and their values) that were in scope at the time. The following example shows how a closure is created and used.

function add(value1) {
  return function doAdd(value2) {
    return value1 + value2;
  };
}

var increment = add(1);
var foo = increment(2);  // foo equals 3

There are a number of things to note about this example.

- The add() function returns its inner function doAdd(). By returning a reference to an inner function, a closure is created.
- "value1" is a local variable of add(), and a non-local variable of doAdd(). Non-local variables refer to variables that are neither in the local nor the global scope. "value2" is a local variable of doAdd().
- When add(1) is called, a closure is created and stored in "increment". In the closure's referencing environment, "value1" is bound to the value one.
Variables that are bound are also said to be closed over. This is where the name closure comes from.
- When increment(2) is called, the closure is entered. This means that doAdd() is called, with the "value1" variable holding the value one. The closure can essentially be thought of as creating the following function.

function increment(value2) {
  return 1 + value2;
}

When to Use Closures

Closures can be used to accomplish many things. They are very useful for things like configuring callback functions with parameters. This section covers two scenarios where closures can make your life as a developer much simpler.

Working With Timers

Closures are useful when used in conjunction with the setTimeout() and setInterval() functions. To be more specific, closures allow you to pass arguments to the callback functions of setTimeout() and setInterval(). For example, the following code prints the string "some message" once per second by calling showMessage().

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Closures</title>
  <meta charset="UTF-8" />
  <script>
    window.addEventListener("load", function() {
      window.setInterval(showMessage, 1000, "some message<br />");
    });

    function showMessage(message) {
      document.getElementById("message").innerHTML += message;
    }
  </script>
</head>
<body>
  <span id="message"></span>
</body>
</html>

Unfortunately, Internet Explorer does not support passing callback arguments via setInterval(). Instead of displaying "some message", Internet Explorer displays "undefined" (since no value is actually passed to showMessage()). To work around this issue, a closure can be created which binds the "message" argument to the desired value. The closure can then be used as the callback function for setInterval(). To illustrate this concept, the JavaScript code from the previous example has been rewritten below to use a closure.
window.addEventListener("load", function() {
  var showMessage = getClosure("some message<br />");

  window.setInterval(showMessage, 1000);
});

function getClosure(message) {
  function showMessage() {
    document.getElementById("message").innerHTML += message;
  }

  return showMessage;
}

Emulating Private Data

Many object-oriented languages support the concept of private member data. However, JavaScript is not a pure object-oriented language and does not support private data. But it is possible to emulate private data using closures. Recall that a closure contains a reference to the environment in which it was originally created, which is now out of scope. Since the variables in the referencing environment are only accessible from the closure function, they are essentially private data.

The following example shows a constructor for a simple Person class. When each Person is created, it is given a name via the "name" argument. Internally, the Person stores its name in the "_name" variable. Following good object-oriented programming practices, the method getName() is also provided for retrieving the name.

function Person(name) {
  this._name = name;

  this.getName = function() {
    return this._name;
  };
}

There is still one major problem with the Person class. Because JavaScript does not support private data, there is nothing stopping somebody else from coming along and changing the name. For example, the following code creates a Person named Colin, and then changes its name to Tom.

var person = new Person("Colin");

person._name = "Tom";
// person.getName() now returns "Tom"

Personally, I wouldn't like it if just anyone could come along and legally change my name. In order to stop this from happening, a closure can be used to make the "_name" variable private. The Person constructor has been rewritten below using a closure. Note that "_name" is now a local variable of the Person constructor instead of an object property.
A closure is formed because the outer function, Person(), exposes an inner function by creating the public getName() method.

function Person(name) {
  var _name = name;

  this.getName = function() {
    return _name;
  };
}

Now, when getName() is called, it is guaranteed to return the value that was originally passed to the constructor. It is still possible for someone to add a new "_name" property to the object, but the internal workings of the object will not be affected as long as they refer to the variable bound by the closure. The following code shows that the "_name" variable is, indeed, private.

var person = new Person("Colin");

person._name = "Tom";
// person._name is "Tom" but person.getName() returns "Colin"

When Not to Use Closures

It is important to understand how closures work and when to use them. It is equally important to understand when they are not the right tool for the job at hand. Overusing closures can cause scripts to execute slowly and consume unnecessary memory. And because closures are so simple to create, it is possible to misuse them without even knowing it. This section covers several scenarios where closures should be used with caution.

In Loops

Creating closures within loops can have misleading results. An example of this is shown below. In this example, three buttons are created. When "button1" is clicked, an alert should be displayed that says "Clicked button 1". Similar messages should be shown for "button2" and "button3". However, when this code is run, all of the buttons show "Clicked button 4". This is because, by the time one of the buttons is clicked, the loop has finished executing, and the loop variable has reached its final value of four.
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Closures</title>
  <meta charset="UTF-8" />
  <script>
    window.addEventListener("load", function() {
      for (var i = 1; i < 4; i++) {
        var button = document.getElementById("button" + i);

        button.addEventListener("click", function() {
          alert("Clicked button " + i);
        });
      }
    });
  </script>
</head>
<body>
  <input type="button" id="button1" value="One" />
  <input type="button" id="button2" value="Two" />
  <input type="button" id="button3" value="Three" />
</body>
</html>

To solve this problem, the closure must be decoupled from the actual loop variable. This can be done by calling a new function, which in turn creates a new referencing environment. The following example shows how this is done. The loop variable is passed to the getHandler() function. getHandler() then returns a closure that is independent of the original "for" loop.

function getHandler(i) {
  return function handler() {
    alert("Clicked button " + i);
  };
}

window.addEventListener("load", function() {
  for (var i = 1; i < 4; i++) {
    var button = document.getElementById("button" + i);

    button.addEventListener("click", getHandler(i));
  }
});

Unnecessary Use in Constructors

Constructor functions are another common source of closure misuse. We've seen how closures can be used to emulate private data. However, it is overkill to implement methods as closures if they don't actually access the private data. The following example revisits the Person class, but this time adds a sayHello() method which doesn't use the private data.

function Person(name) {
  var _name = name;

  this.getName = function() {
    return _name;
  };

  this.sayHello = function() {
    alert("Hello!");
  };
}

Each time a Person is instantiated, time is spent creating the sayHello() method. If many Person objects are created, this becomes a waste of time. A better approach would be to add sayHello() to the Person prototype. By adding to the prototype, all Person objects can share the same method.
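That sharing can be checked directly. The following is my own quick sketch, not part of the original article: methods assigned inside the constructor are a fresh closure per instance, while a method placed on the prototype is a single shared function object.

```javascript
// Constructor-created methods are per-instance closures;
// prototype methods are one function object shared by all instances.
function Person(name) {
  var _name = name;
  this.getName = function() { return _name; };
}

Person.prototype.sayHello = function() {
  return "Hello!";
};

var colin = new Person("Colin");
var tom = new Person("Tom");

console.log(colin.getName === tom.getName);   // false: a new closure per instance
console.log(colin.sayHello === tom.sayHello); // true: shared via the prototype
console.log(colin.getName());                 // "Colin"
```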
This saves time in the constructor by not having to create a closure for each instance. The previous example is rewritten below with the extraneous closure moved into the prototype.

function Person(name) {
  var _name = name;

  this.getName = function() {
    return _name;
  };
}

Person.prototype.sayHello = function() {
  alert("Hello!");
};

Things to Remember

- Closures contain a function and a reference to the environment in which the function was created.
- A closure is formed when an outer function exposes an inner function.
- Closures can be used to easily pass parameters to callback functions.
- Private data can be emulated by using closures. This is common in object-oriented programming and namespace design.
- Closures should not be overused in constructors. Adding to the prototype is a better idea.
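The namespace-design point in the summary is usually realized with the classic module pattern. The sketch below is mine, not from the article; counterModule is a hypothetical name.

```javascript
// Module pattern: an immediately invoked function returns an object
// whose methods close over private state, forming one shared namespace.
var counterModule = (function() {
  var count = 0; // private; reachable only through the closures below

  return {
    increment: function() { count += 1; return count; },
    current: function() { return count; }
  };
})();

console.log(counterModule.increment()); // 1
console.log(counterModule.increment()); // 2
console.log(counterModule.current());   // 2
console.log(counterModule.count);       // undefined: the variable is private
```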
http://www.sitepoint.com/javascript-closures-demystified/
Computer science in JavaScript: Credit card number validation

Credit card fields on web sites have become just about as ubiquitous as sign-in forms. One of my favorite moments in computer science was learning the algorithm for determining a valid credit card number. The process doesn't involve making a call to a server or checking accompanying information, just a basic algorithm that uses a check digit to determine if the credit card number is in the correct format.

Identifier format

Credit card numbers, just like other magnetic stripe cards, have an identifier format that is defined in ISO/IEC 7812. The format for such identifiers is made up of three parts:

- Issuer Identification Number (IIN) – an identifier indicating the institution that issued the number. The first digit indicates the type of institution issuing the number (for instance, banks are either 4 or 5, so all credit card numbers begin with one of these). The IIN contains six digits.
- Account Number – an identifier between 6 and 12 numbers long, inclusive.
- Check Digit – a single digit to validate the sum of the identifier.

Identifiers of this format can be between 13 and 19 digits long and used for any number of purposes, though most people deal strictly with credit card numbers.

Luhn algorithm

Hans Peter Luhn, a scientist at IBM, developed the Luhn algorithm to protect against unintentional mistakes in numeric identifiers (it is not a secure algorithm). This algorithm is the basis for magnetic stripe identification cards, such as credit cards, as defined in ISO/IEC 7812.

The Luhn algorithm itself is quite simple and straightforward. Starting at the last digit in the identifier (the check digit), double the value of every other digit. If a doubled digit is greater than nine, divide it by 10 and add the remainder to one. Add these values together with the unchanged values of every other digit to get a sum. If this sum is evenly divisible by 10, then the number is valid.
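The doubling step can be sanity-checked in isolation. This is my own sketch (the helper name reduceDoubled is not from the article): for a doubled digit between 10 and 18, dividing by 10 and adding the remainder to one gives the same result as the common "subtract nine" formulation.

```javascript
// Reduce a doubled Luhn digit: if doubling produced a two-digit
// value (10..18), (d % 10) + 1 equals d - 9.
function reduceDoubled(d) {
  return d > 9 ? (d % 10) + 1 : d;
}

console.log(reduceDoubled(16)); // 7, from 8 doubled (16 -> 1 + 6)
console.log(reduceDoubled(18)); // 9, from 9 doubled
console.log(reduceDoubled(8));  // 8, a single digit is unchanged
```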
The check digit serves the purpose of ensuring that the identifier will be evenly divisible by 10. This can be written in JavaScript as follows:

//Luhn algorithm identifier verification
//MIT Licensed
function isValidIdentifier(identifier) {

    var sum = 0,
        alt = false,
        i   = identifier.length - 1,
        num;

    if (identifier.length < 13 || identifier.length > 19){
        return false;
    }

    while (i >= 0){

        //get the next digit
        num = parseInt(identifier.charAt(i), 10);

        //if it's not a valid number, abort
        if (isNaN(num)){
            return false;
        }

        //if it's an alternate number...
        if (alt) {
            num *= 2;
            if (num > 9){
                num = (num % 10) + 1;
            }
        }

        //flip the alternate bit
        alt = !alt;

        //add to the rest of the sum
        sum += num;

        //go to next digit
        i--;
    }

    //determine if it's valid
    return (sum % 10 == 0);
}

This method accepts a string identifier as its argument and returns a Boolean value indicating if the number it represents is valid. The argument is a string to allow easier parsing of each digit and to allow leading zeroes to be significant. Sample usage (sorry, no real numbers here):

if (isValidIdentifier("0123765443210190")){
    alert("Valid!");
}

Yes, I did test this on my own credit card numbers as a test. No, you can't have those sample files.

Validation, not verification

Keep in mind that the Luhn algorithm is a validating algorithm, not a verifying one. Just because an identifier is in a valid format doesn't mean that it's a real identifier that's currently in use. The Luhn algorithm is best used to find unintentional errors in identifiers rather than providing any level of security. As with other parts of my computer science in JavaScript series, I'm not condoning its usage in real web applications for any reason, just introducing it as an interesting computer science topic that can be implemented in JavaScript. This code, along with others from this series, is available on my GitHub Computer Science in JavaScript project.
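A natural companion to validation is generating the check digit for a partial identifier. The sketch below is mine (computeCheckDigit is a hypothetical name, not part of the original code); it mirrors the validator's doubling logic, except that the rightmost digit of the partial number is the one that gets doubled, since the check digit has not been appended yet.

```javascript
// Given the leading digits of an identifier (without its check digit),
// compute the check digit that makes the full number pass the Luhn test.
function computeCheckDigit(partial) {
  var sum = 0,
      alt = true, // the rightmost given digit will sit next to the check digit
      i = partial.length - 1,
      num;

  while (i >= 0) {
    num = parseInt(partial.charAt(i), 10);
    if (alt) {
      num *= 2;
      if (num > 9) {
        num = (num % 10) + 1;
      }
    }
    alt = !alt;
    sum += num;
    i--;
  }

  return ((10 - (sum % 10)) % 10).toString();
}

// The article's sample "0123765443210190" minus its final digit:
console.log(computeCheckDigit("012376544321019")); // "0"
```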
https://humanwhocodes.com/blog/2009/08/04/computer-science-in-javascript-credit-card-number-validation/
This example demonstrates how to delete cells from a worksheet. To do this, use the Worksheet.DeleteCells method. Pass a cell or cell range that you want to delete, and the DeleteMode.ShiftCellsUp or DeleteMode.ShiftCellsLeft enumeration member to specify how to shift other cells.

using DevExpress.Spreadsheet;
// ...

IWorkbook workbook = spreadsheetControl1.Document;
Worksheet worksheet = workbook.Worksheets[0];
// Delete a sample cell range (here "B2:C3") and shift the remaining cells up.
worksheet.DeleteCells(worksheet.Range["B2:C3"], DeleteMode.ShiftCellsUp);

Imports DevExpress.Spreadsheet
' ...

Dim workbook As IWorkbook = spreadsheetControl1.Document
Dim worksheet As Worksheet = workbook.Worksheets(0)
' Delete a sample cell range (here "B2:C3") and shift the remaining cells up.
worksheet.DeleteCells(worksheet.Range("B2:C3"), DeleteMode.ShiftCellsUp)

To delete only the cell content or formatting without removing entire cells from the worksheet, use the Clear* methods of the Worksheet object (see the How to: Clear Cells of Content, Formatting, Hyperlinks and Comments example).
https://documentation.devexpress.com/WindowsForms/15404/Controls-and-Libraries/Spreadsheet/Examples/Cells/How-to-Delete-a-Cell-or-Range-of-Cells
declare apparent at use sites which are at a distance from this declaration site.
- Nothing stops a user creating the silly type List Int Int even though the intention is that the second argument is structured out of Succs and Zeros.

Basic proposal

In the basic proposal the data kind declaration has no kind parameters. (See below for kind polymorphism.) The declaration works as above, but may also collect constraints for the kind inference system.

Kind inference

Kind inference figures out the kind of each type variable. There are often ambiguous cases:

data T a b = MkT (a b)

These are resolved by Haskell 98 with (a :: *->*) and (b :: *). We propose no change. (But see kind polymorphism below.) This example is ill-kinded though:

class Bad x       -- Defaults to x::*
instance Bad Int  -- OK
instance Bad Zero -- BAD: ill-kinded

kind ::= * | kind -> kind | forall k. kind | k
sort ::= ** | sort -> sort

Choices
- What to use for the sort that classifies *, *->* etc?
  - *2 (as in Omega; but *2 isn't a Haskell lexeme)
  - ** (using unary notation)
  - *s (Tristan)
  - kind (use a keyword)
- Do we have sort polymorphism?

It would be convenient to have many simple data declarations available at the type level as well. Assuming we resolve the TypeNaming and ambiguity issues above, we could support automatically deriving the data kind based on the data declaration. There are some other issues to be wary of (care of Simon PJ):

- Auto-lifting of:

data Foo = Foo Int

Automatic lifting would try to create a kind Foo with an associated type Foo. But we've just declared a type Foo in the data declaration.
- Automatic lifting of GADTs / existentials and parametric types is tricky until we have a story for them.
- Automatic lifting of some non-data types could be problematic (what types parameterise the kind Int or Double?)
- We have no plan to auto-lift term *functions* to become type functions. So it seems odd to auto-lift the type declarations which are, after all, the easy bit.
Syntactically, however, there are some options for how to do this in cases when it is safe to do so:

Option 0: Always promote [when safe]

E.g. writing

data Foo = Bar | Baz

will implicitly create a kind Foo and types Bar and Baz.

Option 1: Steal the deriving syntax

This has the advantage of allowing standalone deriving for those data types that are declared elsewhere but not with kind equivalents:

data Maybe a = Nothing | Just a deriving (Kind)

deriving instance (Kind Bool)

Option 2: Add an extra flag to the data keyword

data and kind Maybe a = Nothing | Just a

This has the problems of verbosity and is hard to apply after the fact to an existing data type.

Kind Synonyms

Simple type synonyms have a natural analogy at the kind level that could be a useful feature to provide. Depending on whether we keep the kind and type namespaces separate (above), we could just reuse the current

type Foo = Either Baz Bang

syntax to also allow creating kind synonyms, or we may need to invent some new syntax.

kind Foo = Either Baz Bang

would seem natural, or perhaps more safely type kind Foo = Either Baz Bang. newkind doesn't make sense to add, as there is no associated semantics to gain at the type level that data kind doesn't already provide.
https://ghc.haskell.org/trac/ghc/wiki/KindSystem?version=15
Returns one active GameObject tagged tag. Returns null if no GameObject was found.

Tags must be declared in the tag manager before using them. A UnityException is thrown if the tag does not exist or an empty string or null is passed as the tag.

Note: This method returns the first GameObject it finds with the specified tag. If a scene contains multiple active GameObjects with the specified tag, there is no guarantee this method will return a specific GameObject.

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public GameObject respawnPrefab;
    public GameObject respawn;

    void Start()
    {
        if (respawn == null)
            respawn = GameObject.FindWithTag("Respawn");

        Instantiate(respawnPrefab, respawn.transform.position, respawn.transform.rotation);
    }
}
https://docs.unity3d.com/ScriptReference/GameObject.FindWithTag.html
Is it possible to map a view to a VM without navigating to the view? I am adding the view as a UserControl to a page. I would like the view and VM to be mapped using the nRoute mechanisms if possible. Thanks in advance for any help.

Absolutely. In fact, navigation and View-ViewModel pairing are independent of each other, which is to say you can create a View-ViewModel pairing (use either the MapView, MapViewModel, or DefineViewViewModel attributes) without using navigation, and you can earmark a visual as being navigable (using the MapNavigationContent or MapNavigationResource attributes) without having a backing ViewModel.

Rishi

I had tried that but it wasn't working for some reason. However, I think I might use another approach to solve this issue. Can you let me know what you think, please?

I am creating a dashboard that can display any number of widgets. When the dashboard is loaded, the layout is deserialized and for each widget placeholder a LoadWidget event is triggered, passing the widget's ID. What I was planning to do is retrieve the widget using the ID and then set the widget as the content for each widget placeholder. But now I am thinking that a better approach might be to add a statefulContainer to each widget placeholder within the dashboard and then set the URL to be the widget view URL and the widget ID as a parameter. This approach will instantiate a new view and VM, passing in the widget ID for each widget within the dashboard.

Huh, not working... if you can use it with navigation then you should be able to use it without, since they are not directly connected. For example, map your ViewModel:

[MapViewModel(typeof(SampleView))]
public class SampleViewModel
{
    // your ViewModel stuff goes here
}

and then in your SampleView control:

<UserControl ... >
    <i:Interaction.Behaviors>
        <n:BridgeViewModelBehavior />
    </i:Interaction.Behaviors>
    <!-- Your View goes here -->
</UserControl>

and that's it; it should all work.
Well, as for your dashboard question: I've done something similar to that, and it works quite well. However, if you don't need to use the state-management functionality you can use a lightweight navigation container (like NavigationContentControl) or even just write a simple navigation adapter for use with a placeholder. Hope this helps, Rishi

Thanks for the help. The problem I was having with mapping to the VM was caused because a service that is initialised "whenAvailable" was trying to use the view, and I am guessing that the VM hadn't been "discovered" yet by nRoute. I changed the service to be on demand and everything works OK now.
http://nroute.codeplex.com/discussions/234799
There are no VVV packages in the Ubuntu, openSUSE, or Fedora repositories. For this article I'll build the application from source on a 64-bit Fedora 9 machine. The build instructions cover the dependencies for VVV and how to build it with KDevelop. The dependencies are wxWidgets and wxGTK for the GUI, the Firebird relational database to store your DVD index, and id3lib to extract and index ID3 information. The source distribution contains a copy of id3lib 3.8.3 that includes a bug fix for VVV, so I'll link against that id3lib in the build. The build instructions also mention options to build a statically linked executable that is less reliant on other libraries on the system. Building such an executable might be handy if you plan to store the application on a USB flash key that you might use on many systems.

As wxGTK is dependent on wxWidgets, you can use your distribution's dependency tracker to install both at once. wxGTK is available for Ubuntu Hardy, as a 1-Click for openSUSE 11, and in the standard Fedora 9 repositories. You will need to get version 2.0.x of Firebird in its Classic build, which is designed to allow applications to directly open the relational database, so no separate database server is required. Firebird is packaged for Ubuntu Hardy and as a 1-Click install for openSUSE 11, but not in the Fedora 9 repositories. I'll use the 64-bit build of Firebird version 2.0.4.

Version 2.0.4 of Firebird uses the standard C++ shared library, and you must have the correct version installed. Fedora 9 has moved to version /usr/lib64/libstdc++.so.6, but Firebird needs libstdc++.so.5. On Fedora 9 the older version of the standard C++ library is contained in the compat-libstdc++ package.
Installation (as well as the error message you see if you do not have the correct stdc++ library version) is shown below:

# tar xzvf /FromWeb/FirebirdCS-2.0.4.13130-1.amd64.tar.gz
# cd FirebirdCS*
FirebirdCS-2.0.4.13130-1.amd64]# ./install.sh
Firebird classic 2.0.4.13130-1.amd64 Installation
/opt/firebird/bin/gsec: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory
FirebirdCS-2.0.4.13130-1.amd64]# yum install compat-libstdc++
...
FirebirdCS-2.0.4.13130-1.amd64]# ./install.sh
...
Please enter new password for SYSDBA user: himitsu
Install completed

$ sudo vi /usr/include/wx-2.8/wx/string.h
... about line 822 ...
    wxChar& operator[](int n) { return wxStringBase::at(n); }
//    wxChar& operator[](size_type n)
//        { return wxStringBase::at(n); }

src]$ export LDFLAGS="$(wx-config --libs --debug=no) -lid3 -L/opt/firebird/lib -lfbembed "
src]$ export CXXFLAGS="$(wx-config --cxxflags --debug=no) -DIBPP_LINUX"
src]$ g++ -c *.cpp $CXXFLAGS $LDFLAGS
src]$ cd ./data_interface
data_interface]$ g++ -c *.cpp $CXXFLAGS $LDFLAGS
$ cd ../ibpp/
ibpp]$ cd ./core
core]$ vi _ibpp.h
///////////////////////////////////////////////////////////////////////////////
#include <string.h>
#ifndef __INTERNAL_IBPP_H__
#define __INTERNAL_IBPP_H__
core]$ g++ -c all_in_one.cpp $CXXFLAGS $LDFLAGS
core]$ cd ../../
src]$ gcc `find . -name "*.o"` -o vvv $CXXFLAGS $LDFLAGS
src]$ mkdir ~/vvv-catalogs
src]$ cp -av ./vvv ~/vvv-catalogs
`./vvv' -> `/home/ben/vvv-catalogs/vvv'
src]$ cd ..
VVV-0.9-src]$ cp -av VVV.fbk ~/vvv-catalogs
`VVV.fbk' -> `/home/ben/vvv-catalogs/VVV.fbk'

The simplest way to use VVV with Firebird is to set up Firebird to be used in embedded mode. This means that Firebird does not run as a separate database server; instead, VVV accesses the database using the shared libraries of Firebird. This way you don't have to deal with relational database users and permissions or have the server running somewhere.
Unfortunately, the configuration files to run Firebird in this way are not part of the source distribution. The simplest way to get going is to download VVV's binary distribution and copy the database configuration files from it, together with the shared libraries of Firebird from your local installation, as shown below:

~]$ cd /tmp
tmp]$ tar xzf /.../VVV-0.9.tar.gz
tmp]$ cd ./VVV*
VVV-0.9]$ mkdir ~/vvv-catalogs/firebird
VVV-0.9]$ cp -av firebird.log firebird.conf ~/vvv-catalogs
VVV-0.9]$ cp -av firebird/security2.fdb ~/vvv-catalogs/firebird
VVV-0.9]$ mv vvv vvv.bin
VVV-0.9]$ vi vvv
#!/bin/sh
export FIREBIRD=~/vvv-catalogs
$FIREBIRD/vvv.bin
VVV-0.9]$ su -l
/root]# cd /opt/firebird
firebird]# cp -av lib/* bin intl /home/ben/vvv-catalogs/firebird
firebird]# chown -R ben.ben /home/ben/vvv-catalogs/firebird

As you can see, installing VVV from source is nowhere near as simple as it should be. Not having makefiles in the source tree and having to swipe firebird.conf and security2.fdb from the binary distribution makes working out how to get things going quite a task. There is also no hint in the installation instructions about how you might like to use a wrapper script like the vvv script I created above to start VVV with the Firebird environment variable set appropriately.

At this stage you should be able to execute ~/vvv-catalogs/vvv and have VVV start using an embedded version of Firebird. You do not need to start any servers or create any database users. The first thing you should do is click the New button in the toolbar to create a new VVV catalog to store your disc indexes in. If you can create the catalog then your installation of VVV can use the embedded version of Firebird correctly. VVV is starting to get internationalization support.
If VVV does not select your language automatically, you can explicitly set it in the Preferences window. From that window you can also uncheck the default option to reopen the last used catalog at startup, specify which ID3 tags should be extracted from audio files as they are being cataloged, and specify your location and login credentials if you are using Firebird as a standalone server. The Volumes menu allows you to add a directory to the catalog and give it a volume name. Clicking on the Catalog button in the toolbar does the same thing. I added a directory containing the Linux HOWTO files in both HTML and PDF formats (a total of 8,119 files and 313 directories). Using a cold disk cache, the cataloging took about 25 seconds. There are three main views in VVV -- Physical, Virtual, and Search -- all available by those names in the toolbar, as shown here. This screenshot shows that, along with the HOWTO directory, I cataloged two subdirectories as individual volumes. Clicking on Search in the toolbar lets you find which volumes contain the files you are after. Search mode is shown in the second screenshot. The first part of the physical path shows the volume that contains the result. By clicking on each column header you can sort the view in ascending or descending order by that column. For each file, you can manually enter a description, which is what is displayed in the rightmost column in the screenshot. The third mode, Virtual, takes some getting used to. Using virtual folders lets you organize your DVD catalog more logically. Suppose you have six discs that contain the digital photos of your last trip all indexed inside a single "Japan 2008 images" virtual folder. If you edit some of the images, for example removing some red eye, and burn the updated images to a new disc later, you can then add those images to the same Japan 2008 images virtual folder, even though they might be on a DVD you burn two years later.
You can create new root folders by right-clicking on the leftmost list. To add a tree to a virtual directory you have to go back to Physical mode, right-click on a file or directory, and select "Add to Virtual Folder." You will then be shown a tree view in which you can select which virtual folder you want to add to. You can add a physical file or directory to multiple virtual folders. VVV can index a directory quickly. Its ability to perform "starts with" and "contains" searches lets you find which disc contains a file you are after. The ability to search for files based on a regular expression would be a welcome addition for power users.
https://www.linux.com/news/find-dvd-containing-those-files-vvv
How do you prevent the Slider from returning "ugly" values, and get it to "snap" to only the values that you want, such as integers, or multiples of a certain step value such as 0.125? One way to do this is to override OnValueChanged, but it helps to understand how that mechanism works. Below is an example of subclassing Slider to add that functionality. It overrides OnValueChanged to alter the behavior of Slider to return only multiples of the SmallChange value, and the Thumb will snap to those values.

When the Value property changes, whether by the user dragging the Thumb or programmatically, the OnValueChanged virtual method is called. The Slider's implementation of OnValueChanged raises the ValueChanged event, which is how most apps respond to the user moving the Slider's Thumb. The OnValueChanged method has the old and new values in its parameters. To get the default Slider behavior, if you override OnValueChanged you pass those two parameters back to the base implementation of OnValueChanged.

But what if you don't call base.OnValueChanged, and what if you change the values that you pass back? For starters, if you don't call the base implementation, the ValueChanged event will not get fired. The Value property will change, but there will be no event raised. It turns out that passing different values to base.OnValueChanged is only partially useful: if you pass different values, those are the values used to build the RoutedPropertyChangedEventArgs, but it has no effect on the Value property. So if you want to modify the Value property, you'll have to set it yourself, but doing so inside of the OnValueChanged override will cause reentrancy.

By overriding OnValueChanged, changing the values that you pass back to base.OnValueChanged, and changing the Value property (provided you take care of reentrancy), you can determine when the ValueChanged event is raised, and the values that the Value property will take. Let's take a look at some of the code.
This snippet does the work of converting the new value to a multiple of SmallChange:

    double newDiscreteValue = (int)(Math.Round(newValue / SmallChange)) * SmallChange;

After we've done that, we check to see if the new discrete value is different from our old one. In other words, the Slider may think that the Value has changed from 4.0 to 4.2, but if our SmallChange value is .5, then we want to stay at 4.0. We set the new Value (this is what causes the reentrancy), call base.OnValueChanged, then save the discrete value for next time:

    if (newDiscreteValue != m_discreteValue)
    {
        Value = newDiscreteValue;
        base.OnValueChanged(m_discreteValue, newDiscreteValue);
        m_discreteValue = newDiscreteValue;
    }

Notice that if the discrete value did not change, we do not call base, so the event is not fired, and the Thumb does not move. When we set the Value property, OnValueChanged will get called again while we're still in it, so we guard against this by using the m_busy flag. Here's the code:

    using System;
    using System.Windows;
    using System.Windows.Controls;

    namespace TransitionApp
    {
        public class DiscreteSlider : Slider
        {
            protected override void OnValueChanged(double oldValue, double newValue)
            {
                if (!m_busy)
                {
                    m_busy = true;
                    if (SmallChange != 0)
                    {
                        double newDiscreteValue = (int)(Math.Round(newValue / SmallChange)) * SmallChange;
                        if (newDiscreteValue != m_discreteValue)
                        {
                            Value = newDiscreteValue;
                            base.OnValueChanged(m_discreteValue, newDiscreteValue);
                            m_discreteValue = newDiscreteValue;
                        }
                    }
                    else
                    {
                        base.OnValueChanged(oldValue, newValue);
                    }
                    m_busy = false;
                }
            }

            bool m_busy;
            double m_discreteValue;
        }
    }

Man… I didn't read the whole post, but I think the key is IsSnapToTickEnabled="True", at least in WPF. Dunno if SL supports it, guess not :(. Try this:

    <Slider Minimum="1" Maximum="60" Value="14" IsSnapToTickEnabled="True" SmallChange="1" LargeChange="5" />

Yep. This release of Silverlight doesn't have it.
Sorry, but it only half works: while dragging, the thumb is snapped to small changes, but the actual value may be anything in between. After dragging, the value stays non-discrete. Tested on Silverlight 4.

Confirmed that it behaves the way vienigais describes. It's easy to correct though. Change the line

    if (newDiscreteValue != m_discreteValue)

into

    if (newDiscreteValue != Value)

and it behaves correctly.

Toto's solution works for dragging the slider, but on Windows Phone Mango, clicking on the slider bar to move by SmallValue doesn't work properly: the slider stays where it is but the value changes. To fix this, I changed the comparison line like this:

    if (newDiscreteValue != Value || Value != oldValue)

This allowed me to drag the slider as well as click to the left or right to change the value. Additionally, if you want to avoid comparing two floats for inequality:

    if (Math.Abs(newDiscreteValue - Value) > 0.000001 || Math.Abs(Value - oldValue) > 0.000001)

I've no idea whether 0.000001 is the "right" value, it just worked for me 🙂

double comparisons should be done with

    if (Math.Abs(newDiscreteValue - Value) > double.Epsilon)
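The tolerance question raised in the last few comments is language-agnostic. As an illustrative sketch (in Python rather than the C# of the post, and not part of the original discussion), this shows why exact equality on floats fails after arithmetic, why a fixed absolute cutoff like 1e-6 is arbitrary but workable, and why a relative tolerance scales better across magnitudes:

```python
import math

a = 0.1 + 0.2   # accumulates rounding error: 0.30000000000000004
b = 0.3

# Exact comparison fails due to floating-point rounding.
print(a == b)                           # False

# An absolute cutoff such as 1e-6 is arbitrary but often practical.
print(abs(a - b) < 1e-6)                # True

# math.isclose uses a *relative* tolerance, which scales with magnitude.
print(math.isclose(a, b))               # True
print(math.isclose(1e9 + 0.0001, 1e9))  # True: tiny relative to 1e9
```

The same reasoning suggests that comparing against the smallest representable increment (double.Epsilon in C#, sys.float_info.min-scale values in Python) is usually too strict to be useful as a tolerance.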
https://blogs.msdn.microsoft.com/devdave/2008/06/12/discreteslider-adding-functionality-with-a-simple-control-subclass/
I'm working on a simple encryption program (for school). It is not currently "homework" just FYI, I'm just practicing the examples in the book. It uses a Caesar's Cipher method (an alphabetical shift). I have not tested this code very much. If something is wrong with it I probably don't know about it.

The problem is when this program runs, it doesn't print the entire message. It prints the first word and then nothing. I just got a hunch that perhaps it doesn't handle whitespace correctly though...

The code:

    import java.util.*;
    import Utility.Utility; //Import my utility class

    public class Cipher {
        char forDecrypting[] = new char[26]; //Standard reference alphabet for performing operations on
        char forEncrypting[] = new char[26]; //Reference alphabet for performing DEcryption
        char shift = 0;

        public Cipher() {
            this(3); //Call argument constructor
            this.shift = 3;
        }

        public Cipher(int shift) {
            /* Initializes encoding alphabets using an argument-specified shift */
            int c = 0;
            for (int j = 0; j < 26; j++) {
                this.forDecrypting[c] = (char) ((int) 'A' + (j + shift) % 26); //Store the regular alphabet for decrypting
                this.forEncrypting[c] = (char) ((int) 'A' + (j - shift + 26) % 26); //Store the shifted alphabet for encrypting
                c += 1;
            }
        }

        public char transform(boolean encrypting, char ch, int msgLen) { //Changes the char into its encrypted equivalent
            int index = 0;
            if (encrypting) { //If we want to encrypt the message
                index = Utility.Clamp(((int) (ch - 'A')), 0, 25);
                System.out.print(" index = " + ((char) index) + " ");
                return forEncrypting[index]; //Return the encrypted char
            } else {
                index = Utility.Clamp(((int) (ch - 'A')), 0, 25);
                System.out.print(" index = " + ((char) index) + " ");
                return forDecrypting[index]; //Else return the decrypted char
            }
        }

        public String encode(String plaintext) {
            char[] message = plaintext.toUpperCase().toCharArray(); //Put the plaintext into an array
            char[] cipher = new char[message.length];
            for (int p = 0; p <= message.length - 1; p++) { //Iterate through the message, replacing each letter with its corresponding encrypted char
                cipher[p] = transform(true, message[p], message.length - 1);
            }
            return new String(cipher);
        }

        public String decode(String ciphertext) {
            char[] cipher = ciphertext.toUpperCase().toCharArray(); //Put the ciphertext into an array
            char[] plaintext = new char[cipher.length];
            for (int p = 0; p <= cipher.length - 1; p++) { //Iterate through the message, replacing each letter with its corresponding decrypted char
                plaintext[p] = transform(false, cipher[p], cipher.length - 1);
            }
            return new String(plaintext);
        }

        public static void main(String[] args) {
            System.out.print("A = " + ((int) 'A') + " Z = " + ((int) 'Z'));
            Cipher myCipher = new Cipher();
            String ciphertext = new String(myCipher.encode(args[0]));
            System.out.println(" Cipher is: " + ciphertext + " Message is: " + new String(myCipher.decode(ciphertext)));
        }
    }

The Utility.Clamp function simply forces the value to a range. I.e., if the range is from zero to 25 and we specify 100, we get back 25; if it is -9 we get back zero. If the value is anywhere in the range then it just returns the input value.

Also these are my program's arguments: "If you can read this then crap if not then good" and sample output as of now:

    "A = 65 Z = 90 index = index = index = index = Cipher is: FC Message is: IF "

Edited by Curious Gorge
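For comparison, here is a minimal Caesar-cipher sketch (in Python, not the poster's Java, and not part of the original thread) that shifts only the letters A-Z and passes every other character, spaces included, through unchanged. Clamping non-letters into the 0-25 range, as the code above does, maps every space to an alphabet index and makes a clean round trip impossible, which matches the asker's whitespace hunch:

```python
def caesar(text, shift):
    """Shift letters A-Z by `shift` positions; leave other characters alone."""
    out = []
    for ch in text.upper():
        if 'A' <= ch <= 'Z':
            # Wrap within the 26-letter alphabet.
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # whitespace and punctuation pass through untouched
    return ''.join(out)

message = "If you can read this"
cipher = caesar(message, 3)
plain = caesar(cipher, -3)
print(cipher)  # LI BRX FDQ UHDG WKLV
print(plain)   # IF YOU CAN READ THIS
```

Decoding is just encoding with the negated shift, so the same function serves both directions.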
https://www.daniweb.com/programming/software-development/threads/499218/why-does-this-program-fail-to-display-the-entire-message
Spec URL:
SRPM URL:
Description: ensmallen is a header-only C++ library for efficient mathematical optimization. It provides a simple set of abstractions for writing an objective function to optimize. It also provides a large set of standard and cutting-edge optimizers that can be used for virtually any mathematical optimization task. These include full-batch gradient descent techniques, small-batch techniques, gradient-free optimizers, and constrained optimization.
Fedora Account System Username: rcurtin

This is being packaged as a new dependency of mlpack (it used to be a part of mlpack, now it is standalone). Once this is approved and merged, I'll be able to update mlpack in Fedora/RHEL to its latest version. If there are any issues I'll get them handled quickly. Thanks!

- The URL and Source look like they can use https://.
- rm -rf $RPM_BUILD_ROOT should be removed
- You do not need to install LICENSE.txt manually; just list it as a relative path and it will be installed for you.
- Summary should not end in a period.
- If you've built the tests, why not run them in %check?
- %changelog is missing version-release
- This fails to build for me due to missing armadillo; why is it not in BuildRequires also? (I don't think a new enough armadillo is packaged in Fedora, though.)

Hi Elliott,

Thanks so much for your quick review. I handled each of the issues you pointed out and updated the spec and SRPM.

> - The URL and Source look like they can use https://.

Yeah, you're right, fixed.

> - rm -rf $RPM_BUILD_ROOT should be removed

Oops, didn't realize this was unnecessary. Gone now.

> - You do not need to install LICENSE.txt manually; just list it as a relative path and it will be installed for you.

Ah, nice! I didn't know that. Thanks.

> - Summary should not end in a period.

Fixed.

> - If you've built the tests, why not run them in %check?

Good point, they run in %check now.

> - %changelog is missing version-release

Oops, simple oversight. Fixed now.
> - This fails to build for me due to missing armadillo; why is it not in BuildRequires also? (I don't think a new enough armadillo is packaged in Fedora, though.) Ah, yeah, you're right, Armadillo should be `BuildRequires` also. It is now, and the versions in Fedora should be new enough that it isn't a problem (6.500.0 or newer is necessary, and that was quite a while ago now). Let me know if you see any other issues and I'll get them handled. Thanks again! Ryan Putting %package and %description all the way at the end there is a bit odd; it's usually before %prep. Unfortunately, the tests segfault for me: + ./ensmallen_tests ensmallen version: 1.14.2 (Difficult Crimp) armadillo version: 9.400.3 (Surrogate Miscreant) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ensmallen_tests is a Catch v2.4.1 host application. Run with -? for options ------------------------------------------------------------------------------- SmallLovaszThetaSdp ------------------------------------------------------------------------------- /builddir/build/BUILD/ensmallen-1.14.2/tests/sdp_primal_dual_test.cpp:286 ............................................................................... /builddir/build/BUILD/ensmallen-1.14.2/tests/sdp_primal_dual_test.cpp:286: FAILED: due to a fatal error condition: SIGSEGV - Segmentation violation signal =============================================================================== test cases: 102 | 101 passed | 1 failed assertions: 5732 | 5731 passed | 1 failed BUILDSTDERR: /var/tmp/rpm-tmp.xRlIat: line 31: 628 Segmentation fault (core dumped) ./ensmallen_tests Hi Elliott, I spent a while working with it but I can't reproduce the segfault at all (I ran with tons of different random seeds). Do you think we should then just ignore running the tests in this case? I should add, since ensmallen is header-only, the tests are primarily of importance for ensuring everything compiles. 
They are also tests of the internal optimizers, but given that during development the issue you see hasn't been encountered at all, I'm not sure we can effectively resolve it at the packaging level. We could report it upstream (although I am part of upstream :)) but I have no idea how long it will take to even reproduce, and I don't know if it makes sense to hold up adding the package on that. (Especially since adding ensmallen is a dependency for new versions of mlpack, so mlpack can't be updated until then.) Let me know what you think, and thanks again for your time. :) Here is the backtrace if it's helpful? It looks like something is going wrong in armadillo though. #0 0x000055555564ebb4 in arma::Mat<double>::~Mat (this=0x1, __in_chrg=<optimized out>) at /usr/include/armadillo_bits/Mat_meat.hpp:23 #1 0x000055555574a594 in arma::auxlib::chol_band_common<double> (layout=<optimized out>, KD=<optimized out>, X=...) at /usr/include/bits/string_fortified.h:34 #2 arma::auxlib::chol_band<double> (layout=<optimized out>, KD=<optimized out>, X=...) at /usr/include/armadillo_bits/auxlib_meat.hpp:2008 #3 arma::op_chol::apply_direct<arma::Mat<double> > (A_expr=..., layout=<optimized out>, out=...) at /usr/include/armadillo_bits/op_chol_meat.hpp:72 #4 arma::op_chol::apply_direct<arma::Mat<double> > (layout=<optimized out>, A_expr=..., out=...) at /usr/include/armadillo_bits/op_chol_meat.hpp:43 #5 arma::chol<arma::Mat<double> > (out=..., X=..., layout=layout@entry=0x55555578b6e4 "lower") at /usr/include/armadillo_bits/fn_chol.hpp:59 #6 0x000055555573ae70 in ens::Alpha (A=..., dA=..., tau=0.98999999999999999, alpha=@0x7fffffffb900: 4.9406564584124654e-323) at /builddir/build/BUILD/ensmallen-1.14.2/include/ensmallen_bits/sdp/primal_dual_impl.hpp:127 #7 0x0000555555760ae6 in ens::PrimalDualSolver<ens::SDP<arma::Mat<double> > >::Optimize (this=this@entry=0x7fffffffd620, X=..., ysparse=..., ydense=..., Z=...) 
at /usr/include/armadillo_bits/Glue_meat.hpp:47 #8 0x000055555573f9df in ____C_A_T_C_H____T_E_S_T____2 () at /builddir/build/BUILD/ensmallen-1.14.2/tests/sdp_primal_dual_test.cpp:296 #9 0x00005555555ee1f7 in Catch::TestCase::invoke (this=<optimized out>) at /usr/include/c++/9/bits/shared_ptr_base.h:1020 #10 Catch::RunContext::invokeActiveTestCase (this=0x7fffffffe0c0) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:9745 #11 0x0000555555601c7f in Catch::RunContext::runCurrentTest (this=0x7fffffffe0c0, redirectedCout="", redirectedCerr="") at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:9719 #12 0x0000555555612280 in Catch::RunContext::runTest (this=0x7fffffffe0c0, testCase=...) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:9495 #13 0x00005555556167de in Catch::(anonymous namespace)::runTests (config=std::shared_ptr<Catch::Config> (use count 4, weak count 0) = {...}) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:10035 #14 Catch::Session::runInternal (this=0x7fffffffe320) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:10236 #15 0x0000555555616c6f in Catch::Session::run (this=0x7fffffffe320) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:10193 #16 0x00005555555e0ddb in Catch::Session::run (argv=0x7fffffffe598, argc=1, this=0x7fffffffe320) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:10161 #17 Catch::Session::run (argv=0x7fffffffe598, argc=1, this=0x7fffffffe320) at /builddir/build/BUILD/ensmallen-1.14.2/tests/catch.hpp:10156 #18 main (argc=1, argv=0x7fffffffe598) at /builddir/build/BUILD/ensmallen-1.14.2/tests/main.cpp:33 What versions of everything do you get? Is one of us outdated somewhere? Dependencies resolved. 
====================================================================================================================================================== Package Architecture Version Repository Size ====================================================================================================================================================== Installing: armadillo-devel x86_64 9.400.3-1.fc31 fedora 1.4 M cmake x86_64 3.14.4-1.fc31 fedora 8.9 M gcc-c++ x86_64 9.1.1-1.fc31 fedora 12 M Installing dependencies: SuperLU x86_64 5.2.1-6.fc30 fedora 169 k SuperLU-devel x86_64 5.2.1-6.fc30 fedora 23 k annobin x86_64 8.76-1.fc31 fedora 180 k armadillo x86_64 9.400.3-1.fc31 fedora 26 k arpack x86_64 3.5.0-6.fc28 fedora 195 k arpack-devel x86_64 3.5.0-6.fc28 fedora 12 k atlas x86_64 3.10.3-8.fc30 fedora 6.3 M atlas-devel x86_64 3.10.3-8.fc30 fedora 1.5 M blas x86_64 3.8.0-11.fc30 fedora 415 k blas-devel x86_64 3.8.0-11.fc30 fedora 15 k cmake-data noarch 3.14.4-1.fc31 fedora 1.4 M cmake-filesystem x86_64 3.14.4-1.fc31 fedora 16 k cmake-rpm-macros noarch 3.14.4-1.fc31 fedora 15 k cpp x86_64 9.1.1-1.fc31 fedora 9.8 M emacs-filesystem noarch 1:26.2-1.fc31 fedora 9.8 k gcc x86_64 9.1.1-1.fc31 fedora 23 M gcc-gfortran x86_64 9.1.1-1.fc31 fedora 11 M gdbm-libs x86_64 1:1.18-4.fc30 fedora 50 k glibc-devel x86_64 2.29.9000-19.fc31 fedora 1.0 M glibc-headers x86_64 2.29.9000-19.fc31 fedora 487 k hdf5 x86_64 1.10.5-2.fc31 fedora 2.1 M hdf5-devel x86_64 1.10.5-2.fc31 fedora 1.1 M isl x86_64 0.16.1-8.fc30 fedora 796 k jsoncpp x86_64 1.8.4-6.fc30 fedora 86 k kernel-headers x86_64 5.1.0-1.fc31 fedora 1.2 M lapack x86_64 3.8.0-11.fc30 fedora 8.5 M lapack-devel x86_64 3.8.0-11.fc30 fedora 78 k libaec x86_64 1.0.4-1.fc30 fedora 35 k libaec-devel x86_64 1.0.4-1.fc30 fedora 11 k libgfortran x86_64 9.1.1-1.fc31 fedora 705 k libgomp x86_64 9.1.1-1.fc31 fedora 223 k libmpc x86_64 1.1.0-3.fc30 fedora 56 k libquadmath x86_64 9.1.1-1.fc31 fedora 194 k libquadmath-devel x86_64 9.1.1-1.fc31 fedora 36 k 
libstdc++-devel x86_64 9.1.1-1.fc31 fedora 2.1 M libuv x86_64 1:1.29.0-1.fc31 fedora 135 k libxcrypt-devel x86_64 4.4.6-1.fc31 fedora 35 k openblas x86_64 0.3.6-1.fc31 fedora 29 k openblas-devel x86_64 0.3.6-1.fc31 fedora 88 k openblas-openmp x86_64 0.3.6-1.fc31 fedora 4.9 M openblas-openmp64 x86_64 0.3.6-1.fc31 fedora 4.8 M openblas-openmp64_ x86_64 0.3.6-1.fc31 fedora 4.8 M openblas-serial x86_64 0.3.6-1.fc31 fedora 4.8 M openblas-serial64 x86_64 0.3.6-1.fc31 fedora 4.7 M openblas-serial64_ x86_64 0.3.6-1.fc31 fedora 4.7 M openblas-threads x86_64 0.3.6-1.fc31 fedora 4.9 M openblas-threads64 x86_64 0.3.6-1.fc31 fedora 4.8 M openblas-threads64_ x86_64 0.3.6-1.fc31 fedora 4.8 M python-pip-wheel noarch 19.1-1.fc31 fedora 1.1 M python-setuptools-wheel noarch 41.0.1-1.fc31 fedora 279 k python3 x86_64 3.7.3-3.fc31 fedora 38 k python3-libs x86_64 3.7.3-3.fc31 fedora 7.9 M rhash x86_64 1.3.8-1.fc30 fedora 168 k sqlite-libs x86_64 3.28.0-1.fc31 fedora 563 k zlib-devel x86_64 1.2.11-15.fc30 fedora 46 k I tried reproducing the issue in an fc31 docker container by running the tests over and over with different random seeds, but I wasn't able to reproduce it at all. It does look like a lower-level Armadillo bug perhaps, but even if I wanted to figure out what was going on and submit a patch I can't make heads or tails of the actual issue. What do you think? The package builds and tests successfully on my system (and the multiple Docker images I tested), so I don't know if it's worth digging into (unless I can reproduce it, which I currently can't). Well, unfortunately, it doesn't work on either Rawhide or F30 in koji: which is where you'll have to build it... Only failing on x86_64 is suspicious though, and looks a bit familiar. This may be an OpenBLAS (or something linking to it) bug. It took a while but I determined that the CXXFLAGS used by the %cmake macro cause the problem. Specifically, the -D_FORTIFY_SOURCE option causes the segfault. 
In any case, since ensmallen is not actually distributing that code and since the bug lies in some lower-level library, I simply rewrote the spec to not use that particular flag. (I also updated the version.)

That indicates a real error: please report a bug for it. Instead of disabling all the flags (and then possibly forgetting to re-enable them), I would skip the broken test instead by running:

    ./ensmallen_tests ~SmallLovaszThetaSdp

Ah, good point, disabling the SmallLovaszThetaSdp test is the better way to go. I'll try to keep digging and report a bug to the right place. The spec and SRPM are updated now; let me know what you think:

Thanks again for your review and help with this. :)

Your spec and srpm are not in sync; be sure to use the right one when you import it.

Licenses found: "BSD 3-clause "New" or "Revised" License", "Boost Software License BSL BSD 3-clause "New" or "Revised" License", "*No copyright* BSL". 253 files have unknown license. Detailed output of licensecheck in 1706659-ensmallen/licensecheck.txt

Rpmlint
-------
ensmallen-devel-1.15.1-1.fc31.x86_64.rpm
ensmallen-1.15.1-1.fc31.src.rpm
ensmallen-devel.x86_64: W: spelling-error %description -l en_US optimizers -> optimizer, optimizes, optimize rs
ensmallen-devel.x86_64: W: no-documentation
ensmallen.src: W: spelling-error %description -l en_US optimizers -> optimizer, optimizes, optimize rs
ensmallen.src:34: W: macro-in-comment %{_prefix}
ensmallen.src:35: W: macro-in-comment %{_includedir}
ensmallen.src:36: W: macro-in-comment %{_libdir}
ensmallen.src:37: W: macro-in-comment %{_sysconfdir}
ensmallen.src:38: W: macro-in-comment %{_datadir}
ensmallen.src:11: W: mixed-use-of-spaces-and-tabs (spaces: line 1, tab: line 11)
2 packages and 0 specfiles checked; 0 errors, 9 warnings.
Rpmlint (installed packages) ---------------------------- ensmallen-devel.x86_64: W: spelling-error %description -l en_US optimizers -> optimizer, optimizes, optimize rs ensmallen-devel.x86_64: W: invalid-url URL: <urlopen error [Errno -2] Name or service not known> ensmallen-devel.x86_64: W: no-documentation 1 packages and 0 specfiles checked; 0 errors, 3 warnings. Source checksums ---------------- : CHECKSUM(SHA256) this package : e597a7d488b59add432dba7e8a3911eddbbce30ab665e9e3fc0541466245997a CHECKSUM(SHA256) upstream package : e597a7d488b59add432dba7e8a3911eddbbce30ab665e9e3fc0541466245997a Requires -------- ensmallen-devel (rpmlib, GLIBC filtered): Provides -------- ensmallen-devel: ensmallen-devel ensmallen-devel(x86-64) ensmallen-static Diff spec file in url and in SRPM --------------------------------- --- review/1706659-ensmallen/srpm/ensmallen.spec 2019-06-05 23:45:39.137176086 -0400 +++ review/1706659-ensmallen/srpm-unpacked/ensmallen.spec 2019-06-04 19:19:00.000000000 -0400 @@ -29,5 +29,12 @@ %build +# Don't use the usual RPM-based CXXFLAGS because they cause a segfault in the +# tests (at a lower level than ensmallen). %cmake +# -DCMAKE_INSTALL_PREFIX:PATH=%{_prefix} \ +# -DINCLUDE_INSTALL_DIR:PATH=%{_includedir} \ +# -DLIB_INSTALL_DIR:PATH=%{_libdir} \ +# -DSYSCONF_INSTALL_DIR:PATH=%{_sysconfdir} \ +# -DSHARE_INSTALL_PREFIX:PATH=%{_datadir} . # Technically we don't need to build anything but it's a good sanity check to @@ -39,6 +46,4 @@ %check -# Disable the SmallLovaszThetaSdp test---it exposes a bug in one of ensmallen's -# dependencies. ./ensmallen_tests ~SmallLovaszThetaSdp Generated by fedora-review 0.7.2 (65d36bb) last change: 2019-04-09 Command line :/usr/bin/fedora-review -m fedora-rawhide-x86_64 -b 1706659 Buildroot used: fedora-rawhide-x86_64 Active plugins: C/C++, Generic, Shell-api Disabled plugins: Perl, R, PHP, fonts, Haskell, Python, SugarActivity, Java, Ocaml Disabled flags: EPEL6, EPEL7, DISTTAG, BATCH, EXARCH Thanks! 
And you're right, I did change the spec file after building the SRPM, but in this case I only changed comments. Sorry for any confusion. Does anything else need to be done by me here? And thanks again. No, it's approved already. (fedscm-admin): The Pagure repository was created at FEDORA-2019-e5d4aeaacc has been submitted as an update to Fedora 30. FEDORA-2019-b5dbfb1cb5 has been submitted as an update to Fedora 29. FEDORA-EPEL-2019-96c517757d has been submitted as an update to Fedora EPEL 7. ensmallen-1.15.1-1.fc30 has been pushed to the Fedora 30 testing repository. If problems still persist, please make note of it in this bug report. See for instructions on how to install test updates. You can provide feedback for this update here: ensmallen-1.15.1-1.el7 has been pushed to the Fedora EPEL 7 testing repository. If problems still persist, please make note of it in this bug report. See for instructions on how to install test updates. You can provide feedback for this update here: ensmallen-1.15.1-1.fc29 has been pushed to the Fedora 29 testing repository. If problems still persist, please make note of it in this bug report. See for instructions on how to install test updates. You can provide feedback for this update here: ensmallen-1.15.1-1.fc30 has been pushed to the Fedora 30 stable repository. If problems still persist, please make note of it in this bug report. ensmallen-1.15.1-1.fc29 has been pushed to the Fedora 29 stable repository. If problems still persist, please make note of it in this bug report. ensmallen-1.15.1-1.el7 has been pushed to the Fedora EPEL 7 stable repository. If problems still persist, please make note of it in this bug report. Everything's stable now; can't we close this? Oh, definitely, I just forgot to. Thank you again for your help!
https://bugzilla.redhat.com/show_bug.cgi?id=1706659
Fast IO c++

Can someone give me the code for fast IO in c++? Thanks very much.

printf scanf faster than that

Maybe faster than velocity of light?

Isn't it speed of light? XD

reads integers fast

    long long int read_int() {
        char r;
        bool start = false, neg = false;
        long long int ret = 0;
        while (true) {
            r = getchar();
            if ((r - '0' < 0 || r - '0' > 9) && r != '-' && !start) {
                continue;
            }
            if ((r - '0' < 0 || r - '0' > 9) && r != '-' && start) {
                break;
            }
            if (start) ret *= 10;
            start = true;
            if (r == '-') neg = true;
            else ret += r - '0';
        }
        if (!neg) return ret;
        else return -ret;
    }

I've compared reading 10^7 integers ([-10^9; 10^9]) in your way and scanf. Yours: 2.45s. Scanf: 1.98s.

This works a bit faster than scanf:

    int readInt() {
        bool minus = false;
        int result = 0;
        char ch;
        ch = getchar();
        while (true) {
            if (ch == '-') break;
            if (ch >= '0' && ch <= '9') break;
            ch = getchar();
        }
        if (ch == '-') minus = true;
        else result = ch - '0';
        while (true) {
            ch = getchar();
            if (ch < '0' || ch > '9') break;
            result = result * 10 + (ch - '0');
        }
        if (minus) return -result;
        else return result;
    }

It's really faster than scanf, thanks a lot. I was able to solve a problem that previously didn't pass with scanf.

That is maybe because he was reading long long integers; it probably would have been faster if the function was of int type. Btw your code helped me on one task on SPOJ so thank you!

After reading your comment I wonder why dario-dsa got so many down-votes even though the question is totally right (like, everybody could understand what he was asking). Is it because he is cyan? Please, don't tell me that's the reason, because I saw the first guy proVIDec be like: (Oh, a "newbie", better troll him a little bit). Btw, I know this post is 3 years old.

> is it because he is cyan?

No, he surely wasn't cyan 3 years ago.

how to use this method?

Using buffered io, very fast:

    char in[1<<21]; // sizeof in varied in problem
    char const* o;

    void init_in() {
        o = in;
        in[fread(in, 1, sizeof(in) - 4, stdin)] = 0; // set 0 at the end of buffer
    }

    int readInt() {
        unsigned u = 0, s = 0;
        while (*o && *o <= 32) ++o;   // skip whitespace
        if (*o == '-') s = ~s, ++o;   // skip sign
        else if (*o == '+') ++o;
        while (*o >= '0' && *o <= '9') u = (u << 3) + (u << 1) + (*o++ - '0'); // u * 10 = u * 8 + u * 2 :)
        return static_cast<int>((u ^ s) + !!s);
    }

how will we scan numbers using that

Go to Codechef. Find a problem with a huge amount of input. Look at the fastest submit. Profit.

You can use fast cin/cout by writing "ios::sync_with_stdio(false);" at the top of your program!

Another approach is to read and store the entire input in a buffer before you begin processing it.

Fastest functions are fread and fwrite (they are technically C functions, but should work fine in C++).

This is off-topic for the blog, but I need help) please take a look.

Hi. I did some benchmarks a couple of days ago while trying to find the fastest method for reading integers from stdin. I used input that had 200000 lines and two integers on each line. The results were like this (MinGW 4.8, Intel 3770K):

So it's important to minimize the number of calls to standard library functions. The only type of IO that the standard library does fast is transferring large chunks of data. I ended up with the last option because it's almost as fast as buffering the entire input but uses much less memory.
Here's the code (it has some flaws but for contests it's good enough):

    static char stdinBuffer[1024];
    static char* stdinDataEnd = stdinBuffer + sizeof(stdinBuffer);
    static const char* stdinPos = stdinDataEnd;

    void readAhead(size_t amount) {
        size_t remaining = stdinDataEnd - stdinPos;
        if (remaining < amount) {
            memmove(stdinBuffer, stdinPos, remaining);
            size_t sz = fread(stdinBuffer + remaining, 1, sizeof(stdinBuffer) - remaining, stdin);
            stdinPos = stdinBuffer;
            stdinDataEnd = stdinBuffer + remaining + sz;
            if (stdinDataEnd != stdinBuffer + sizeof(stdinBuffer))
                *stdinDataEnd = 0;
        }
    }

    int readInt() {
        readAhead(16);
        int x = 0;
        bool neg = false;
        if (*stdinPos == '-') {
            ++stdinPos;
            neg = true;
        }
        while (*stdinPos >= '0' && *stdinPos <= '9') {
            x *= 10;
            x += *stdinPos - '0';
            ++stdinPos;
        }
        return neg ? -x : x;
    }

edit: this version requires manually skipping whitespace (stdinPos++), but it could easily be added to the function itself.

You might also be interested in this article I wrote a while ago; I performed some similar I/O benchmarks:

Interesting results! A minor detail: readInt has a bug. It will not work with the min value of int, e.g. -2,147,483,648 for 4-byte ints. Since 2,147,483,647 is the max value of int, x will be 2,147,483,640 + 8, which is not 2,147,483,648 (since that cannot be represented in an int) and thus -x will not become -2,147,483,648 in the result. If you instead use x -= *stdinPos - '0'; and return neg ? x : -x; it should work.

It is actually correct. 2,147,483,640 + 8 overflows and becomes -2,147,483,648. When this is multiplied by -1 it becomes -2,147,483,648 again.

> 2,147,483,640 + 8 overflows and becomes -2,147,483,648.

It doesn't.

On most machines it works just fine. I have used mod 2^32 overflow in many competitions and it hasn't failed me once.

> On most machines it works just fine.

It doesn't depend on the machine; it depends on the compiler, your code and memory layout, moon phase, etc.

> I have used mod 2^32 overflow in many competitions and it hasn't failed me once.

You got lucky. You can search Codeforces for 'undefined behavior', 'integer overflow' etc. (But then again, the chances of getting both bad optimization and a MIN_INT edge case are minuscule.)

I often (practically always) implement, for example, the Rabin-Karp algorithm (and other rolling hash algorithms) using modulo 2^64 and long longs (even on 32-bit machines!). I simply let all the multiplications / additions overflow. ... Never had a single issue. Lucky? I don't think so.

Well, you may suppose that this will always work, but I suggest you meditate on this code and its output a bit:

What does that have to do with anything? This is clearly a compiler bug. I know, I know, undefined behavior, this program could set the computer on fire and that would be just fine according to the C++ standard. However, on MOST architectures, multiplying/adding two numbers about which the compiler cannot know anything (they have something to do with user input, for example) works just fine almost always, because the compiler doesn't bother "optimizing" it. In this case, you're multiplying numbers the compiler knows about, and it's a lot easier for the compiler to detect undefined behavior and go haywire (for example, by setting the computer on fire like in your example).

Slightly irrelevant, but since the topic was brought up: I've never seen contests that give the contestant a choice to perform IO using either: Of course, this "ability to choose" has cons: Could this idea be used (at least) for problems that have big input/output files?
By that, I mean: in the description of a task, don't add a comment that goes like this: "oh yeah, input's huge, don't use slow IO, like C++ cin/cout, use printf/scanf", but provide an option to read from "input.txt" or "binary.in", and write to "output.txt" or "binary.out".

fast IO from Burunduk1

Premature optimization is the root of all evil. Stick to cin/cout or scanf/printf.

I was looking for a "real C++" fast input solution. So I thought about std::cin.readsome() and tried to substitute it for fread(), but even if it's fine on my terminal it is not working on judging servers...

namespace FI {
    const int L = 1 << 15 | 1;
    char buf[L], *front, *back;
    void nextChar(char &);
    template <typename T> void nextNumber(T &);
}

void FI::nextChar(char &c) {
    if (front == back)
        std::cin.readsome(buf, L), back = (front = buf) + std::cin.gcount();
    c = !std::cin.gcount() ? (char)EOF : *front++;
}

template <typename T> void FI::nextNumber(T &x) {
    char c;
    int f = 1;
    for (nextChar(c); c > '9' || c < '0'; nextChar(c))
        if (c == '-') f = -1;
    for (x = 0; c >= '0' && c <= '9'; nextChar(c))
        x = x * 10 + c - '0';
    x *= f;
}
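The negative-accumulation fix suggested upthread sidesteps the whole debate, because it never relies on signed overflow (which is undefined behavior in C++). Here is a standalone sketch of that idea — the function name is mine, and it parses from a string rather than the stdin buffer so it can be checked in isolation:

```cpp
#include <cassert>
#include <climits>

// Parse a (possibly negative) decimal integer from a C string.
// Accumulating on the negative side means INT_MIN parses without
// ever overflowing: -2147483648 fits in an int, +2147483648 does not.
int parseInt(const char* s) {
    bool neg = false;
    if (*s == '-') { ++s; neg = true; }
    int x = 0;
    while (*s >= '0' && *s <= '9') {
        x = x * 10 - (*s - '0');   // x stays in [INT_MIN, 0]
        ++s;
    }
    return neg ? x : -x;
}
```

The same two-line change drops straight into the buffered readInt above: replace x += *stdinPos - '0'; with the subtraction, and swap the final ternary to neg ? x : -x.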
http://codeforces.com/blog/entry/8080
The weakref module lets you refer to an object without preventing it from being garbage collected.

Module: weakref
Purpose: Refer to an "expensive" object, but allow it to be garbage collected if there are no other non-weak references.
Python Version: Since 2.1

Description: A weak reference is not enough to keep an object alive: when the only remaining references to an object are weak references, the garbage collector is free to destroy it. Calling a ref object returns the original object if it is still alive; once the referent has been collected, None is returned.

$ python weakref_ref.py
obj: <__main__.ExpensiveObject object at 0x68df0>
ref: <weakref at 0x65b10; to 'ExpensiveObject' at 0x68df0>
r(): <__main__.ExpensiveObject object at 0x68df0>
deleting obj
(Deleting <__main__.ExpensiveObject object at 0x68df0>)
r(): None

Reference Callbacks: The ref constructor also accepts an optional callback function, invoked with the reference object as an argument, after the reference is "dead" and no longer refers to the original object. This lets you remove the weak reference object from a cache, for example.

$ python weakref_ref_callback.py
obj: <__main__.ExpensiveObject object at 0x69e50>
ref: <weakref at 0x65ba0; to 'ExpensiveObject' at 0x69e50>
r(): <__main__.ExpensiveObject object at 0x69e50>
deleting obj
callback( <weakref at 0x65ba0; dead> )
(Deleting <__main__.ExpensiveObject object at 0x69e50>)
r(): None

Proxies: It is sometimes more convenient to use a proxy rather than a weak reference, since a proxy can be used as though it were the original object. Accessing the proxy after the referent has been collected raises a ReferenceError:

via proxy: Traceback (most recent call last):
  File "/Users/dhellmann/Documents/PyMOTW/in_progress/weakref/weakref_proxy.py", line 27, in <module>
    print 'via proxy:', p.name
ReferenceError: weakly-referenced object no longer exists

Cyclic References: One use for weak references is to allow cyclic references without preventing garbage collection. This example illustrates the difference between using regular objects and proxies when a graph includes a cycle.

First we set up the gc module to help us debug the leak. The DEBUG_LEAK flag causes it to print information about objects which cannot be seen other than through the reference the garbage collector has to them.

import gc
from pprint import pprint
import weakref

gc.set_debug(gc.DEBUG_LEAK)

def collect_and_show_garbage():
    "Show what garbage is present."
    print 'Unreachable:', gc.collect()
    print 'Garbage:', pprint(gc.garbage)

Next, a utility function to exercise the graph class by creating a cycle and then removing various references. Then a naive Graph class that accepts any object given to it as the "next" node in the sequence. For the sake of brevity, this Graph supports a single outgoing reference from each node, which results in very boring graphs but makes it easy to recreate cycles.

If we run demo() with the Graph class like this:

print 'WITHOUT PROXY'
print
demo(Graph)

We get output like:

WITHOUT PROXY
Set up graph:
one.set_next(two (<class '__main__.Graph'>))
two.set_next(three (<class '__main__.Graph'>))
three.set_next(one->two->three (<class '__main__.Graph'>))

Graphs:
one->two->three->one
two->three->one->two
three->one->two->three
Unreachable: 0
Garbage:[]

After 2 references removed:
one->two->three->one
Unreachable: 0
Garbage:[]

Removing last reference:
gc: uncollectable <Graph 0x766270>
gc: uncollectable <Graph 0x7669b0>
gc: uncollectable <Graph 0x7669d0>
gc: uncollectable <dict 0x751810>
gc: uncollectable <dict 0x751390>
gc: uncollectable <dict 0x751ae0>
Unreachable: 6
Garbage:[Graph(one),
 Graph(two),
 Graph(three),
 {'name': 'one', 'other': Graph(two)},
 {'name': 'two', 'other': Graph(three)},
 {'name': 'three', 'other': Graph(one)}]

Notice that the Graph instances and their instance dictionaries end up in gc.garbage: the cycle cannot be collected automatically, so the collector reports the nodes as uncollectable instead of freeing them. To clean up, we break the cycle by hand and empty the garbage list:

print
print 'BREAKING CYCLE AND CLEARING GARBAGE'
print
gc.garbage[0].set_next(None)
while gc.garbage:
    del gc.garbage[0]
collect_and_show_garbage()

Giving us:

BREAKING CYCLE AND CLEARING GARBAGE
one.set_next(None (<type 'NoneType'>))
(Deleting two)
two.set_next(None (<type 'NoneType'>))
(Deleting three)
three.set_next(None (<type 'NoneType'>))
(Deleting one)
one.set_next(None (<type 'NoneType'>))
Unreachable: 0
Garbage:[]

And now let's define a more intelligent WeakGraph class that knows not to create cycles using regular references, but to use a weakref.ref when a cycle is detected.
print
print 'WITH PROXY'
print

class WeakGraph(Graph):
    def set_next(self, other):
        if other is not None:
            # See if we should replace the reference
            # to other with a weakref.
            if self in other.all_nodes():
                other = weakref.proxy(other)
        super(WeakGraph, self).set_next(other)
        return

When we run demo() using WeakGraph, we see much better memory behavior:

WITH PROXY
Set up graph:
one.set_next(two (<class '__main__.WeakGraph'>))
two.set_next(three (<class '__main__.WeakGraph'>))
three.set_next(one->two->three (<type 'weakproxy'>))

Graphs:
one->two->three
two->three->one->two
three->one->two->three
Unreachable: 0
Garbage:[]

After 2 references removed:
one->two->three
Unreachable: 0
Garbage:[]

Removing last reference:
(Deleting one)
one.set_next(None (<type 'NoneType'>))
(Deleting two)
two.set_next(None (<type 'NoneType'>))
(Deleting three)
three.set_next(None (<type 'NoneType'>))
Unreachable: 0
Garbage:[]

Caching Objects: The ref and proxy classes are considered "low level". For caches, the WeakValueDictionary class (and its companion WeakKeyDictionary) holds weak references to the values it contains, so entries are discarded automatically once their referents are garbage collected.

References: PEP 0205, Weak References

I appreciate the many posts that OReilly bloggers post, but I still don't understand why their blog entries, in their entirety show up in the rss reader application. Don't they have the technology there to do a *summary* or even the first n characters of the thing? Sheesh, in Google Reader I have to scroll like two miles sometimes to get through to the next entry.

Hi, Jeremy,
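The Caching Objects section above lost its listing in extraction. As a minimal sketch of the idea behind weakref.WeakValueDictionary (written in Python 3 syntax, unlike the article's Python 2 examples; the class name just mirrors the article's ExpensiveObject):

```python
import gc
import weakref

class ExpensiveObject:
    """Stand-in for something costly to create."""
    def __init__(self, name):
        self.name = name

cache = weakref.WeakValueDictionary()

obj = ExpensiveObject('one')
cache['one'] = obj
assert cache['one'] is obj   # hit, while a strong reference exists

del obj                      # drop the only strong reference...
gc.collect()
assert 'one' not in cache    # ...and the cache entry disappears
```

Because the dictionary holds only weak references to its values, the cache never keeps an object alive on its own.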
http://www.oreillynet.com/onlamp/blog/2008/01/pymotw_weakref.html
I heard Saul Griffith say recently that if you covered all the car parks in the USA with solar panels you would supply way more than the national energy requirements (I can't find the actual reference, but just go and watch his talks and read everything he's written). I claimed this might translate to the UK. But does it?

The solar part is easy enough. If we electrify everything and want to remove carbon-based generation, we need to build 300TWh of renewables. For the sake of argument let's do it all in solar (yes, I know, but ignore clouds and nights for now. It's a spherical cow). Now according to CAT, to generate 800kWh (per year) we'd need ~1kW of panels, which might be 8m², or 125kWh/m².

so 300TWh / 125kWh/m² = 24*10⁸ m², or 2400 km²

Right. Does the UK have 2400 km² of parking? Turns out that openstreetmap can give a (probably wrong) answer. Below I present a cleaned up route I hacked out to get there. The following politely elides the many, many detours and dead ends along the way.

. e/bin/activate
pip install geopandas descartes ipykernel matplotlib
python -m ipykernel install --user --name=e
# get the great-britain-latest.osm.pbf file from
brew install osmosis
osmosis --read-pbf great-britain-latest.osm.pbf --tf accept-ways amenity=parking --tf reject-relations --used-node --way-key-value keyValueList="amenity.parking" --write-xml gb-parking.osm
npm install -g osmtogeojson
osmtogeojson gb-parking.osm > gb-parking.json
jupyter notebook

And then in the notebook:

import geopandas
import matplotlib

p = geopandas.read_file("gb-parking.json")
p = p[p['id'].str[:4] == 'way/'] # remove stuff we don't need

And wait another while for my poor laptop to warm the room. No one ever accused python of being fast. Now, check we have something. This should draw all the parking areas in the UK.
So something, but it's a bit hard to see, so let's try plotting where they are with big blobs:

p['centroids'] = p.centroid
p = p.set_geometry('centroids')
p.plot(figsize=(8,16))

p = p.set_geometry('borders') # reset geometry

Now the geometry we have doesn't give us the correct units (we want m²) so change to something else, then add all the areas up and convert to km²:

sum(cart.area) / 10**6

gives

Tada! 🎉. So apparently no! At least from the crowdsourced OSM data. We'd have to use more than just car parks 😢.
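The back-of-envelope arithmetic in the post is easy to re-check. This simply replays the post's own figure of 125 kWh per m² per year; the variable names are mine:

```python
# 300 TWh/year of demand, met at the post's assumed solar yield.
target_kwh = 300e9            # 300 TWh expressed in kWh (1 TWh = 1e9 kWh)
yield_kwh_per_m2 = 125        # CAT-derived figure used in the post

area_m2 = target_kwh / yield_kwh_per_m2
area_km2 = area_m2 / 1e6      # 1 km^2 = 1e6 m^2
print(area_km2)               # 2400.0
```

So the 2400 km² headline figure checks out, given the yield assumption.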
https://tech.labs.oliverwyman.com/blog/2019/10/28/uk-parking-areas/
Hi Jonathan,

On Monday, 11 November 2013 at 09:30 +1030, Jonathan Woithe wrote:
> I think it would be difficult to come up with a reliable way to import
> devices settings into a totally different interface. Even if both devices
> have matrices one can't easily determine in an automated way whether they
> are compatible. Some even have multiple matrix mixers, and choosing the
> mapping would be hit and miss. And some use matrix controls for controls
> which are not a traditional matrix fader-based mixer.
>
> My view is that at least initially we shouldn't permit the restoration of
> mixer data from one device type to another.

Yes; as I tried to do something for RME, I realized that importing from a device to another one is not so evident. Probably, importing from an EAP device to another EAP one could be quite easily feasible because they are based on the same chip and so share a lot of settings. But for devices with different hardware (even a Dice II to a DICE EAP, for instance), it will be very difficult and unsure.

> > This is an "extension"; not sure I will implement it soon :-)
>
> Understood - I think that it's definitely not something we need to concern
> ourselves with at this stage. To support this we'd have to come up with
> some way of tagging controls to help the system identify compatible
> controls.

Yes, and it was part of my previous questions; the chosen names for the tags should be sufficiently clear (and a little bit "unique") when such import functions could be introduced. Typically, I introduced some tag names specific to the RME device, but it is of course just a draft: feel absolutely free to introduce different tag names. Note that I tried not to preclude import features in the functions at the lowest level; but of course, I cannot guarantee anything. By the way, take care when you test for RME, and have a detailed look at the file produced by ffado-mixer saving first :-).
Possibly, since I disabled the Open/Save_as functionality by default, you won't have access to it at all before introducing your own corrections ! :-) > Regards > jonathan Regards, Phil -- Philippe Carriere <la-page-web-of-phil.contact@...>
http://sourceforge.net/p/ffado/mailman/ffado-devel/?viewmonth=201311&viewday=11
It's easy to use the ADS1115 and ADS1015 ADC with CircuitPython and the Adafruit CircuitPython ADS1x15 module. This module allows you to easily write Python code that reads the analog input values. You can use this ADC with any CircuitPython microcontroller board or with a computer that has GPIO and Python thanks to Adafruit_Blinka, our CircuitPython-for-Python compatibility library.

First wire up the ADC to your board exactly as shown on the previous pages for Arduino using an I2C interface. Here's an example of wiring a Feather M0 to the ADS1115 with I2C:

Since there's dozens of Linux computers/boards you can use we will show wiring for Raspberry Pi. For other platforms, please visit the guide for CircuitPython on Linux to see whether your platform is supported. Here's the Raspberry Pi wired to the ADS1015 with I2C:

Next you'll need to install the Adafruit CircuitPython ADS1x15 library on your board. Two folders need to be present:

- adafruit_ads1x15
- adafruit_bus_device

You can also download the adafruit_ads1x15 folder from its releases page on Github. Before continuing make sure your board's lib folder or root filesystem has the adafruit_ads1x15 and adafruit_bus_device files and folders copied over. Next connect to the board's serial REPL so you are at the CircuitPython >>> prompt.

On a Linux computer like the Raspberry Pi, install the library from PyPI instead: sudo pip3 install adafruit-circuitpython-ads1x15. If your default Python is version 3 you may need to run 'pip' instead. Just make sure you aren't trying to use CircuitPython on Python 2.x, it isn't supported!

To demonstrate the usage of the ADC we will initialize it and read the ADC channel values interactively using the REPL. First run the following code to import the necessary modules and initialize the I2C bus:

import board
import busio
i2c = busio.I2C(board.SCL, board.SDA)

Next, import the module for the board you are using. For the ADS1015, use:

import adafruit_ads1x15.ads1015 as ADS

OR, for the ADS1115, use:

import adafruit_ads1x15.ads1115 as ADS

Note that we are renaming each import to ADS for convenience.
The final import needed is for the ADS1x15 library's version of AnalogIn:

from adafruit_ads1x15.analog_in import AnalogIn

which provides behavior similar to the core AnalogIn library, but is specific to the ADS1x15 ADCs. OK, now we can actually create the ADC object. For the ADS1015, use:

ads = ADS.ADS1015(i2c)

OR, for the ADS1115, use:

ads = ADS.ADS1115(i2c)

Now let's see how to get values from the board. You can use these boards in either single ended or differential mode. The usage for the two modes is slightly different, so we'll go over them separately.

For single ended mode we use AnalogIn to create the analog input channel, providing the ADC object and the pin to which the signal is attached. Here, we use pin 0:

chan = AnalogIn(ads, ADS.P0)

To set up additional channels, use the same syntax but provide a different pin. Now you can read the raw value and voltage of the channel using either the value or voltage property.

For differential mode, you provide two pins when setting up the ADC channel. The reading will be the difference between the two. Here, we use pins 0 and 1:

chan = AnalogIn(ads, ADS.P0, ADS.P1)

You can create more channels by doing this again with different pins. However, note that not all pin combinations are possible. See the datasheets for details. Once the channel is created, getting the readings is the same as before.

Both the ADS1015 and the ADS1115 have a Programmable Gain Amplifier (PGA) that you can set to amplify the incoming signal before it reaches the ADC. The available settings and associated Full Scale (FS) voltage range are shown in Table 3 of the datasheet. You set the gain to one of the values using the gain property, like this:

ads.gain = 16

Note that setting gain will affect the raw ADC value but not the voltage (except for variance due to noise).
For example:

>>> ads.gain
1
>>> chan.value, chan.voltage
(84, 0.168082)
>>> ads.gain = 16
>>> ads.gain
16
>>> chan.value, chan.voltage
(1335, 0.167081)
>>>

The value changed from 84 to 1335, which is pretty close to 84 x 16 = 1344. However, the voltage returned in both cases is still the actual input voltage of ~0.168 V. The above examples cover the basic setup and usage using default settings. For more details, see the documentation.
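To see why the voltage stays constant while the raw value scales with gain, you can redo the REPL numbers by hand. This sketch assumes the transcript's readings are signed 12-bit ADS1015 counts (full scale = 2048 counts) and uses the full-scale voltage ranges from the gain table in the datasheet; the helper function is mine, not part of the Adafruit library:

```python
# Full-scale range (volts) for each PGA gain setting, per the datasheet.
FSR = {2/3: 6.144, 1: 4.096, 2: 2.048, 4: 1.024, 8: 0.512, 16: 0.256}

def counts_to_volts(raw, gain, full_scale_counts=2048):
    """Convert a signed ADC reading to volts for a given gain setting."""
    return raw * FSR[gain] / full_scale_counts

# The transcript's two readings agree to about a millivolt,
# despite the 16x change in gain:
print(counts_to_volts(84, 1))     # ~0.168 V
print(counts_to_volts(1335, 16))  # ~0.167 V
```

In other words, gain narrows the measurable range and spreads the same input over more counts; the library's voltage property undoes that scaling for you.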
https://learn.adafruit.com/adafruit-4-channel-adc-breakouts/python-circuitpython
Forum:Huff the Game Namespace From Uncyclopedia, the content-free encyclopedia Before you all groan "oh no, not this again", I've already read up on the issues surrounding the Game namespace and, more importantly, the arguments for keeping it around. I don't intend for this to be another forum where extreme solutions are thrown back and forth. Instead, this is going to be an analysis of what the Game namespace is good for, where it fails, and where it adds needless complications to the wiki. The Game namespace has its good points Take a look at Game:Zork. It totally belongs here - a parody of text-based games in the style of a text-based game. If that's not Uncyclopedian to you, you haven't looked at Category:Pages that look like the things they're about (or its partner in crime, Category:Pages that look like the things they're about (hidden)). No, it's not a good actual game, but it also doesn't need to be - it's a goddamn parody. We have similar articles in the Game namespace that I feel belong here, e.g. Game:Pick Up the Phone Booth and Aisle, another reference to an actual game taken to ridiculous extremes. I'm sure some of you Game namespace fanatics can think of more. But these all share one fact in common... "Games" don't need their own namespace These aren't "games". You can't have proper "games" on a MediaWiki wiki (more on that later). They are articles that look like the things they're about, which we already have a category for. These "games" could easily exist in mainspace without any complaint. There are few enough of them (that is, properly executed games) that they don't need their own namespace, less their own Main Page. There's been a lot of discussion about how a game ought to be judged, and I'm here to say they ought to be judged as articles. Does a game serve a satirical purpose? Great! It's an article (that looks like the thing it's about). Otherwise, it's VFD fodder, or a personal project that needs to be kept in userspace. 
Most games we have now are not funny and/or do not need to be written in the game format. People assume the Game namespace is a free pass to make a game. It's not; an unfunny game is no different than an unfunny article. If we move games to mainspace I think it'll be clearer just how many games are conceptless bunches of text. And furthermore... MediaWiki is not a game engine Is your game a standard text-based game? No? Then it's based on a fucking hack. Specifically, the hack in question stems from - of all things - how our forums work. I actually investigated this junk and managed to create a few new templates to set variables. But the important thing is our more complicated games are based entirely on our forum extension. If that forum extension ever changes how it works, all the advanced games are totally screwed. Think about that for a minute. If the forum extension manages to fix itself up, our Game namespace is toast. That includes at least one featured Game. That is not a gamble I want to see us taking. No other namespace - heck, no article - is dependent on a MediaWiki extension hack to operate. CSS hacks, yes, JS hacks, yes, but something server-side, something dependent on Wikia, something that might be declared a "bug" and expunged later? Certainly not. At this point you might be tempted to suggest that, should this ever happen, we'll tell Wikia to keep the outdated forum extension indefinitely. This is an incredibly stupid idea. Refusing an update to an extension just because it has a bug you like to exploit isn't a good enough reason. So what do we do? There are only a few games that have a reason to be presented as games, just as there are only a few articles that have a reason to be presented as something they aren't. There isn't any need for new quality standards. Something should only be in the game format if it has a reason to be presented as a game.
– Sir Skullthumper, MD (criticize • writings • SU&W) 17:51 Apr 04, 2011 - The game hack uses DPL forum? Nevermind how completely idiotic that is, it should be possible to swap that out for regular old DPL, which is a lot more... not likely to change so much, since it's not for forums, but for general things... like that. Hells, though, if we lose either of those altogether, a good chunk of the Uncyclopedia namespace will be kind of completely screwed, anyhow, as well as some actual projects that also use DPL and forum functionality, and don't think there wouldn't be more indirect impacts from losing maintenance, review, votes, etc, on the rest of Uncyclopedia. So... eh. - That said, the hacks are in templates, so it should be simple enough to change them to something less stupid when it breaks. - More of an issue would be the sheer number of subpages the non-compressed ones have... and how many of those tend to be completely pointless. I suspect that why the Game namespace was established may have had something to do with that, though, keep all the subpages out of Special:Random and whatnot... this would, however, be solved by compressing them, killing most of them, and/or adding some magic word to the subpages to hide them, assuming one exists, which one probably does, since that'd be a little too useful to have gone unconsidered. Anyone know it? ~ 18:53, 4 April 2011 - Ok, here are the games I think should be kept: - Zork (1 - 3) - Grueslayer (A lot of effort clearly went into this) - Abyss (Same as Above) - Oliver Twist (featured) - Game:Pick Up the Phone Booth and Aisle (featured) - Game:Alone in the dark (featured, Top 10 10) - Game:TheBlueScreenOfDeath (Is actually Funny) - Game:Pixel Hunt (Flash, kept on VFD) - Game:Oregon Trail (Has it's own article) - Game:2012 II (Work in progress, I've been working on it nearly none-stop.) What do you think?? 
- LOL vandalz 18:01, April 4, 2011 (UTC) - Just because effort went into something doesn't mean it belongs here or is even any good. I could point to a few things in particular, but my lawyers have advised against that. ~ 18:53, 4 April 2011 Sometimes I just wish that people would stop caring about things that shouldn't really be cared About. - LOL vandalz 18:55, April 4, 2011 (UTC) Not doing much of anything right now Hi, my name is Something "Socky" Sockadelic. You might remember me as being an "editor" of the "wiki," or you might not. Despite what you may think I've been an excellent doer of things lately. I'm going to break my streak though, as I always do when I see something so impossibly stupid I can't help but comment. Now, while this forum contains a great deal of stuff I could be commenting on, I'm gonna be lazy and concentrate on this: Sorry to break this to the world, but if that forum extension changes how it works, or stops working altogether, the important thing to me would be how all the forums will be:30, 4 April 2011 - We could always get rid of the forum namespace. It's pretty useless. MegaPleb • Dexter111344 • Complain here 21:36, April 4, 2011 (UTC) Let's but this: “GAMEY HUFF!! LET'S A HUFF 4000+ PAGES! YAAAAEAEAEAEEEEEAASAAAAAEEAAASAAAAAAAAAEAAAAAY!1” - LOL vandalz 21:47, April 4, 2011 (UTC) I like the game namespace Even though it really isn't as popular as it was a while ago, I think it still has promises. I like:49, April 4, 2011 (UTC) I have to agree with Lollipop. - LOL vandalz 20:03, April 5, 2011 (UTC) - Me too. But I have my schoolwork and part-time job to do, so I can't contribute in here. I even have no time for zh-tw. - (We find out a way to play dependent background music continuously)--Sunny周 18:21, April 8, 2011 (UTC) "A lot of effort clearly went into [Grueslayer]" Thank you for deciding to NOT huff Grueslayer in the event the entire Game namespace gets snorted up ChiefjusticeDS' nose. 
You're right about the effort thing except for the past year and a half. I REALLY want to start working on it again and I have outlined several plans on the game's talk page, but I don't have the Internet at my house and coordinating the revival of Grueslayer with all the other shit I do during WiFi visits would be...not very hard, actually. What I need to do is delete most of the actual game, re-do everything that's left, then get a team of Implementors and start again. I could start looking through the game tomorrow or on Friday, it depends on my schedule and my amount of laziness. If anyone wants to help, i'm usually on the #uncycloepdia IRC channel a whole lot, contact me there or something. As for the actual topic: deleting the namespace would free up a whole lot of space and create less cruft, but it would also close the doors on people who could make potentially awesome games. Me and Emmzee created Grueslayer on early 2007. The game consisted of a shitty fucking ONE-LINER until April, when someone reminded us about it and the rest is history. If that had happened in early 2010, instant huff. I personally think we should keep the game namespace open, but with stricter moderation. PuoppyOnTheRadio has the right:44, April 6, 2011 (UTC) - Exactly, whqat POTR said. Games like "Alone in the Dark" and "Pixel Hunt"...this isn't an online gaming website...it should just be text games, and even then, guidelines. --:53, April 6, 2011 (UTC) I'm, sorry to say, But we can't use Flash. The website which owns us (Wikia) Strictly forbids it. Also, the majority I've seen on English and Korean Uncyclopedia range from Average to Broken, and maybe even Unplayable. Sorry about that, But Flash SERIOUSLY Doesn't work. - LOL vandalz 18:28, April 8, 2011 (UTC) - Basically, the majority is done by me. I sometimes hear that the uncyclopedians in here can't play my games. Recently, I think I know the reason. 
- There are at least 3 of my games are based on fullwidth forms Chinese characters. Therefore foreign computer probably can't display my games so well. (Maybe Korean doesn't have this problem. I have rent computer there when traveling and it seems okay.) - My UnTunes Hero is unplayable for some keyboards and computer. I don't know. I can't control that game with my new computer either. But I have no time to remake it. - So, maybe it's my games giving you a feeling that Flash Games are not suitable for Uncycloepdia. It's my mistake but Flash Game IS a one of good directions for Game Project to develop.--Sunny周 18:47, April 8, 2011 (UTC) - I just played grueslayer for the first time, made me laugh so hard my roomy's ran in to see if I'd finally gone mad! How do I edit? What can I do to contribute ? :) Lock'd And Loaded ~CUN ~ (Shoot!) 22:24, April 6, 2011 (UTC) HUFF ME - Support. Mister Victim (talk) 07:13, April 9, 2011 (UTC) -:46, 9 April 2011 For inspiration's sake I think there's actually a pretty good reason for keeping the game namespace. Sure, a lot of the crappiest stuff goes into that space, but there is an advantage of keeping the namespace- inspiration! As Mr Skullthumper himself said, games like Zork are actually parodies of real-life games which are written in the style of their subject. Indeed, most of the acceptable content in the namespace has to have a similar idea behind it. And that's where the inspiration comes in. Everybody knows that the Game namespace is still part of Uncyclopedia, which is a parody website. So I think it could be an excellent breeding ground for some bright minds to present some of their work in such a manner. Maybe making fun of things by making a funny game about them is precisely the kind of work where they will shine. I know a lot of the Game stuff has ended up on VFD. But the namespace could inspire a lot of people to come up with innovative ways of making us laugh, couldn't it? 
--Scofield 18:16, April 9, 2011 (UTC) - Correct. We simply need a way to keep shitty cruft from popping up less often than it does now, and ensure quality control throughout the namespace. -:38, April 9, 2011 (UTC) - I Agree with Trar. I Even made an Article about it. (・A・) - LOL vandalz 18:51, April 9, 2011 (UTC) Hey, I remember something like this And Skullthumper, twit that he is, has put it far better than I ever could. Remove this worthless namespace post haste and replace it with something better. Like pictures of kittens. Or video of dudes getting hit in the nuts. Or a video of Skullthumper getting hit in the nuts with a kitten. Apr - For Skullthumper getting hit in the nuts with a kitten. -:13, Apr 11 My Opinion Of the text games on Uncyclopedia, very few are any good. Most are just stupid IP creations that go around in circles. I notice this pattern with most games: 1) Some anonymous user gets bored and decides to make a game 2) Several days of irritating users with the constant creation of subpages 3) People play it and realise that it isn't really any good 5) Gets votes for deletion with a huge consensus 6) Zombiebaron deletes all the pages and subpages which also clogs up the recent changes log 7) Back to step 1 I admit SOME games are reasonable, and should be kept and protected. But largely the games are all stupid, not fun and a waste of time. If a user is really desperate to make a game, that's what their userspace is for. - 10:33, April 11, 2011 (UTC) - This is mostly accurate for modern games, created in this day and age. Back in MY day we had shit like this EVERYWHERE. It wasn't ALL bad, really, just...mostly bad. Mind you that was the age of Famine and Nintendorulez. I)}" > 13:14, April 11, 2011 (UTC) Let's get back to the important stuff Like this. That's kind of important. MegaPleb • Dexter111344 • Complain here 08:30, April 12, 2011 (UTC) - It's only a matter of time before the Empire changes the forum extension on us.
The forums, the advanced games lauded by so many - destroyed within seconds. We need to either have a backup plan or make our games simpler like:45, April 12, 2011 (UTC) Lyrithya had some suggestion on that, I think.... --Scofield 21:05, April 12, 2011 (UTC) Fresh, New Idea! Stop worrying about how to delete half the website, and figure out a way to make this site Fresh. And possibly New. (See what I did there?) Or am I the only one who thinks we spend too much time worrying about worthless, idle drivel? Be funny, you dicks!:33, April 12, 2011 (UTC) - Yes, we need lots of buzzwords and buzzphrases! - &c. Mister Victim (talk) 16:45, April 13, 2011 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Huff_the_Game_Namespace
Dependency Management with the Swift Package Manager
By Chris Ward

Swift's journey into a fully fledged cross-platform language continues as its contributors focus on the version 3 release. Any language that wants a long-term existence needs a way of adding functionality that doesn't require the core developers to add every request. This typically takes the form of package or dependency management, and the Swift Package Manager (SPM) will be one of the many features added to Swift 3. But something not being officially released has never stopped inquisitive developers experimenting in the past. In this article I will introduce the SPM, show you how to install it and existing packages, and how to create your own.

Note: You should be able to follow the steps I will present on an OS X or Linux platform.

Living on the Edge with the Swift Package Manager

The SPM is not part of the current official Swift release and you will need to install trunk development snapshots of V3 to use it. This is not too difficult and you can find instructions here. As installing this snapshot could break your setup for production app development, I recommend you install swiftenv, which lets you switch between installed versions of Swift and is super useful for experimenting with Swift versions. Once you have installed swiftenv and you have a trunk development release active, check you have the package manager by running:

swift build --version

You will hopefully see something like Apple Swift Package Manager 0.1. If you want to use Xcode instead then it will manage different Swift versions for you. Open Xcode -> Preferences and set the Toolchains under the Components section.

Note: Swift is undergoing rapid development and things change all the time, breaking projects. For this tutorial I used the February 8th snapshot for greater compatibility. Now you can see why I mentioned how to switch versions and you will do it a lot until version 3.
Using Existing Packages

Many existing Swift packages are available, but currently no central listings service like NPM exists, so finding them can be hard. One option is The IBM Swift Package Catalog, but it contains a mixture of CocoaPods, Carthage and Swift packages. I expect there will be an 'official' list sometime in the future.

To add an existing package as a dependency to a project, create a file named Package.swift and add the following:

import PackageDescription

let package = Package(
    name: "SitePointSPM",
    dependencies: []
)

This is the basic structure of a package file where you set a name for the package itself and an empty dependencies array. To add a dependency, change dependencies: [] to:

...
dependencies: [
    .Package(url: "", majorVersion: 0, minor: 4),
]
...

This downloads a dependency from a url (generally GitHub) with a specified version using semantic versioning. Create a Sources folder and in it, create Main.swift. Add the following code:

import Curassow
import Inquiline

serve { request in
    return Response(.Ok, contentType: "text/plain", body: "Hello World")
}

This code uses the Curassow and Inquiline packages to configure and start a basic HTTP server. Execute swift build --configuration release to build this simple app. Notice that when you build for the first time the Swift build process will download the dependencies declared in your package file, plus the dependencies that they declare.

Creating Your Own Package

You construct a Swift package in the same way as a 'normal' application, but a package generally consists of source files located in a Sources directory. The sample application provided by Apple is a great example to learn the potential. In this example, the PlayingCard package defines a PlayingCard. Then the DeckofPlayingCards package imports the PlayingCard package and uses its methods and objects to create a randomly shuffled Deck of PlayingCards.
Here Be Helpful Dragons

Following this introduction you have likely hit problems installing and using the Swift Package Manager, which shows it's certainly not ready for production applications. But it's simple to use and create packages for, and whether you decide to wait for a stable Swift 3 or jump right in and update your code every time something breaks, the Swift Package Manager is another puzzle piece in making Swift a true full stack language in the next 12 months. What are your thoughts?
https://www.sitepoint.com/introducing-the-swift-package-manager/
Testing

Introduction

Types of tests

The 'meteor test' command

- Starts up the test driver package. As we'll see, this is ideal for unit tests and simple integration tests.

Driver packages:
- Web reporters: Meteor applications that display a special test reporting web UI that you can view the test results in.
- Console reporters: These run completely on the command line and are primarily used for automated testing like continuous integration.

Recommended: Mocha.
- practicalmeteor:mocha - Runs client and server package or app tests and displays all results in a browser. Use spacejam for command line / CI support.
- meteortesting:mocha - Runs client and/or server package or app tests and reports all results in the server console. Supports various browsers for running client tests, including PhantomJS, Selenium ChromeDriver, and Electron.

Here's how we can add the practicalmeteor:mocha package to our app:

Test Files

Test files themselves (for example a file named todos-item.test.js or routing.app-specs.coffee) can register themselves to be run by the test driver in the usual way for that testing library. For Mocha, that's by using describe and it. Note that arrow function use with Mocha is discouraged.

Test data: This technique will only work on the server. If you need to reset the database from a client test, you can use a method to do so.

Generating: To use the factory in a test, we simply call Factory.create.

Mocking: In a Mocha test, it makes sense to use stub collections in a beforeEach/afterEach block.

Unit testing

A simple Blaze unit test

In the Todos example app, thanks to the fact that we've split our user interface into smart and reusable components, it's natural to want to unit test some of our reusable components (we'll see below how to integration test our smart components). A simple example of a reusable component to test is the Todos_item template.
Here's what a unit test looks like (you can see some others in the app repository). imports/ui/components/client/todos-item.tests.js:

Of particular interest in this test is the following:

Importing

Stubbing: then you'll need to import Todos both in that file and in the test.

Creating data

We can use the Factory package's .build() API to create a test document without inserting it into any collection. As we've been careful not to call out to any collections directly in the reusable component, we can pass the built todo document directly into the template.

A simple React unit test

Running unit tests

To run the tests that our app defines, we run our app. Usually, while developing an application, it makes sense to run meteor test on a second port (say 3100), while also running your main application in a separate process. Then you can open two browser windows to see the app in action while also ensuring that you don't break any tests as you make changes.

Isolation techniques

In the unit tests above we saw a very limited example of how to isolate a module from the larger app. This is critical for proper unit testing. Some other utilities and techniques include:

- The velocity:meteor-stubs package, which creates simple stubs for most Meteor core objects. Alternatively, you can also use tools like Sinon to stub things directly, as we'll see for example in our simple integration test.
- The hwillson:stub-collections package we mentioned above.

There's a lot of scope for better isolation and testing utilities.

Testing publications

Using the johanbrook:publication-collector package, you're able to test an individual publication's output without needing to create a traditional subscription. Note that user documents - ones that you would normally query with Meteor.users.find() - will be available as the key users on the dictionary passed from a PublicationCollector.collect() call. See the tests in the package for more details.

Integration testing
Simple integration test

Of particular interest in this test is the following:

Importing: As we'll run this test in the same way that we did our unit test, we need to import the relevant modules under test in the same way that we did in the unit test.

Stubbing.

Creating data.

Full-app integration test

In the Todos example application, we have an integration test which ensures that we see the full contents of a list when we route to it, which demonstrates a few techniques of integration tests. imports/startup/client/routes.app-test.js:

Of note here:

- Before running, each test sets up the data it needs using the generateData helper (see the section on creating integration test data for more detail), then goes to the homepage.
- Although Flow Router doesn't take a done callback, we can use Tracker.afterFlush to wait for all its reactive consequences to occur.
- Here we wrote a little utility (which could be abstracted into a general package) to wait for all the subscriptions which are created by the route change (the todos.inList subscription in this case) to become ready before checking their data.

Running full-app tests

To run the full-app tests in our application, we run:

Creating data: You can install version 4 from nodejs.org or version 5 with brew install node. Then we can install the Chimp tool globally using:

Chimp will now look in the tests/ directory (otherwise ignored by the Meteor tool) for files in which you define acceptance tests. In the Todos example app, we define a simple test that ensures we can click the "create list" button.

Running acceptance tests

To run acceptance tests, we simply need to start our Meteor app as usual, and point Chimp at it. In one terminal, we can do: In another:

Creating data

Although we can run the acceptance test against our "pure" Meteor app, as we've done above, it often makes sense to start our meteor server with a special test driver, tmeasday:acceptance-test-driver. (You'll need to meteor add it to your app.)
Command line

We've seen one example of running tests on the command line, using our meteor npm run chimp-test mode. We can also use a command-line driver for Mocha, meteortesting:mocha, to run our standard tests on the command line. Adding and using the package is straightforward. (The --once argument ensures the Meteor process stops once the test is done.)

We can also add that command to our package.json as a test script. Now we can run the tests with meteor npm test.

CircleCI
https://guide.meteor.com/testing.html
The idea is to break this component into two separate .Net components. The first will be a proxy component which implements all of the original methods and properties from the VB6 component, and implements the same COM interface, so it can be swapped in for the old component. It won't do any work, but will instead invoke methods on the second component to do the work. The second .Net component will be a .Net assembly written in the proper .Net fashion that does all of the work that the old VB6 DLL used to do. New .Net applications can make calls directly into the second component and bypass the COM interface.

So how do we create this COM facade? The way I ended up doing it was to extract the type library from the VB6 component, generate a .Net assembly that defines the interfaces in CLR metadata, and then create a .Net assembly to override the interfaces. The steps for this are:

1. On a machine which has your VB6 COM component registered and Visual Studio 6.0, fire up the OLEView tool.
2. Find your VB6 component. I looked under "Automation Objects" for the ProgId, which is the "MyComp.ClassA" you use when you do a CreateObject("MyComp.ClassA") in VB6.
3. Select "Object-View Type Information" to display the "ITypeLib Viewer". Your IDL is in the right-hand pane.
4. Save the IDL by clicking the save button. You now have a "MyComp.IDL" file.
5. Fire up the VS2008 command prompt and navigate to your IDL file.
6. Type "MIDL MyComp.IDL" to generate a type library. I needed to reorder some of the things in the IDL to get it working. You will now have a "MyComp.TLB" file.
7. Create your .Net assembly by typing "tlbimp MyComp.Tlb /out:MyCompTlbAssembly.dll". This will generate a "MyCompTlbAssembly.dll" file. This is a .Net assembly that defines your interface. Now we need to override it and implement our new code.
8. Create a new .Net Class project. I called mine "MyCompProxyNET".
9.
Add a reference to your recently created "MyCompTlbAssembly.dll" by using "Project-Add Reference", then going to the "Browse" tab and finding your dll.
10. Create a class for the first class from your VB6 component that you want to override, e.g. ClassA. I called my class "ClassAProxy".
11. Add an import of System.Runtime.InteropServices, set your ProgId to match the original VB6 ProgId, and implement your interface. So you have something like this:

using System;
using System.Runtime.InteropServices;

namespace MyCompProxyNET
{
    [ProgId("MyComp.ClassA")]
    public class ClassAProxy : MyCompTlbAssembly.ClassA
    {
    }
}

12. Now implement the interface for this class. VS2008 will generate stubs for you if you right click the interface ("MyCompTlbAssembly.ClassA" in this case) and choose "Implement Interface". Nice!!
13. Almost there- now we just have to make this COM visible. Under "Project-Properties-Application-Assembly Information", check the "Make assembly COM-Visible" box, and "OK". Under "Project-Properties-Build" check the "Register for COM Interop" box.
14. Build your solution. Your .Net assembly is now the registered component for that ProgId.

You can verify this by looking up the ProgId in the registry under HKEY_CLASSES_ROOT and following the Clsid to HKEY_CLASSES_ROOT/CLSID/{your clsid}. You can see that the name of the .Net component will be displayed in the "InProcServer32" key. Previously your VB6 dll would have shown up in here.

Run one of your apps that used the old VB6 component and you should get a nice "The method or operation is not implemented" error message. This error is thrown by the .Net stubs you created in step 12. Just put some real code in your stubs and away you go!
http://juststuffreally.blogspot.com/2008/05/creating-com-proxy-in-net.html
W3C | TAG | Previous: 16 Sep | Next: 9 Oct
Nearby: agenda | issues list | www-tag archive

All present. Back row in photo from left: Tim Berners-Lee, Norm Walsh, Stuart Williams (co-Chair), Dan Connolly, Chris Lilley, Roy Fielding, David Orchard. Kneeling from left: Paul Cotton, Tim Bray, Ian Jacobs (Scribe).

Accepted 16 Sep minutes. Accepted status of completed action items in the agenda.

Editor's Note: The archived IRC logs that we used to compile these minutes are broken and incomplete due to connectivity outages at the meeting. These minutes were pieced together from the scribe's local notes as well as from information from the IRC logs.

The TAG extends hearty thanks to Tim Bray (Antarctica) and Philip Mansfield (Schema Software) for hosting and providing logistical support for this meeting!

TB: I'm pretty pleased by and large. I like the progress of the architecture document. I was very pleased with our intervention on the SOAP question. In an ideal world, we would go faster, but we're all busy. I think that at some point our work will slow down, in fact.

SW: Who expects to be at the conference: PC, TB, CL, NW, DO

IJ: I have been working on the TAG election schedule; terms to start 1 Feb. Any objections to extending terms one month? [None]

TB: Our next (one-day) ftf meeting is 18 Nov.

Resolved: No ftf meeting during tech plenary week.

See httpRange-14. TimBL presents his arguments using an illustration of a car (also available as SVG, but less up-to-date). Thumbnail version: There is general support for TB's proposed conflation of principles 2 and 7. [RF doesn't like "instance" here since not really an instance (in the sense of instance of a class)]

Lunch.

See namespaceDocument-8.

[IJ: Yes, in XAG 1.0: 2.2 Separate presentation properties using stylesheet technology/styling mechanisms.]

See issue xlinkScope-23. [DC: hmmm... demo doesn't work in the use case I'm interested in.] [DC: Demo works in Galeon, which is built from Gecko.]
"The only project with more than three "yes" entries in the table's eight columns is Fujitsu's XLiP, the "XLink Processor." Its XLink engine advertises support for XLink simple and extended links, including support for locator, resource, and arc elements. The engine itself isn't available, but..."

Original question from Jonathan Borden:

All present except Paul Cotton. Reference draft is the 30 Aug 2002 draft.

See namespaceDocument-8 and mixedNamespaceMeaning-13.

TB: On namespaceDocument-8, we have an action item. We have an assertion we agreed to yesterday: (1) namespace docs should be there, (2) should be something like RDDL.

What xml:lang means in a very large number of cases, without too much effort; xml:lang in other ways (e.g., xslt outputting it).

[Lunch. Paul Cotton rejoins meeting.]

[We reviewed Norm Walsh's draft email regarding XLink, which he revised and sent to www-tag; see XLink email.] [We reviewed Tim Bray's draft email regarding namespaces, which he revised and sent to www-tag; see Namespaces email.]

DO: I've been asked to be on the program committee. I'd like substantial TAG participation in that day. Some ideas: [There was no formal resolution, but a general sense that those already involved on the program committee should continue as they are doing.]

Resolved: Cite RFC 1958 "Architectural Principles of the Internet".

PC Editorial: The word "these" in the last paragraph of 1.3 ("Some of THESE principles...") has an unclear antecedent. Please fix.

DC to RF: When do you expect a new RFC?

Topic: Are GETs really safe, e.g., in the context of micropayments?

TBL Proposal: Add a security section.

TBL: But not or file://etc/host
http://www.w3.org/2002/09/24-tag-summary.html
In 1987 Stan Winston Studios created one of the most iconic creatures to grace the silver screen- the Predator. An actor named Kevin Peter Hall played the Predator- and now twenty-six years later his nephew Jamie Hall would pay tribute to Uncle Kevin and once again bring this beloved creature to life.

Originally this was going to be much shorter and titled "Predator backpack and animatronic cannon," as that is what the majority of this instructable is about, but that would be a disservice. This is really a story of how a few members of a Predator fan group, known as The Hunter's Lair, came together to create what we hoped would be the best replica Predator costume ever made- an accurate replica of the costume Kevin Peter Hall wore in the first Predator film. This was an enormous collaborative effort, and while this instructable will focus primarily on the creation of the backpack and cannon, it wouldn't be right to not tell the whole story and give credit to all of the extremely talented individuals involved in recreating the costume of this wonderful film creature.

A seed is planted...

At the Monsterpalooza 2012 convention there was a panel devoted to the 25th anniversary of the first Predator film, and many of the original artists from Stan Winston Studios that worked on the film were there to talk about the making of the film. Several members of The Hunter's Lair attended, as did one very special individual- Jamie Hall. After speaking with Jamie, two of the Lair members by the names of Gene Emory and Damon Silva had the idea of meeting up at Monsterpalooza the following year and creating a replica suit for Jamie to wear. The goal would be to make a replica of the Predator suit Jamie's uncle wore and have Jamie wear it around the convention hall.
Matt Winston of Stan Winston School of Character Arts later had the idea that it would be really cool if several members of the original film crew could again return and then suit up Jamie just as they had with his Uncle Kevin 25 years ago. Jamie was thrilled with the idea and what would be known as the "Jamie Hall Predator Suit Homage Project" was born. But first we need to back up a few years...

Video of the finished backpack/cannon-

Update- Jamie as the Predator takes on Wolverine in Super Power Beat Down!

Step 1: Sculpting the Backpack

Predator costumer Carl Toti contacted me via the Hunter's Lair, wanting to know if it was possible to add animatronics to a replica Predator backpack and cannon he was creating. I believed it could be done so we began collaborating on the project and we've been friends ever since. Carl is an extremely talented sculptor and an absolute perfectionist. He didn't just want to make a replica backpack and cannon- he wanted them to be as accurate and faithful to the original movie items as possible- which proved to be a tall order. The Predator backpack and cannon are pretty complex movie props and there really wasn't a lot of documentation available concerning how the original props were made, let alone any really good close up photos of the original props. Carl would spend nearly five years on his quest for Predator nirvana, gathering bits of information as it became available, constantly re-sculpting to make it more accurate to the original. But there were always areas of his sculpt that were questionable because he could never find photos of the original backpack and cannon taken from the right angles that would show him what he needed to see. Then..... pay dirt!

Fast forward a few years. Carl got a lucky break- two individuals happened to be in the right place at the right time.
Art Andrews (he runs The Replica Prop Forum- better known as The RPF, as well as The Hunter's Lair and The Dented Helmet) and friend George Frangadakis were able to obtain multiple photos of the original props and supply them to Carl. The photos revealed an enormous amount of detail he hadn't seen before, and while it would require an extensive overhaul of his sculpt, it would be worth the effort. When I received the backpack I couldn't believe it- it really was one of the most impressive prop replicas I had ever seen. The photos just don't do it justice.

Here's how Carl created the backpack-

Tools and Materials
- Crock pot
- Clay sculpting tools
- Foam core
- Home built roto-caster
- 1 gallon of Smooth-On Smooth-Cast 300 casting resin

From Carl: "The sculpt started with a dense foam pull of the torso armor which had to be sculpted and cast first (naturally). I then constructed a foam core and hot glue core skeleton which clay sticks to very nicely. I used Chavant medium NSP clay which comes in 10 lb. blocks which I melt in my crock pot. The physical property of the clay is such that it is easy to work with when warm, and feathers nice. As it cools, it becomes harder, which makes it ideal for machine-like parts because you can actually carve lines and grooves into it, etc. I populated the sculpt with "greeblies" (a term in the prop building world for small detail items) I obtained from model tank kits, as well as having a lot of them custom printed from Scott Andrew's 3D printer to exactly match the ones I couldn't find, as seen in the reference photos. Above all, the backpack and cannon had to be screen accurate. I molded the backpack in place on my mannequin with brush-on silicone, followed by a Plasti-Paste mother mold on top of that. If that wasn't enough, I then had to figure out how to build a roto-caster large enough to cast the darn thing!"
Chavant NSP is a sulphur-free clay- if your clay has sulphur in it and it comes into contact with a silicone mold material, the silicone mold will not cure. So once the sculpt is finished you have to mold it. For large items like this backpack a mother mold is the way to go, as a large box mold would be impractical- a box type mold would use an enormous amount of silicone molding compound and it would be extremely heavy. A mother mold is a type of mold that has a silicone layer surrounded by a rigid backing (often called a jacket)- sometimes the backing is fiberglass or plaster, but in this case Plasti-Paste was used since a large plaster-backed mold can be really heavy.

To make the mold, parting lines are first created on the sculpt using card stock or thin wood pieces- this helps divide the mold into multiple sections that can be bolted together. Next silicone molding compound is brushed onto the sculpt- it begins with a thin layer followed by a heavier layer (a thickening agent is added to the silicone.) The thin layer is brushed on first so it will capture all of the tiny details in the surface of the sculpt. The thin layer will allow air that is trapped to escape in the form of bubbles- if you brush on a thick layer the air can't escape and you can end up with a lot of surface imperfections in your cast part. While the silicone is curing, small chunks of silicone from old molds are stuck onto the surface, forming keys. These keys will help hold the silicone in place against the rigid backing so everything lines up just right when you go to cast resin in the mold. Once the silicone has cured a rigid backing material is applied to it. After the backing has cured, additional sections of the sculpt are molded using the same process. When molding the additional sections a mold release agent (in this case Vaseline) needs to be applied to the mold parting lines as silicone will stick to itself.
Once all the sections have been molded, holes are drilled through the rigid backing so bolts can be used to hold all of the mold sections together. Next a hole is cut into the mold so the resin can be poured in- the backpack casting uses approximately 96 oz of casting resin. After this is done the mold can be taken apart, the sculpt is removed from the mold, and the mold is ready for casting in the roto-caster. The roto-caster was needed because there is no way to slush cast it- the mold is simply too big and too heavy to try and hold in your hands- the mold ready to cast weighs over 50 pounds. The roto-caster spins the mold in a frame-within-a-frame assembly so once you pour the resin in you spin the mold and you get a nice even resin wall thickness in your finished casting. The backpack and cover for the med kit area were done using separate molds. The backpack is entirely hollow to reduce weight and allow room for electronics as well as a future screen accurate med kit.

Step 2: Creating the Cannon

Now Carl needed a new cannon to go with his awesome backpack- Carl had already made a cannon- but armed with his new reference material he knew it could be much better. There was one issue with the new cannon reference material- something wasn't right. One side of the cannon looked misshapen and looked like it was missing details. It clearly didn't look right and Carl ended up using his best judgement to determine how it should have looked. This turned out to be a good move. At the 25th Predator anniversary, Carl struck up a conversation with Richard Landon. Richard was one of the members of the Stan Winston Studios crew that worked on the original film. Carl had presented members of the original film crew with a trophy made from his then new cannon. Richard was really impressed by the detail and Carl asked him about the misshapen movie cannon.
Richard laughed and told him a story about how they were filming the movie in the jungle and it was extremely wet at the time. As it turns out, the movie cannon was fitted with squibs that were fired off in sequence to simulate cannon fire during filming. Because it was so wet, multiple squibs happened to fire all at once- kablamm! The entire side of the cannon had blown apart! The crew spent a lot of time scouring the jungle floor for the cannon remains and having found the missing pieces, Richard got some pantyhose and superglue and pulled the pantyhose over the reconstructed cannon and coated the assembly with superglue to hold it together. A quick paint job and they were back in business. So that's why the movie cannon looked misshapen on one side.

Creating the cannon-

Whereas the previous cannon had been sculpted by hand, the new cannon and cannon arm would be modeled on the computer and printed using a 3D printer. Carl's friend Scott Andrews runs LightBeam 3D and was able to help him prep the model for printing and then print it in a proprietary UV cured plastic paste using his EnvisonTec Perfactory 3D printer. Scott also printed some of the small "greeblies" for the backpack sculpt. Carl then assembled the printed cannon model, smoothed the seams and molded it for resin casting using Smooth On Mold Max tin based silicone in a box mold. A box mold is a one piece mold where the item to be molded is placed in a box and silicone molding compound is poured around it. If an item is really small the box mold can be poured in one shot and the model can be cut out. In this case the cannon is too big for this, so the mold is made in two halves. A clay bed is built up around one half of the cannon and piping is run around it to form a mold lock, and silicone molding compound is then poured over the cannon. After the silicone has cured the clay is removed and a mold release is applied to the first mold half.
Now silicone is applied over the second half of the mold and allowed to cure. Once the silicone has cured the mold can be split in two and the model is removed. Now resin is poured into the mold and is slush cast- the mold is moved around in multiple directions so the resin forms an even coating inside the mold. Because the sculpting, molding and casting process was much more complex for the backpack, the cannon was actually finished well before the backpack. The backpack molds were being finished up around the same time that everything was coming together for the Jamie Hall Homage Suit Project. The project had been on the back burner from the previous year and no one was really sure if it was going to come together. Now it was officially a "go." This is when I received a call from Carl asking me if he was able to get the castings to me would I be able to add the animatronics in time? I think I thought about it for maybe five seconds- I said ship it over as soon as you can. This meant that once I ordered and received the necessary electronic and mechanical parts I had four nights and one day to build the animatronic mechanism and electronic control system with sound effects. What had I just gotten myself into...

Step 3: Animatronics

Burning the midnight oil-

Having a tight deadline means you really need a solid plan, so right after I got off the phone with Carl I did several sketches and had the animatronic mechanics roughly worked out so I could order materials as soon as possible. I wanted to keep the mechanics as simple as possible because there wouldn't be time to start over if something didn't work. The idea of using the head tracking cannon system that I had previously built was thrown out- it was deemed too complex and I wouldn't be at the convention to help set it up and troubleshoot it if something went wrong. I also wanted to use as many readily available parts as possible so if something broke replacement parts could be more easily obtained.
After speaking with Carl and discussing our options we decided that having the cannon move through a programmed sequence would be best. The original movie cannon worked using RC control and it raised and lowered the cannon via a cable chain/gear drive system. We didn't want to use RC control because we didn't want to have Jamie followed around the convention hall by a radio operator. This meant that the entire control system had to be self contained and it would need to be very easy to operate by Jamie while wearing the suit. The system ended up using three finger tip control buttons- one button for the cannon movement sequence and the other two buttons for different Predator sound effects.

Building the cannon mechanism-

Materials-
- Aluminum sheet, plate and angle- I used metal that I already had on hand but Online Metals will ship you whatever you need, cut to size.
- 6" x 1/4" diameter stainless steel rod
- 1/4" ID x 3/8" OD bronze bushing- I already had this on hand but places like McMaster-Carr sell it
- 4 x Aluminum standoffs
- 1 x 1/2 inch bore clamping hub
- 1 x 1/2 inch bore flat bearing mount
- 1 x servo shaft attachment, 1/2" (Hitec)
- 1 x 48T 32 pitch nylon gear with 1/4" hub
- 1 x 12T 32 pitch brass gear for Hitec servo spline
- 1 x HS-485HB servo
- 2 x HS-645MG servos
- 1 x 1/4" bore set screw hub
- 1 x 4-40 x 3/16 inch nylon ball links (4-pack)
- 1 x 4-40 x 12 inch stainless steel threaded rod
- 1 x Arduino Wave Shield
- 1 x SD memory card
- 3 x tactile finger tip switches
- 1 x Arduino Pro 5V
- 1 x Arduino Pro Mini 5V
- 1 x FTDI basic breakout 5V
- 1 x 9V battery- grocery store
- 4 x AA alkaline batteries- grocery store
- 2 x Ethernet jacks
- 2 x Ethernet jack break out boards
- 1 x Ethernet cable- came from junk box
- 1 x blue Luxeon LED
- 1 x Buck Toot LED driver
- 1 x barrel jack extension cable
- 1 x barrel jack connector
- 1 x proto board, 2" square
- 1 x DPDT power switch- I used one from my junk box but something like this switch would work
- 3 x TIP120 Darlington transistors
- 3 x 1K Ohm resistors
- 1 x 3.3V voltage regulator
- 2 x straight break away headers
- 2 x female headers
- 1 x plastic standoffs
- Heavy duty servo wire
- 1 x small speaker- came from my junk box

Tools-
- Portable band saw- I have a DeWalt portable band saw and it's the best thing ever for quickly cutting metals
- Benchtop lathe with milling attachment- I have a small Taig benchtop lathe and it's probably the best buy there is for a small lathe
- Drill press
- Soldering iron
- Allen wrenches
- Screwdrivers
- Drill bits
- 4-40 and 6-32 taps for threading holes

The cannon rotation system-

I began by making the mount for the servo that would sit inside the cannon and allow it to rotate right and left. I made the base plate from 1/8" thick Aluminum sheet and mounted the HS-485HB servo using 4 Aluminum standoffs. I drilled a hole in the base plate that would allow the 1/2" diameter servo extension to stick through and then I supported the extension using a 1/2" bore bearing mount. This bearing mount would take any large side loads off the servo. I also cut a slotted hole so the servo lead wires could stick through the base plate.

The cannon arm-

I decided to make the cannon arm from a single piece of Aluminum. I happened to have a big 1" thick Aluminum plate section on hand so I cut a large bar from it using my portable band saw. I drew lines on the Aluminum and rough cut out the notched ends and a "U" shaped pocket for the upper pivot servo (Hitec 645MG) and then finish cut them using my lathe. I located the pivot holes on each end of the arm and drilled them using a 1/4" drill with my drill press. Then I notched the side of the arm where the servo arm would be positioned and mounted the servo using four socket head screws. I had to mill out a small pocket on the underside of the arm so the servo lead would clear. I then cut a relief in the top of the arm so I could attach the resin detail plate from Carl's resin cast cannon arm. Next I made the upper pivot. The pivot bracket was made from 1" Aluminum angle.
Bolted to the top of the angle is a 1/2" bore clamping hub- this is what attaches to the cannon pivot servo. Bolted to the side of the angle bracket is a 1/4" bore set screw hub. A 1/4" stainless steel rod was cut to use as the pivot and a bronze bushing was slid over it, sandwiched in between the 1/4" hub and the cannon arm. There is a washer on the opposite side of the Aluminum bracket that keeps it from rubbing against the cannon arm. When the hub set screw is tightened the pivot is fixed in place and cannot move from side to side. The Aluminum bracket was now attached to the servo using a linkage made from a pair of 4-40 swivel ball links and a short length of 4-40 stainless threaded rod.

Finally I made the main pivot bracket. This was made from a 1/2" thick piece of Aluminum plate rough cut with the bandsaw and then milled using the lathe. There is a 3/8" diameter hole near the top of the bracket and a bronze bushing was pressed into the hole- the bronze bushing supports the 1/4" diameter stainless pivot rod. The main pivot bracket has another HS-645MG servo mounted to it. The servo has a 12T gear mounted to it that drives a 48T gear that is bolted to the cannon arm. The 1/4" stainless pivot rod is fitted through the hole in the left side and then slid through the bronze bushing and the hole in the right side of the cannon arm and is press fit into the hole in the 48T gear. A small Aluminum plate was then bolted to the bottom of the main pivot bracket- the entire assembly was then ready to be fit to the backpack.

Step 4: Electronics and Programming

Adding a brain and sound effects-

Now that I had the mechanics done I needed to make it move. I used an Arduino Pro Mini to handle the inputs and servo movements and an Arduino Pro with an Adafruit Wave Shield to handle the sound effects. There is a small board that has three transistors on it- the transistors turn on the helmet laser sight and trigger the cannon LED and sound effect.
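Stepping back to the elevation drive for a moment- the 12-tooth servo pinion turning the 48-tooth arm gear gives a 4:1 reduction, which trades servo travel for torque. A quick Python sanity check of what that buys (the 180 degrees of servo travel is a typical hobby-servo figure, and the 9.6 kg-cm torque is Hitec's published 6 V rating for the HS-645MG- treat both as illustrative assumptions, not measurements from this build):

```python
# Gear reduction on the cannon's elevation axis: a 12-tooth servo pinion
# drives a 48-tooth gear bolted to the cannon arm.
pinion_teeth = 12
gear_teeth = 48
ratio = gear_teeth / pinion_teeth  # 4.0 - the arm turns 1/4 as far as the servo

# Assumed full servo travel of 180 degrees (typical hobby servo):
servo_travel_deg = 180
arm_travel_deg = servo_travel_deg / ratio  # 45 degrees of arm elevation

# Torque at the arm is multiplied by the same ratio (ignoring friction).
# The HS-645MG is rated around 9.6 kg-cm at 6 V per Hitec's spec sheet:
servo_torque_kg_cm = 9.6
arm_torque_kg_cm = servo_torque_kg_cm * ratio

print(ratio, arm_travel_deg, arm_torque_kg_cm)
```

So the arm only sweeps about a quarter of the servo's range, but sees roughly four times the holding torque- a sensible trade for slowly raising a heavy resin cannon.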
The board also has a 3.3V regulator to provide power for the helmet laser sight. The cannon LED is a bright blue Luxeon that is driven by a constant current "BuckPuck" driver.

In order to avoid any servo noise issues the Arduinos are powered by a 9V battery and everything else is powered by four "AA" alkaline batteries. I could have used NiCad, NiMH or LiPo batteries, but since we were on such a tight schedule I didn't want the guys to have to worry about specialized battery packs or long charge times- they could get replacement batteries in any grocery store.

Control inputs would be three small finger tip tactile switches. The switches are on/off, so if Jamie pushed one of them a sound would repeat itself over and over until it was turned off. Likewise the cannon would continue to move through its programmed sequence until it was turned off. The finger tip switches were connected to the backpack using an ethernet cable that would be run down the length of Jamie's arm, and the switches would sit inside the glove fingers.

The electronics were mounted to a 1/8" thick Aluminum plate using plastic standoffs and 4-40 screws. The complete wiring diagram is shown and I've included the sound files- one sound is for the cannon, one is the Predator "clicking" sound and the last sound is the Predator roar. The Arduinos are programmed using a FTDI basic breakout.
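The latching on/off behavior described above (a sound or movement sequence repeats until its switch is turned off) comes down to polling the switch once at the top of each pass through the loop. A minimal sketch of that pattern, with a simulated switch standing in for the real tactile input:

```python
# Repeat an action for as long as a latching on/off switch reads "on".
# The switch is simulated with a scripted list of readings; on the real
# backpack this would be a digital read of the glove's tactile switch.
readings = [True, True, True, False]   # hypothetical: on for three polls, then off
played = []

def read_switch():
    return readings.pop(0)

while read_switch():             # poll once per pass, like the Arduino loop
    played.append("SOUND1.WAV")  # play the sound (or run the servo sequence) once

print(len(played))  # 3 - the action repeated until the switch was released
```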
I wrote up a guide to programming the Arduino here-

Here's the code for the Arduino Pro Mini- this controls the cannon movements:

#include <Servo.h> // include the servo library

Servo servo1; // creates an instance of the servo object to control a servo
Servo servo2;
Servo servo3;

int servoPin1 = 9; // control pin for servo
int servoPin2 = 8;
int servoPin3 = 7;
int ledPin1 = 11; // control pin for LED
int ledPin2 = 12; // control pin for laser sight
int soundPin1 = 10; // control pin for sound board

void setup()
{
  servo1.attach(servoPin1); // attaches the servo on pin to the servo object
  servo2.attach(servoPin2);
  servo3.attach(servoPin3);
  pinMode(ledPin1, OUTPUT); // sets the LED pin as output
  pinMode(ledPin2, OUTPUT);
  pinMode(soundPin1, OUTPUT); // sets the sound pin as output
  digitalWrite(ledPin1, LOW); // sets the LED pin LOW (turns it off)
  digitalWrite(ledPin2, LOW);
  digitalWrite(soundPin1, LOW);
}

void loop()
{
  digitalWrite(ledPin2, HIGH); // sets the LED pin HIGH (turns it on)
  servo3.write(170); // raises cannon arm
  servo1.write(140); // rotates cannon upward
  delay(2000);
  servo2.write(40); // rotates cannon away from head
  delay(2000);
  servo2.write(110); // rotates cannon toward head
  digitalWrite(ledPin1, HIGH);
  digitalWrite(soundPin1, HIGH);
  delay(10);
  digitalWrite(ledPin1, LOW);
  digitalWrite(soundPin1, LOW);
  delay(4000);
  servo2.write(60); // rotates cannon away from head
  servo1.write(120); // rotates cannon upward
  delay(1000);
  digitalWrite(ledPin1, HIGH);
  digitalWrite(soundPin1, HIGH);
  delay(10);
  digitalWrite(ledPin1, LOW);
  digitalWrite(soundPin1, LOW);
  delay(3000);
  servo2.write(120); // rotates cannon toward head
  servo1.write(150); // rotates cannon downward
  delay(2000);
  digitalWrite(ledPin1, HIGH);
  digitalWrite(soundPin1, HIGH);
  delay(10);
  digitalWrite(ledPin1, LOW);
  digitalWrite(soundPin1, LOW);
  delay(1000);
  servo1.write(140);
  delay(3000);
  servo1.write(170);
  delay(500);
  servo2.write(90);
  delay(1000);
  servo3.write(10);
  digitalWrite(ledPin2, LOW);
  delay(5000);
}
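Summing the delay() calls shows how long one full pass of the choreography takes (the three 10 ms entries are the short trigger pulses for the cannon LED and sound board). A quick tally, with the figures taken straight from the sketch:

```python
# All of the delay() values from one pass of the Pro Mini's loop(), in ms.
delays_ms = [2000, 2000, 10, 4000, 1000, 10, 3000,
             2000, 10, 1000, 3000, 500, 1000, 5000]

total_s = sum(delays_ms) / 1000
print(total_s)  # 24.53 - one full cannon sequence runs roughly 24.5 seconds
```

That matches the latching-switch behavior: each time through the loop is one complete ~24.5 second firing sequence, and it keeps repeating until the switch is released.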
Here's the code for the Wave Shield- (courtesy of Adafruit):

pinMode(2, OUTPUT);
pinMode(3, OUTPUT);
pinMode(4, OUTPUT);
pinMode(5, OUTPUT);
// pin13 LED
pinMode(13, OUTPUT);

// enable pull-up resistors on switch pins (analog inputs)
digitalWrite(14, HIGH);
digitalWrite(15, HIGH);
digitalWrite(16, HIGH);
digitalWrite(17, HIGH);
digitalWrite(18, HIGH);
digitalWrite(19, HIGH);
}

void loop()
{
  //putstring("."); // uncomment this to see if the loop isn't running
  switch (check_switches())
  {
    case 1: playcomplete("SOUND1.WAV"); break;
    case 2: playcomplete("SOUND2.WAV"); break;
    case 3: playcomplete("SOUND3.WAV"); break;
    case 4: playcomplete("SOUND4.WAV"); break;
    case 5: playcomplete("SOUND5.WAV"); break;
    case 6: playcomplete("SOUND6.WAV");
  }
}

Step 5: Backpack and Cannon Assembly

Putting it all together-

The cannon was mounted to the backpack by bolting the main pivot assembly to an Aluminum plate on the underside of the cannon platform- this way the entire cannon mechanism could be easily removed if it was ever damaged. Holes were drilled through the backpack and the servo leads and cannon LED leads were fed through. Holes were then drilled through the backpack for the helmet laser sight power cord and the ethernet cable that connected the finger tip buttons.

For the bio helmet laser sight power cord I used a barrel jack extension cord and cut off the female end. The male end connects to a matching barrel jack mounted in the side of the bio helmet. With this setup the cord just provides power to the helmet laser sight- just unplug the cord at the helmet in order to take the helmet off.

The electronics board was then installed by mounting it to a wood block that was secured inside the backpack. A small speaker was then attached inside the backpack and connected to the sound board. The servo cables, ethernet cable, helmet laser sight cable and cannon LED cables were then all labeled and connected, and the system was powered up.
Lucky for me it powered up and ran perfectly through its motions on the first try- it was now very late the night before it had to ship! Woohoo! I made a few adjustments to the movements in the code to get it dialed in just right, added the small hoses that ran from the cannon to the arm and boxed it up for overnight shipping.

That night I had a hard time sleeping- would it arrive safe and sound? Would it perform as it should? I wasn't going to be there to fix it if something went wrong... While I was pondering these questions Gene and Damon were working like madmen to finish the rest of the suit....

Step 6: Suit Construction- and a Small Problem

A great suit is the backbone of any Predator costume- Gene was handling the creation of the suit and mask and Damon would assist him with assembly and painting. How Gene sculpted, molded and cast the entire suit and mask in just a few weeks I'll never know. And this is considering the suit was sculpted to fit 6' 7" tall Jamie, who wasn't even there at the time. Gene had taken Jamie's measurements the previous year, and the fact that he was able to fit the suit to Jamie so well and have it look so spectacular is just, well, epic.

There were also two masks that were made- one is a half mask that is worn under the Predator bio helmet and one is a full mask with mandibles. Gene and Damon also did a stellar paint job on the backpack and cannon, which is incredible given they were already buried with work just getting the suit ready.

From Gene:

"The suit up was scheduled for early to mid April, and we didn't have anything except for this gauntlet, Carl's cannon and backpack probably, and Clay Williams was making good progress on dreads, so there were about 8 weeks to get a finished product for Jamie to wear.
As luck would have it, in January I bought some videos on the Stan Winston School of Character Arts site, specifically the sculpting techniques and textures, the making a creature suit, and the mask painting by Steve Wang, so I figured I had all the info I needed to make a suit and mask. NOT! I've had limited sculpting experience using fingers, wood sticks and credit cards, but the videos showed techniques, tools and explained the process, so the thought was it would make sense once I got started.

In early February I made armatures out of PVC pipe, fiberglass, foam and duct tape, then bought a lot of plastilina clay. After a couple days of putting clay on armatures, shaping body parts and armor, I remember thinking to myself I DO NOT KNOW what I am doing! Which is fine if you have a few months to figure it out, but my timeline was to have these finished and molded the first week of March.

The head I left for last, as I figured I would refine the techniques on the torso and legs first, then be able to detail the head better. That worked out great, as I learned a lot with the torso and legs trying new techniques and learning how alcohol and mineral spirits work to smooth things out, and the power of the "X" as Don Lanning explains it. Having the Neca 1/4 scale was invaluable and I used that for 95% of the sculpt reference.

Next was molding. I used silicone since plaster inside the house was not going to be an option. The shins I only had a few days to sculpt, mold and cast so I rushed through them (and caught up on seasons 1, 2 and 3 of The Walking Dead!) They were cast in TC 280, a flexible urethane foam that has some good rigid properties for armor. Most of the parts were cast around April 3-5, giving us about a week to trim, prep and paint everything. On the torso I cut a piece of plastic with several rare earth magnets attached for the backpack, and foamed it in.
Had to take a lot of shortcuts due to time; the zipper was only super glued on, which I knew would fatigue later, and it did. Some of the seam work was rushed and not detailed as much as I like. All the parts were painted in 2-3 days, so I didn't put as much detail in as usual. The bio, backpack and cannon arrived Thursday afternoon so those were rushed and painted in less than an hour.

I was lucky to have my wife Adriana do much of the prep work while I was casting parts and painting. She cast and painted dreads, cast feet, head and legs, and several other things. Damon helped with a lot as well, cleaning parts, casting dreads, gauntlets, etc. Estelle (another Lair member) supplied the spine prop in time and it worked great to hide the zipper and add to the overall accuracy. Julie's countdown and gauntlet lid worked flawlessly, with comments by the entire Predator crew about how impressed they were with it."

The Hunter's Lair member Julie Spehar created the electronics for the Predator bomb-gauntlet. Julie's electronics work really shined and helped bring the Predator to life. Her gauntlet electronics are simply the best there is. It's details like her work that can really set a costume apart.

A small problem-

I received a message from Gene after the backpack had arrived- the box was pretty banged up and as a result the servo linkage that controlled the cannon up/down movement had a broken ball link. I kicked myself for not including an extra link as I had several sitting at home. Carl had already arrived in Burbank and he and another Lair member, Glen, were running errands around town. Carl was able to go to a local hobby shop that George Frangadakis had recommended and procure another ball link to replace the broken one. Carl used the store clerk's cell phone to send me a photo of the ball link to make sure it would fit properly. Everything was starting to come together....

Step 7: Final Assembly and Test Fitting

The clock was ticking...
It was now the night before the event. Late that night everyone was at the hotel in Burbank for the test fitting. I was told Jamie could barely contain his excitement as he had only seen pictures of some of the costume pieces until now. Costume pieces still had to be glued together, hoses needed to be attached, the eye mesh needed to be glued in the helmet and the backpack had to be attached to the suit- there was still an awful lot of work to do.

From Carl:

"Friday was the day Gene was driving up to meet us in my hotel room. As the day wore on, Gene's arrival got later and later as he was still working on the suit. I broke the news to everyone that it was going to be an all-nighter to get that suit finished for 10:00 a.m. Sat morning. No one complained. I fired up the coffee pot, drank some Red Bull, and we went to work. Glen & Emily put together the slideshow that accompanied the presentation that morning, Gene and I did a bunch of things, Shawn went to work fixing the backpack, and there was Jamie----sitting in the middle of it all with a Cheshire cat grin, marveling at us working like bees in a hive until we were finally able to do the test-fitting. There we were all working together, sharing laughs and gettin' it done. For me, THAT was the highlight of the whole trip. A bunch of us pred-heads in room 318 at the Ramada Inn, up till 4:30 a.m.... I wouldn't trade it for the world."

The small problem with the backpack turned out to be a much bigger problem. Once the backpack was attached to the suit it was powered up and it worked great- once. It turns out the damage the backpack sustained during shipping was much greater than was immediately visible. The servo gear for the gear drive that raised the cannon arm had cracked. This wasn't a part you could just run down to the local hardware store and buy. Luckily another Lair member, Shawn Lindemood, was also in the room helping out.
Shawn is a whiz with servos and mechanics- he has a ton of experience with RC equipment. Shawn was able to replace the previously broken servo ball link, and now he would disassemble the cannon arm mechanism, remove the drive gear and try to fix it. He was able to superglue the cracked gear but it was too badly damaged and soon failed after it was reassembled. Shawn was however able to rig the arm so it would stay fixed in the upright position. Next up- Monsterpalooza!

Step 8: Monsterpalooza 2013- the Suit Up!

I was really nervous on Saturday- I was dying to know how the event was going. As it turns out there was another problem with the backpack. The backpack was held to the suit using rare earth magnets embedded in the suit. Carl told me when Jamie went to give his sister a hug the backpack became detached and crashed to the floor, landing right on the cannon. I was mortified. Carl assured me that it was OK since it happened after the suit up on stage and it in no way affected the outcome of the event. Other than a bent bracket there didn't appear to be any other damage.

From Carl:

"We were all up by 8:00, ran over to the Marriott, skipped breakfast, and started setting up the room. The sound man had the Predator music Jamie gave him, the video and still camera crew were in place, and the suit was ready to go. Before long Alec Gillis, Tom Woodruff, Shannon Shea, Matt Rose and Steve Wang had entered the room and were loving all of the pred parts gathered on the stage. I remember remarking to Alec that I had a new-found appreciation for the long hours he and his compatriots have put in on projects over the years. Matt Winston was our emcee, as he and the rest of the guys recounted their experiences with Kevin Peter Hall. They were great. They truly are the salt of the earth. Jamie talked about his uncle and his desire to help boost awareness of the Predator. With that wish, he was in good company.
Lastly, Gene and Damon took the mic and described the genesis of the Jamie Predator Homage Suit Project, followed by my involvement, with shout outs to Julie, Clay, and Jerome who couldn't be there…and of course, Andrew- who started this crazy place (The Hunter's Lair) where members come together to make lasting impressions like this one.

Then, it was time to get Jamie dressed. But it wasn't going to be just us- Matt had the FX guys help out, reliving their experiences helping KPH get dressed 26 years ago! It was awesome. After the ceremony, Jamie- still fully dressed- couldn't take two steps without getting swarmed by people and cameras. Sure there were some really good costumes and made-up people walking around, but when the 6' 10" Jamie-Predator came into view, that's all she wrote. "Look! The Predator!" Hearing little kids go, "Wow! Look Dad!" was as gratifying as it gets.

The other noteworthy event was when Jamie approached the Stan Winston Studios booth and Matt announced to the crowd, "This is Kevin Peter Hall's nephew Jamie Hall, following in his footsteps- as the Predator!" The crowd cheered. In those hours at Monsterpalooza all of the work, the scheduling, the running around, etc. was worth it all."

From Gene:

"It all came down to teamwork, and the desire to see Jamie's dream become a reality, which everyone here on the Lair is a part of. For me it was amazing to be in the company of these icons of FX: Matt Rose, Steve Wang, Shannon John Shea, Tom Woodruff Jr., Alec Gillis, Matt Winston and of course Jamie Hall. To receive compliments and shake their hands, to see them laugh and have fun with Jamie putting the suit on him, and to see Jamie transform before our eyes into the Predator. Matt Winston didn't stop smiling all day, I think, seeing Jamie with his sister who was there with him, and knowing his aunt was very excited for him, since it had been a dream of his since he was a child to wear a suit like his uncle Kevin.
To see him bring our suit to life was priceless."

We actually pulled it off. I wish I could have been there to see it. Everyone was just floored by Jamie in the costume. Everyone remarked that when they saw Jamie suited up, he was the Predator. And that is exactly what we were shooting for.

Jamie wrote me: "I'm blessed to have been given the opportunity to honor and uphold my Uncle Kevin's legacy by performing in this incredible P1 suit that was created by a team of talented craftsmen, artists and engineers from across the United States. I will forever be grateful to each one of them for making my wildest dreams become reality!"

I later exchanged emails with Alec and Tom of Amalgamated Dynamics.

So thank you Carl, Jamie, Gene, Damon, Julie, Shawn and Glen (as well as the other Lair members that contributed!)

In the end we, as costume creators, can build a costume to the best of our ability. But it is still just a costume- the suit performer is the person that brings the costume to life, and by all accounts Jamie excelled in this regard. He made an awful lot of people very happy that day. I think his Uncle Kevin would have been a very proud man watching Jamie on stage, and Stan Winston would have been thrilled that his iconic film creature would still inspire so many individuals after all these years.

Step 9: The Story Continues...

It's not over yet! Jamie would continue to stalk in his Uncle's footsteps. After Monsterpalooza Jamie received an invitation to appear in a video by the makers of the popular YouTube series Super Power Beat Down. He would be starring as the Predator in Episode 9: Predator vs. Wolverine!

Given the time frame it was decided to send the backpack to Shawn, since he would be involved with the shoot, and to have him check it over to make sure it was prepped for filming. Shawn took everything apart and contacted me to tell me the extent of the damage.
It turned out the main pivot bracket had cracked during the fall at Monsterpalooza, as had the resin piece that held the LED inside the cannon. Shawn had already straightened the bracket that had been bent at Monsterpalooza. I immediately sent a sketch to Shawn as to what I thought would be the best way to fix the pivot bracket, and Shawn got to work and had it ready to go in no time. He also repaired the LED mount and then reported back to me that the cannon was once again working perfectly- he even sent me a nice video of it. It was ready to go for Super Power Beat Down! After all, a Predator is going to need all the help he can get if he's going to have any chance of taking down Wolverine...

Soon the backpack and cannon will be sent back to me for refurbishment after the wear and tear of filming, and I'll fit a new cannon arm that is far more screen accurate- something I wasn't able to do previously due to cost and time constraints. I'll be sure to document it and add to the instructable when that happens.

That's all folks! I hope everyone enjoyed this journey- it was a ton of work but also tremendously satisfying working with such a talented group of individuals. As always, if there are any questions just let me know! Now get together a group of friends and build some awesome costumes!

Step 10: Resources

The Hunter's Lair- This is the first and last place to go if you're into building a Predator costume.

The Replica Prop Forum- The RPF is the granddaddy of all movie prop/costuming sites.

Stan Winston School of Character Arts- Stan Winston School has video tutorials that teach everything you could ever want to know about creating movie creatures and practical effects. If you haven't checked out their site you don't know what you're missing.

Accurized Hunter Parts- Carl's site for his screen accurate Predator replicas.

LightBeam 3D- Scott Andrews knows his stuff when it comes to making your 3D design a reality.
Aliens FX- Gene's site for the awesome Predator suit he made.

Smooth-On- Smooth-On is your one stop shop for all molding and casting materials and supplies.

Adafruit Industries- A great place to purchase electronic components. I cannot say enough good things about this company.

Sparkfun- Another fantastic electronics supplier!

Servocity- Where I buy all my servos. Great pricing and outstanding customer service.

Online Metals- Online Metals is a great place to get most any type of metal you want. They cut it to size and ship right to your door.

Runner Up in the Epilog Challenge V
Grand Prize in the Halloween Costume Contest

41 Discussions

1 year ago
Would I be able to pay you to make me one please? I am not good at art- I even struggle with stick figures.

3 years ago
Hi, I love this costume and I was wondering how much it would cost to buy one! Hope to hear from you soon

Reply 3 years ago
Sorry but I don't make these costumes for sale!

3 years ago
Hi, my name is Joe from McAllen Tx and I am interested in buying a predator cannon with movement so I can put it in my bio helmet. I've been working on this project for many years. If you have one for sale I am interested in buying- this is my email address joe.r.moreno.1969@gmail.com

3 years ago
Thank you for the comments! My wife and I did the majority of this suit in our 1 bedroom apartment, from sculpting, molding, casting and painting. It was a privilege building the suit for my friend Jamie Hall, as a tribute to his uncle Kevin Peter Hall

3 years ago
StanWinstonSchool.com was a big help in our sculpting, molding, casting and painting of the entire suit

3 years ago
Very nice job on this. One day I will make my own.

3 years ago
Can you please share the 3D files for the cannon? I'm building my own suit now for Halloween and don't have the time to make my own cannon. I got my own 3D printer and would like to print a cannon.
Reply 3 years ago
Unfortunately they are not my files to share but I will certainly ask the creator!

4 years ago
Hey bud! Can you make one for me? LoL

Reply 4 years ago on Introduction
Sorry I can't- too many other projects to do already! :)

5 years ago on Introduction
Amazing work... Congratz

5 years ago on Step 5
Great instructable...

5 years ago
This is one of my favorite instructables!

Reply 5 years ago
Thanks!

5 years ago on Introduction
Congrats on your Grand Prize! Amazing project!

Reply 5 years ago on Introduction
Thanks so much!! I couldn't believe it when I found out- there were so many awesome entries!

Reply 5 years ago on Introduction
But apparently, the Predator was the awesomest :-)

5 years ago on Introduction
Fantastic work and great documentation. Congrats on this year's win!

Reply 5 years ago on Introduction
Thanks so much!! I was really floored when I found out- there were so many awesome entries!
https://www.instructables.com/id/Building-a-killer-Predator-costume/
Hi to all,

I am not sure if this is the correct forum for my question, so please move it if necessary.

I need to change the I2C pins. I use a color TFT display (), which uses the standard SDA pin (A4) for LCD_RST. I could swap to an Arduino Mega, but this board is too large (and uses too much current)- I want to realize my project on an Arduino Pro Mini. I found that there are some "bit banging" libraries which replace the Wire library, such as BitBang_I2C ().

I used the following code to test the library and the device. (Hardware: AS7262 breakout board, SDA connected to D10, SCL connected to D11 of a Pro Mini 328, 5V.)

When running the program I got "0 device(s) found". When disconnecting the breakout board I got "I2C pins are not correct or the bus is being pulled low by a bad device; unable to run scan".

Can you help me to find out what was wrong?

// I2C Detector - scan and identify devices on an I2C bus using the BitBang_I2C library
//
// The purpose of this code is to provide a sample sketch which can serve
// to detect not only the addresses of I2C devices, but what type of device each one is.
// So far, I've added the 25 devices I've personally used or found to be reliably detected
// based on their register contents. I encourage people to do pull requests to add support
// for more devices to make this code have wider appeal.
// There are plenty of I2C devices which appear at fixed addresses, yet don't have unique
// "Who_Am_I" registers or other data to reliably identify them. It's certainly possible to
// write code which initializes these devices and tries to verify their identity. This can
// potentially damage them and would necessarily increase the code size. I would like to keep
// the size of this code small enough so that it can be included in many microcontroller
// projects where code space is scarce.
//
// Copyright (c) 2019 BitBank Software, Inc.
// Written by Larry Bank
// email: bitbank@pobox.com
// Project started 25/02 <>.
// Uses my Bit Bang I2C library. You can find it here:

#include <BitBang_I2C.h>

#define SDA_PIN 10
#define SCL_PIN 11

// If you don't need the explicit device names displayed, disable this code by
// commenting out the next line
//#define SHOW_NAME

#ifdef SHOW_NAME
const char *szNames[] = {"Unknown","SSD1306","SH1106","VL53L0X","BMP180",
  "BMP280","BME280","MPU-60x0","MPU-9250","MCP9808","LSM6DS3","ADXL345",
  "ADS1115","MAX44009","MAG3110","CCS811","HTS221","LPS25H","LSM9DS1",
  "LM8330","DS3231","LIS3DH","LIS3DSH","INA219","SHT3X","HDC1080","MPU6886",
  "BME680","AXP202","AXP192","24AA02XEXX","DS1307","MPU688X","FT6236G",
  "FT6336G","FT6336U","FT6436","BM8563","BNO055"};
#endif

BBI2C bbi2c;

void setup()
{
  Serial.begin(115200);
  memset(&bbi2c, 0, sizeof(bbi2c));
  bbi2c.bWire = 0; // use bit bang, not wire library
  bbi2c.iSDA = SDA_PIN;
  bbi2c.iSCL = SCL_PIN;
  I2CInit(&bbi2c, 100000L);
  delay(100); // allow devices to power up
}

void loop()
{
  uint8_t map[16];
  uint8_t i;
  int iDevice, iCount;

  Serial.println("Starting I2C Scan");
  I2CScan(&bbi2c, map); // get bitmap of connected I2C devices
  if (map[0] == 0xfe) // something is wrong with the I2C bus
  {
    Serial.println("I2C pins are not correct or the bus is being pulled low by a bad device; unable to run scan");
  }
  else
  {
    iCount = 0;
    for (i=1; i<128; i++) // skip address 0 (general call address) since more than 1 device can respond
    {
      if (map[i>>3] & (1 << (i & 7))) // device found
      {
        iCount++;
        Serial.print("Device found at 0x");
        Serial.print(i, HEX);
        iDevice = I2CDiscoverDevice(&bbi2c, i);
        Serial.print(", type = ");
#ifdef SHOW_NAME
        Serial.println(szNames[iDevice]); // show the device name as a string
#else
        Serial.println(iDevice); // show the device name as the enum index
#endif
      }
    } // for i
    Serial.print(iCount, DEC);
    Serial.println(" device(s) found");
  }
  delay(5000);
}

Thank you,
-richard
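One detail of the sketch worth unpacking: I2CScan() returns a 16-byte bitmap with one bit per 7-bit I2C address, which the loop decodes with map[i>>3] & (1 << (i & 7)). The same indexing can be checked in a few lines (the addresses here are made up for illustration):

```python
# Decode a 128-bit I2C address bitmap, one bit per 7-bit address,
# exactly as the sketch does with map[i >> 3] & (1 << (i & 7)).
bitmap = bytearray(16)

def set_address(bm, addr):
    # Mark address `addr` as responding: byte addr>>3, bit addr&7.
    bm[addr >> 3] |= 1 << (addr & 7)

# Pretend devices answered at 0x3C and 0x49 (hypothetical addresses).
set_address(bitmap, 0x3C)
set_address(bitmap, 0x49)

# Skip address 0 (general call), just like the sketch.
found = [i for i in range(1, 128) if bitmap[i >> 3] & (1 << (i & 7))]
print([hex(a) for a in found])  # ['0x3c', '0x49']
```

So "0 device(s) found" means the scan ran but no address bit was ever set, while the 0xfe marker in map[0] is a separate flag for a bus that is stuck low.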
https://forum.arduino.cc/t/changing-i2c-pins-with-library-bitbang-i2c/927379
- Merge Two Files

This article covers a program in Java that merges the content of two files into a third file, with the file names entered by the user at run-time of the program.

What to Do before the Program?

Since the program given below merges the content of two files into a third file, we first need to create two files with some content, so their content can be merged into a third file using the Java program.

If the third file, into which the merged content is going to be written, is not available inside the current directory, then a new file with the name provided as the name of the third file will get created. If the file is already available, then the merged content will get appended to the file.

Here is the snapshot of the current directory, including three opened files, in my case:

Using the sample run of the program given below, the content of the codes.txt and cracker.txt files will get merged into the codescracker.txt file, because I'm going to provide the names of these files.

Merge Content of Two Files into Third File in Java

The question is: write a Java program to merge the content of two files into a third file. The names of all three files must be received from the user at run-time of the program.
The program given below is its answer:

import java.io.*;
import java.util.Scanner;

public class CodesCracker
{
   public static void main(String[] args)
   {
      String fileOne, fileTwo, fileThree, line, content="";
      Scanner scan = new Scanner(System.in);

      System.out.print("Enter the Name of First File: ");
      fileOne = scan.nextLine();
      System.out.print("Enter the Name of Second File: ");
      fileTwo = scan.nextLine();
      System.out.print("Enter the Name of Third File: ");
      fileThree = scan.nextLine();

      try
      {
         FileReader frOne = new FileReader(fileOne);
         BufferedReader brOne = new BufferedReader(frOne);
         FileReader frTwo = new FileReader(fileTwo);
         BufferedReader brTwo = new BufferedReader(frTwo);

         for(line=brOne.readLine(); line!=null; line=brOne.readLine())
            content = content + line + "\n";
         brOne.close();

         for(line=brTwo.readLine(); line!=null; line=brTwo.readLine())
            content = content + line + "\n";
         brTwo.close();

         try
         {
            FileWriter fw = new FileWriter(fileThree, true);
            fw.write(content);
            fw.close();
            System.out.println("\nSuccessfully merged the content of two files into the third file");
         }
         catch(IOException ioe)
         {
            System.out.println("\nSomething went wrong!");
            System.out.println("Exception: " +ioe);
         }
      }
      catch(IOException ioe)
      {
         System.out.println("\nSomething went wrong!");
         System.out.print("Exception: " +ioe);
      }
   }
}

The snapshot given below shows the sample run of the above program with user inputs codes.txt as the name of the first file, cracker.txt as the second, and codescracker.txt as the third, merging the content of the first two files into the codescracker.txt file:

Now if you open the file codescracker.txt, the file contains the content of the first and second files.
Here is the new snapshot of the same directory, with all three files:

The program to merge two files into a third, in Java, can also be written as:

import java.io.*;
import java.util.Scanner;

public class CodesCracker
{
   public static void main(String[] args)
   {
      String fileOne, fileTwo, merge;
      Scanner scan = new Scanner(System.in);

      System.out.print("Enter the Name of First File: ");
      fileOne = scan.nextLine();
      System.out.print("Enter the Name of Second File: ");
      fileTwo = scan.nextLine();
      System.out.print("Enter the Name of Third File: ");
      merge = scan.nextLine();

      File[] files = new File[2];
      files[0] = new File(fileOne);
      files[1] = new File(fileTwo);

      BufferedWriter out = null;
      try
      {
         out = new BufferedWriter(new FileWriter(merge, true));
      }
      catch(IOException e1)
      {
         System.out.println("Exception: " +e1);
      }

      System.out.println("\nMerging the file...");
      // copy each source file, line by line, into the merged file
      // (this loop is reconstructed; the original listing was incomplete here)
      for(File file : files)
      {
         try
         {
            BufferedReader in = new BufferedReader(new FileReader(file));
            String line;
            while((line = in.readLine()) != null)
            {
               out.write(line);
               out.newLine();
            }
            in.close();
         }
         catch(IOException e)
         {
            System.out.println("Exception: " +e);
         }
      }
      System.out.println("\nMerged Successfully!");

      try
      {
         out.close();
      }
      catch(IOException e)
      {
         System.out.println("Exception: " +e);
      }
   }
}
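The merge-and-append behavior described above (create the third file if it doesn't exist, append if it does) is compact in other languages too. Here is a rough Python equivalent, with file names hard-coded in the example rather than read from the user:

```python
# Merge the content of two files into a third, appending if the
# third file already exists - the same behavior as the Java program.
def merge_files(file_one, file_two, file_three):
    with open(file_three, "a") as out:   # "a" appends, creating the file if needed
        for name in (file_one, file_two):
            with open(name) as src:
                out.write(src.read())

# Example usage with hypothetical file names:
# merge_files("codes.txt", "cracker.txt", "codescracker.txt")
```

Running it twice against the same third file doubles its content, which mirrors the append-mode FileWriter in the Java versions.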
https://codescracker.com/java/program/java-program-merge-two-files.htm
Check with another constructor for htmlView:

ContentType mimeType = new System.Net.Mime.ContentType("text/html");
var htmlView = AlternateView.CreateAlternateViewFromString(bodyMessage, mimeType);

As @Sukrit Kalra says, don't use datetime.py as your file name. Python is getting confused with which datetime is which (and is importing itself!). Maybe:

$ mv datetime.py my_datetime.py

Documentation says cmd is actually run as:

{ cmd ; } 2>&1

And the function getstatusoutput() is available on UNIX, not on Windows.

If you use subprocess, you don't need to call any external utilities. The subprocess.Popen class provides a terminate method. In order to use it, you'll need to replace subprocess.call(...) with subprocess.Popen(...), which returns a Popen instance. For example:

task = subprocess.Popen('launcher.exe localhost filename.dut', shell=True)
# some time later
task.terminate()

Note that, unlike call, plain Popen doesn't wait for the process to complete. Consult the manual for more details.

Have you tried -s, the switch for setting the filtering spec to silent by default?

logcat -s <tag>:<level>

There are lots of questions like this one on SO already: Filter output in logcat by tagname

Most likely the program starts its own shell and no longer interacts with the original one. (You would notice this if the program opens a new window.) Or the program needs some specific library to be present to be able to interact with a shell (readline seems to be the case here) and that is not present in your Java environment. As a quick hack you might try to start bash (or cmd) which then starts the tool - bash and cmd have the readline library. I don't have a Windows machine ready here, but as a guess just try to call your program like cmd urjtag.exe instead of just urjtag.exe. That way you start a cmd process (with which you can interact) and that cmd starts the urjtag.exe, where you already know that it can interact with cmd.
Either way, the problem lies in the way the program you want to call interacts with its terminal.

If there is an error/exception, your program is supposed to undo the changes to the terminal which were done during the initialization of curses. If you don't want to do that yourself, there is a wrapper function provided; please check the documentation. In your case, instead of calling main directly you can call through the wrapper: curses.wrapper(main).

The Python signal handlers do not seem to be real signal handlers; that is, they happen after the fact, in the normal flow and after the C handler has already returned. Thus you'd put your quit logic within the signal handler. As the signal handler runs in the main thread, it will block execution there too. Something like this seems to work nicely:

    import signal
    import time
    import sys

    def run_program():
        while True:
            time.sleep(1)
            print("a")

    def exit_gracefully(signum, frame):
        # restore the original signal handler as otherwise evil things will happen
        # in raw_input when CTRL+C is pressed, and our signal handler is not re-entrant
        signal.signal(signal.SIGINT, original_sigint)
        try:
            if raw_input("\nReally quit? (y/n)> ").lower().startswith('y'):
                sys.exit(1)
        except KeyboardInterrupt:
            print("Ok ok, quitting")
            sys.exit(1)
        # restore the exit-gracefully handler here
        signal.signal(signal.SIGINT, exit_gracefully)

    if __name__ == '__main__':
        original_sigint = signal.getsignal(signal.SIGINT)
        signal.signal(signal.SIGINT, exit_gracefully)
        run_program()

This seems to work on Python3/Linux:

    import sys
    print("idlelib" in sys.modules)

It will return True if the script is run from IDLE, False otherwise. Please test for other combinations of Python/OS!

If you just want to run an application with a specific working directory, the easiest way is to use a ProcessBuilder:

    ProcessBuilder pb = new ProcessBuilder(executable); // plus arguments, if any
    pb.directory(theWorkingDirectory);
    pb.start();

I encountered a similar error message a while back. If I remember correctly, the following was the crux of the issue (it may not be 100% relevant for the OP, but it might be useful to somebody who hits this down the line). The problem was that my unit tests all failed in Release mode with an exception complaining about the availability (or lack thereof) of SQLite.Interop.dll.
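The claim in the signal-handling answer above, that Python-level handlers run in the main thread in normal control flow after the C-level handler returns, can be reduced to a minimal POSIX-only sketch (the process signals itself, so no CTRL+C is needed):

```python
import os
import signal

calls = []

def handler(signum, frame):
    # runs in the main thread, in ordinary control flow
    calls.append(signum)

original = signal.signal(signal.SIGINT, handler)
os.kill(os.getpid(), signal.SIGINT)  # deliver CTRL+C programmatically
signal.signal(signal.SIGINT, original)  # restore, as the answer recommends
```

By the time the statement after os.kill executes, the Python handler has already run, because the interpreter checks for pending signals between bytecodes on the main thread.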
I realised that when built in Debug mode, the bin\Debug folder had two subfolders (x64 and x86), each with a copy of SQLite.Interop.dll, but in Release mode these files/folders did not exist. To resolve it, I created the x64 and x86 folders in my project and added the appropriate version of SQLite.Interop.dll to them, setting the Copy to Output setting to Copy if newer. (I had originally used 'Copy always' but it…)

You need to set up your PATH, INCLUDE and LIB environment variables. You could do that by running "D:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" (or wherever it is located on your installation) in the same prompt you're running vim, or even by manually setting these environment variables (here is a list of all values for VS2008 and the Windows SDK: Using Visual Studio's 'cl' from a normal command line). However, you just can't run the bat file from vim directly, because it will open another prompt, so the environment variables will be set only for that new prompt. The other option is to create a bat file which you can put inside your PATH, for example cvim.bat:

    call "D:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
    "C:\Program Files (x86)\Vim\vim74\g…

In Windows, file names (including the path) can not be greater than 255 characters, so the error you're seeing comes from Windows, not from Python: somehow you managed to create such big file names, but now you can't read them. See this post for more details.

According to their documentation you can't directly use it with Python. However, you can use Jython (Python for the Java platform). Things you will have to do if you use the Eclipse IDE: Go to Eclipse Marketplace (Help > Eclipse Marketplace) and look for the PyDev plugin. Install it. Download the Jython standalone jar file and configure the Jython interpreter: Window > Preferences > PyDev > Interpreter - Jython. Click New. For the interpreter name, type "Jython". For the interpreter executable, click Browse and browse to the Jython standalone jar file.
(Click OK, Next...) To create a Jython project: New > PyDev Project. Enter the name of the project. Under "Choose project type" select Jython. Under "Interpreter" choose Jython. Click OK. Now, to add burp.jar to the build path: right-click on your project > Properties > …

You probably need to close the file after writing:

    test = open("bla/bla/bla/" + text + ".txt", 'a')
    test.write("bla bla bla")
    test.close()

This is always a good practice; if you don't, the results can be quite randomized...

If you want to know "How do I run a .py file from the Python interpreter?", this will work:

    import sys
    sys.path.append("C:\\Users\\Myname\\Desktop\\Python")
    import Python-Test

(Note that a module name containing a hyphen can't actually be used with the import statement; you would need to rename the file.) But your question says "from the command line", which has been answered in the comments.

You are trying to execute the program from the node prompt. You don't do that; the node terminal just sets up a bunch of variables for you. Run it like you do in the ordinary Windows shell.

Windows does not actually support passive mode. You can send the command to the server in three different ways, but that will not enable passive mode on the Windows client end. Those arguments are for sending various commands, and pasv is not something that Microsoft thought of when they wrote it. You will have to find third-party software like WinSCP that supports command-line usage and use that instead of the Windows native one.

For a beginner programmer that doesn't have admin rights on his/her computer, I'd recommend the Eclipse IDE. Since you already have the JDK, the only installation step needed requires no admin rights. From here you must download the "Eclipse Standard" option, and you will get a very large zip archive. You can extract it onto the desktop or My Documents. Windows comes with a utility to do this via drag-and-drop right from Explorer, or your machine may have another program such as WinRAR installed to do this. You can then run eclipse.exe from the place where you extracted it by browsing to, and double-clicking, this file.
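Returning to the file-writing answer above: the explicit close() call is exactly what the with statement automates, and the file is closed even if an exception occurs mid-write. A self-contained sketch using a temp directory instead of the asker's bla/bla/bla path:

```python
import os
import tempfile

folder = tempfile.mkdtemp()
path = os.path.join(folder, "bla.txt")

# the context manager closes the file for us, even on error
with open(path, "a") as test:
    test.write("bla bla bla")

# read it back to confirm the write was flushed on close
with open(path) as f:
    content = f.read()
```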
The IDE is very powerful and self-explanatory. You can create projects, run, and debug code, and it's nice for beginners. It's truly worth the long wait in downloading…

ssh access works fine from a regular DOS session. You only need to define C:\Users\YourAccount\.ssh and add your id_rsa and id_rsa.pub there. Launch your git session through git-cmd.bat, which will define %HOME% as your C:\Users\YourAccount: that is what will make ssh work. This should put your msysgit /bin installation in your PATH. I really recommend not installing through an msi (Microsoft Installer), but through a simple unzip of an archive (portable version "PortableGit-x.y.z-preview201ymmdd.7z"). And the OP GreenAsJade's comment points out the fact that GIT_SSH must point to plink.exe.

Your path configuration has M2_Home instead of M2_HOME. It should be all uppercase: %M2_HOME%. Also notice you are specifying bin twice; it should be:

    M2_HOME=C:\Program Files\apache-maven-3.0.5-bin\apache-maven-3.0.5

See also: Understanding class path wildcards.

In version 6.0 the bin directory is missing the scripts which run javacc. That is why you are getting the error from the Windows command prompt. What you have is a jar file, javacc.jar, located in the lib directory. All you need is to add that jar file to your classpath, run java.exe, and pass the main class which runs javacc; the latter happens to be named javacc too, so to run javacc just proceed like this:

    cmd> java -cp C:\javacc-6.0\bin\lib\javacc.jar javacc

In the latest version they seem to have forgotten to add the scripts in the bin folder of the package. You can download version 5.0; it contains all the script files you need, among others a file named javacc.bat, which is the one the Windows command prompt is looking for and not finding in your case.
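The earlier vcvarsall.bat point, that environment changes made in a child prompt never reach the parent, is easy to demonstrate from any language: environment variables flow parent to child only. The toolchain directory below is purely hypothetical:

```python
import os
import subprocess
import sys

# prepend a (hypothetical) toolchain directory, as vcvarsall.bat would
fake_dir = os.path.join(os.sep, "hypothetical", "vc", "bin")
env = dict(os.environ)
env["PATH"] = fake_dir + os.pathsep + env.get("PATH", "")

# the child process sees the modified PATH...
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['PATH'].split(os.pathsep)[0])"],
    env=env, capture_output=True, text=True)
first_entry = child.stdout.strip()

# ...but the parent's own environment is untouched, which is why running
# vcvarsall.bat in a spawned prompt doesn't help the prompt that spawned it
parent_unchanged = not os.environ.get("PATH", "").startswith(fake_dir)
```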
Of course, y…

I would define a launcher.bat where I deal with all the R path problems:

    PATH PATH_TO_R\R-version\bin;%path%
    cd PATH_TO_R_SCRIPT
    Rscript myscript.R arg1 arg2

Then on the PHP side you can use exec:

    <?php
    exec('c:\WINDOWS\system32\cmd.exe /c START PATH_TO_LAUNCHER\LAUNCHER.bat');
    ?>

Found a solution [though open for better suggestions]. First check whether php tasks exist, then kill them from the command prompt:

    // It will first list out all `php.exe` tasks running
    $output = shell_exec('tasklist /svc /fi "imagename eq php.exe"');
    print_r($output);
    echo "<br> ------------------------------------ <br>";

    // It will first list out all `cmd.exe` tasks running
    $output = shell_exec('tasklist /svc /fi "imagename eq cmd.exe"');
    print_r($output);
    echo "<br> ------------------------------------ <br>";

    // kills php.exe tasks
    $php_output = shell_exec('taskkill /F /IM "php.exe"');
    print_r($php_output);
    echo "<br> ------------------------------------ <br>";

    // kills cmd.exe tasks
    $cmd_output = shell_exec('taskkill /F /IM "cmd.exe"');
    print_r($cmd_output);

See if this works for you; it will report the last match found:

    @echo off
    for %%a in (c d e f g h i j k l m n o p q r s t u v w x y z) do (
        if exist "%%a:" dir %%a: /ad /b /s >>"%userprofile%\desktop\folderlist.txt"
    )
    find /i "opencv\build\bin" < "%userprofile%\desktop\folderlist.txt" >"%userprofile%\desktop\folderlistfound.txt"
    if exist "%userprofile%\desktop\folderlistfound.txt" (
        for /f "usebackq delims=" %%a in ("%userprofile%\desktop\folderlistfound.txt") do (
            set "foundfolder=%%a"
            echo found at "%%a"
        )
    )
    if defined foundfolder (
        echo last folder matched at "%foundfolder%"
    ) else (
        echo didn't find "opencv\build\bin"
    )

I believe you are looking for the /B flag, which tells Start not to open a new window but instead run the command in the background of the same window. Remember though: you are still starting a new instance of start each time. I'm assuming this is what you intend, however.
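The batch script's scan-and-remember-the-last-match logic translates naturally to os.walk. This sketch builds a small fabricated directory tree (standing in for a real drive) and searches it for the same opencv\build\bin suffix:

```python
import os
import tempfile

# build a tiny tree to search, standing in for a real drive
root = tempfile.mkdtemp()
target = os.path.join(root, "opencv", "build", "bin")
os.makedirs(target)

# walk every directory and remember the last match, like the batch script does
suffix = os.path.join("opencv", "build", "bin")
found = None
for dirpath, dirnames, filenames in os.walk(root):
    if dirpath.endswith(suffix):
        found = dirpath
```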
Make sure this isn't a path issue, as explained in this link (for Windows, but valid for other OSes too). Check also that you don't have any active git alias which might prevent git pull from working properly. Regarding the path issue, the OP aladine confirms in the comments: "I discover that after I reinstall git, it works as normal."

Back in 2013, that was not possible: MS provided no executable for this. See this link for some VBS way to do it. From Windows 8 on, .NET Framework 4.5 is installed by default; with System.IO.Compression.ZipArchive and PowerShell available, one can write scripts to achieve this, see…
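The question was about Windows built-ins, but if scripting is on the table anyway, Python's standard library has offered the same capability via the zipfile module for a long time. A self-contained sketch with invented file names:

```python
import os
import tempfile
import zipfile

folder = tempfile.mkdtemp()
src = os.path.join(folder, "hello.txt")
with open(src, "w") as f:
    f.write("hi")

# create an archive containing the file under a chosen internal name
archive = os.path.join(folder, "out.zip")
with zipfile.ZipFile(archive, "w") as z:
    z.write(src, arcname="hello.txt")

# read the archive back and list its members
with zipfile.ZipFile(archive) as z:
    names = z.namelist()
```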
http://www.w3hello.com/questions/What-would-cause-ldquo-python-exe-rdquo-to-work-while-ldquo-python-rdquo-fails-in-Windows-command-prompt-
import "gonum.org/v1/gonum/optimize/convex/lp"

Package lp implements routines to solve linear programming problems.

Files: convert.go doc.go simplex.go

var (
    ErrBland      = errors.New("lp: bland: all replacements are negative or cause ill-conditioned ab")
    ErrInfeasible = errors.New("lp: problem is infeasible")
    ErrLinSolve   = errors.New("lp: linear solve failure")
    ErrUnbounded  = errors.New("lp: problem is unbounded")
    ErrSingular   = errors.New("lp: A is singular")
    ErrZeroColumn = errors.New("lp: A has a column of all zeros")
    ErrZeroRow    = errors.New("lp: A has a row of all zeros")
)

func Convert(c []float64, g mat.Matrix, h []float64, a mat.Matrix, b []float64) (cNew []float64, aNew *mat.Dense, bNew []float64)

Convert converts a general-form LP into a standard-form LP. The general form of an LP is:

    minimize cᵀ * x
    s.t      G * x <= h
             A * x = b

and the standard form is:

    minimize cNewᵀ * x
    s.t      aNew * x = bNew
             x >= 0

If there are no constraints of the given type, the inputs may be nil.

func Simplex(c []float64, A mat.Matrix, b []float64, tol float64, initialBasic []int) (optF float64, optX []float64, err error)

Simplex solves a linear program in standard form using Dantzig's Simplex algorithm. The standard form of a linear program is:

    minimize cᵀ x
    s.t.     A*x = b
             x >= 0

The input tol sets how close to the optimal solution is found (specifically, when the maximal reduced cost is below tol). An error will be returned if the problem is infeasible or unbounded. In rare cases, numeric errors can cause the Simplex to fail; in this case, an error will be returned along with the most recently found feasible solution. The Convert function can be used to transform a general LP into standard form. The input matrix A must have at least as many columns as rows, len(c) must equal the number of columns of A, and len(b) must equal the number of rows of A, or Simplex will panic.
A must also have full row rank and may not contain any columns with all zeros, or Simplex will return an error.

initialBasic can be used to set the initial set of indices for a feasible solution to the LP. If an initial feasible solution is not known, initialBasic may be nil. If initialBasic is non-nil, len(initialBasic) must equal the number of rows of A, and it must describe an actual feasible solution to the LP, otherwise Simplex will panic.

A description of the Simplex algorithm can be found in Ch. 8 of Strang, Gilbert. "Linear Algebra and Applications." Academic, New York (1976). For a detailed video introduction, see lectures 11-13 of UC Math 352.
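The general-to-standard-form transformation Convert documents is the textbook one: split the free variable x into positive and negative parts and add one slack variable per inequality. A pure-Python sketch of that idea (an illustration of the math, not gonum's implementation) using dense nested lists:

```python
def convert(c, G, h, A, b):
    """Turn  min cᵀx  s.t. G·x <= h, A·x = b  (x free)
    into    min cNewᵀy  s.t. aNew·y = bNew, y >= 0,  with y = [x+, x-, s]."""
    m_ineq = len(h) if h is not None else 0
    m_eq = len(b) if b is not None else 0
    # objective: c for x+, -c for x-, zeros for the slack variables
    c_new = list(c) + [-ci for ci in c] + [0.0] * m_ineq
    a_new, b_new = [], []
    for i in range(m_ineq):        # G·x + s = h  rows
        slack = [0.0] * m_ineq
        slack[i] = 1.0
        a_new.append(list(G[i]) + [-gij for gij in G[i]] + slack)
        b_new.append(h[i])
    for i in range(m_eq):          # A·x = b  rows, no slack
        a_new.append(list(A[i]) + [-aij for aij in A[i]] + [0.0] * m_ineq)
        b_new.append(b[i])
    return c_new, a_new, b_new

# minimize x0 + x1  subject to  x0 <= 1  and  x1 = 2
c_new, a_new, b_new = convert([1.0, 1.0], [[1.0, 0.0]], [1.0], [[0.0, 1.0]], [2.0])
```

The resulting problem has five non-negative variables: x0+, x1+, x0-, x1-, and one slack for the single inequality.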
https://godoc.org/gonum.org/v1/gonum/optimize/convex/lp
perltutorial, by RMGir. [POE] is the Perl Object Environment, a cooperative multitasking framework that makes it easy to answer questions like [id://282322|How do I make STDIN time out]. <p> I only started looking at POE this week, because the [id://281219] thread piqued my interest. That thread points to other tutorials on POE, but I thought we should have one here as well. Also, putting this together is helping me figure POE out a bit better, and giving me ideas for more fun areas to explore. I think POE and (Tk|Curses) will be next... <p> POE lets you write programs that handle input from multiple asynchronous sources. Those are big words, but they just mean you'll get the info when it's available, and you don't have to worry about waiting for it. :) <p> The cooperation is done by creating a set of states, which are invoked by events. Events are generated by input engines (called Wheels), by timers, or by other states. <br /><readmore><br /> At the heart of POE lies the POE kernel. It keeps a queue of timed events, and uses the OS's select or poll functionality to watch any file handles or sockets you're interested in for activity. When it's time for an event to fire, or data is available, the associated state handler is invoked. Other event loops are also available, making it possible to have POE programs that have [id://278704|Tk] or curses GUIs, for instance. <p> The sample this tutorial is built around is my answer to the STDIN timeout question asked above. You'll see that it's very easy to write an interactive application in POE with command-line editing and history. <p> The first step in any POE program is using the POE module.
Since POE programs often need to use other modules from the POE:: namespace, you can do <p> <pre> use POE qw/Wheel::ReadLine Component::Client::HTTP::SSL/; </pre> as a shortcut for <pre> use POE; use POE::Wheel::ReadLine; use POE::Component::Client::HTTP::SSL; </pre> <p> In this case, we only need POE::Wheel::ReadLine, which will handle our input requirements. <p> Each program consists of one or more POE Sessions, which hold a set of states. <code>
#!/usr/bin/perl
use strict;
use warnings;
use POE qw/Wheel::ReadLine/;

POE::Session->create(
    inline_states => {
        _start  => \&handler_start,
        # _stop => \&handler_stop,
        prompt  => \&handler_prompt,
        timeout => \&handler_timeout,
        gotLine => \&handler_gotLine,
    }
);
</code> In the session's constructor, we specify a hash of state names and the state handlers associated with them. The subroutines can be named, as they are here, or they can be anonymous sub references. <p> The _start and _stop states are special; they are invoked by the kernel when the session is created, or just before it's destroyed. <p> In this case, we don't need to do anything special to handle _stop, so that state is commented out, and the handler isn't implemented. <p> A POE session starts running as soon as it is created. That means that your _start handler will be invoked before POE::Session->create returns. <p> The next step is to start the kernel running, and exit the program once it's done. <p> <code>
$poe_kernel->run();
exit;
</code> $poe_kernel is exported by POE automatically. <p> Of course, we haven't DEFINED any state handlers yet, so our program won't even compile, let alone run. <p> Every POE state handler is passed a large number of arguments in @_. The most interesting ones are the heap, kernel, and session associated with this state. <p> The POE heap is just a scalar, which usually is used as a hash reference. <p> These values can be accessed using an array slice on @_ to initialize scope variables, or explicitly referred to as $_[HEAP], $_[KERNEL], or $_[SESSION]. POE exports HEAP, KERNEL, SESSION, and several other constants automatically.
<p> The first handler we'll see is our start handler: <code>
sub handler_start {
    my ($kernel, $heap, $session) = @_[KERNEL, HEAP, SESSION];
    $heap->{wheel} = POE::Wheel::ReadLine->new(InputEvent => 'gotLine');
    $kernel->yield('prompt');
}
</code> POE's wheels are modules which handle the hassle of gluing outside event generators, like sockets or file handles, to POE states. <p> POE::Wheel::ReadLine is a special wheel which invokes states when data is input on the console. It also handles command line editing and history, with a bit of help from us. <p> <code>
sub handler_prompt {
    my ($kernel, $heap, $session) = @_[KERNEL, HEAP, SESSION];
    print "You have 10 seconds to enter something, or I'll quit!$/";
    $heap->{wheel}->get('Type fast: ');
    # this will make the timeout event fire in 10 seconds
    $kernel->delay('timeout',10);
}
</code> All this handler does is use the get method on our Wheel to prompt the user for input, and then schedule a timeout event in 10 seconds. <p> Even if this isn't the first time this handler is invoked, calling delay removes the old event, and schedules a new one 10 seconds out. <p> From here, things are in the hands of the kernel. If the user does nothing, in 10 seconds (or so; timeouts are approximate) the timeout state will be triggered: <code>
sub handler_timeout {
    my $heap = $_[HEAP];
    print "Too slow!$/";
    $heap->{wheel} = undef;
}
</code> When the wheel member is undefined in handler_timeout, the wheel is destroyed, and since there are no pending events and no event sources, the kernel exits. <p> If the user does enter something, or hits Ctrl-C, the gotLine handler is called. <code>
sub handler_gotLine {
    my ($kernel, $heap, $input, $exception) = @_[KERNEL, HEAP, ARG0, ARG1];
    if (defined $input) {
        print "You entered: $input$/";
    } else {
        print "Exception: $exception$/";
    }
    $kernel->yield('prompt');
}
</code> One thing that's interesting here is that we read the ARG0 and ARG1 members of @_. POE passes the arguments last in @_, ranging from ARG0 to $#_. In the case of an InputHandler for this wheel, ARG0 is the text input, and if ARG0 is undef, ARG1 is the exception code. <p> After "handling" the input, this handler yields back to the prompt handler, which reschedules the timeout and prompts the user again. </readmore><p> I hope this quick walkthrough of a simple POE program will help you understand how POE operates.
The other tutorials and beginners' guides I linked above are even better, so you should be up and running from state to state in no time. :) <p><small>Edit by [tye], add READMORE</small></p>
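For comparison, the core of what the POE kernel does in this example (wait on a file handle with a timeout, via the OS's select/poll) can be sketched in a few lines of Python. A pipe stands in for STDIN so the sketch is self-contained; this is an illustration of the mechanism, not of POE itself:

```python
import os
import select

read_fd, write_fd = os.pipe()  # stand-in for STDIN

# nothing written yet: select times out, like the timeout state firing
ready, _, _ = select.select([read_fd], [], [], 0.1)
timed_out = not ready

# once data arrives, the "gotLine" path would run instead
os.write(write_fd, b"hello\n")
ready, _, _ = select.select([read_fd], [], [], 0.1)
got_line = os.read(read_fd, 1024) if ready else None

os.close(read_fd)
os.close(write_fd)
```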
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=282468
CC-MAIN-2016-18
refinedweb
985
68.5
In a previous article, I presented a maximum entropy modeling library called SharpEntropy, a C# port of a mature Java library called the MaxEnt toolkit. The Java MaxEnt library is used by another open source Java library, called OpenNLP, which provides a number of natural language processing tools based on maximum entropy models. This article shows you how to use my C# port of the OpenNLP library to generate parse trees for English language sentences, and explores some of the other features of the OpenNLP code. Please note that because the original Java OpenNLP library is published under the LGPL license, the source code to the C# OpenNLP library is also available. OpenNLP is both the name of a group of open source projects related to natural language processing (NLP), and the name of a library of NLP tools written in Java by Jason Baldridge, Tom Morton, and Gann Bierner. My C# port is based upon the latest version (1.2.0) of the Java OpenNLP tools, released in April 2005. Development of the Java library is ongoing, and I hope to update the C# port as new developments occur. Tools included in the C# port are: a sentence splitter, a tokenizer, a part-of-speech tagger, a chunker (used to "find non-recursive syntactic annotations such as noun phrase chunks"), a parser, and a name finder. The Java library also includes a tool for co-reference resolution, but the code for this feature is in flux and has not yet been ported to C#. All of these tools are driven by maximum entropy models processed by the SharpEntropy library. Since this article was first written, the coreference tool has been ported to C# and is available, along with the latest version of the other tools, from the SharpNLP Project on CodePlex. Since this article was first written, the required binary data files have also been made available for download from the SharpNLP Project on CodePlex.
Instead of downloading the Java-compatible files from Sourceforge and then converting them via the ModelConverter tool, you can download them directly in the required .nbin format. The maximum entropy models that drive the OpenNLP library consist of a set of binary data files, totaling 123 MB. Because of their large size, it isn't possible to offer them for download from CodeProject. Unfortunately, this means that setting up the OpenNLP library on your machine requires more steps than simply downloading the Zip file, unpacking, and running the executables. First, download the demo project Zip file and unzip its contents into a folder on your hard disk. Then, in your chosen folder, create a subfolder named "Models". Create two subfolders inside "Models", one called "Parser" and one called "NameFind". Secondly, download the OpenNLP model files from the CVS repository belonging to the Java OpenNLP library project area on SourceForge. This can be done via a CVS client, or by using the web interface. Place the .bin files for the chunker (EnglishChunk.bin), the POS tagger (EnglishPOS.bin), the sentence splitter (EnglishSD.bin), and the tokenizer (EnglishTok.bin) in the Models folder you created in the first step. This screenshot shows the file arrangement required: Place the .bin files for the name finder into the NameFind subfolder, like this: Then, place the files required for the parser into the Parser subfolder. This includes the files called "tagdict" and "head_rules", as well as the four .bin files: These models were created by the Java OpenNLP team in the original MaxEnt format. They must be converted into .NET format for them to work with the C# OpenNLP library. The article on SharpEntropy explains the different model formats understood by the SharpEntropy library and the reasons for using them. The command line program ModelConverter.exe is provided as part of the demo project download for the purpose of converting the model files. 
Run it from the command prompt, specifying the location of the "Models" folder, and it will take each of the .bin files and create a new .nbin file from it. This process will typically take some time, several minutes or more, depending on your hardware configuration. (This screenshot, like the folder screenshots above it, is taken from the Windows 98 virtual machine I used for testing. Of course, the code works on newer operating systems as well; my main development machine is Windows XP.) Once the model converter has completed successfully, the demo executables should run correctly. As well as the ModelConverter, the demonstration project provides two Windows Forms executables: ToolsExample.exe and ParseTree.exe. Both of these use OpenNLP.dll, which in turn relies on SharpEntropy.dll, the SharpEntropy library which I explored in my previous article. The Parse Tree demo also uses (a modified version of) the Netron Project's treeview control, called "Lithium", available from CodeProject here. The Tools Example provides a simple interface to showcase the various natural language processing tools provided by the OpenNLP library. The Parse Tree demo uses the modified Lithium control to provide a more graphical demonstration of the English sentence parsing achievable with OpenNLP. The source code is provided for the two Windows Forms executables, the ModelConverter program, and the OpenNLP library (which is LGPL licensed). Source code is also included for the modified Lithium control, though the changes to the original CodeProject version are minimal. Source code for the SharpEntropy library can be obtained from my SharpEntropy article. The source code is written so that the EXEs look for the "Models" folder inside the folder they are running from.
This means that if you are running the projects from the development environment, you will either need to place the "Models" subfolder inside the appropriate "bin" directory created when you compile the code, or change the source code to look for a different location. This is the relevant code, from the MainForm constructor:

    mModelPath = System.IO.Path.GetDirectoryName(
        System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase);
    mModelPath = new System.Uri(mModelPath).LocalPath + @"\Models\";

This could be replaced with your own scheme for calculating the location of the Models folder. The OpenNLP code is set up to use a SharpEntropy.IO.IGisModelReader implementation that holds all of the model data in memory. This is unlikely to cause problems when using some of the simple tools, such as the sentence splitter or the tokenizer. More complex tools, such as the parser and name finder, use several large models. The maximum entropy model data for the English parser consumes approximately 250 MB of memory, so I would recommend that you use appropriately powerful hardware when running this code. If your PC runs out of physical memory and starts using the hard disk instead, you will obviously experience an extreme slowdown in performance.

If we have a paragraph of text in a string variable input, a simple and limited way of dividing it into sentences would be to use input.Split('.') to obtain an array of strings. Extending this to input.Split('.', '!', '?') would handle more cases correctly. But while this is a reasonable list of punctuation characters that can end sentences, this technique does not recognize that they can appear in the middle of sentences too. Take the following simple paragraph:

    Mr. Jones went shopping. His grocery bill came to $23.45.
Using the Split method on this input will result in an array with five elements, when we really want an array with only two. We can do this by treating each of the characters '.', '!', '?' as potential rather than definite end-of-sentence markers. We scan through the input text, and each time we come to one of these characters, we need a way of deciding whether or not it marks the end of a sentence. This is where the maximum entropy model comes in useful. A set of predicates related to the possible end-of-sentence positions is generated. Various features, relating to the characters before and after the possible end-of-sentence markers, are used to generate this set of predicates. This set of predicates is then evaluated against the MaxEnt model. If the best outcome indicates a sentence break, then the characters up to and including the position of the end-of-sentence marker are separated off into a new sentence.

All of this functionality is packaged into the classes in the OpenNLP.Tools.SentenceDetect namespace, so all that is necessary to perform intelligent sentence splitting is to instantiate an EnglishMaximumEntropySentenceDetector object and call its SentenceDetect method:

    using OpenNLP.Tools.SentenceDetect;

    EnglishMaximumEntropySentenceDetector sentenceDetector =
        new EnglishMaximumEntropySentenceDetector(mModelPath + "EnglishSD.nbin");
    string[] sentences = sentenceDetector.SentenceDetect(input);

The simplest EnglishMaximumEntropySentenceDetector constructor takes one argument, a string containing the file path to the sentence detection MaxEnt model file. If the text shown in the simple example above is passed into the SentenceDetect method, the result will be an array with two elements: "Mr. Jones went shopping." and "His grocery bill came to $23.45."
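The weakness of naive splitting described above is easy to verify. The article's code is C#, but the same experiment in Python makes a quick self-contained check:

```python
text = "Mr. Jones went shopping. His grocery bill came to $23.45."

# naive splitting treats every period as a sentence boundary
naive = text.split(".")
# yields: "Mr", " Jones went shopping", " His grocery bill came to $23", "45", ""

# what a real sentence detector should return
wanted = ["Mr. Jones went shopping.", "His grocery bill came to $23.45."]
```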
Enter a paragraph of text into the top textbox, and click the "Split" button. The split sentences will appear in the lower textbox, each on a separate line. Having isolated a sentence, we may wish to apply some NLP technique to it: part-of-speech tagging, or full parsing, perhaps. The first step in this process is to split the sentence into "tokens", that is, words and punctuation. Again, the Split method alone is not adequate to achieve this accurately. Instead, we can use the Tokenize method of the EnglishMaximumEntropyTokenizer object. This class, and the related classes in the OpenNLP.Tools.Tokenize namespace, use the same method for tokenizing sentences as I described in the second half of the SharpEntropy article, which I won't repeat here. As with the sentence detection classes, using this functionality is as simple as instantiating a class and calling a single method:

    using OpenNLP.Tools.Tokenize;

    EnglishMaximumEntropyTokenizer tokenizer =
        new EnglishMaximumEntropyTokenizer(mModelPath + "EnglishTok.nbin");
    string[] tokens = tokenizer.Tokenize(sentence);

This tokenizer will split words that consist of contractions: for example, it will split "don't" into "do" and "n't", because it is designed to pass these tokens on to the other NLP tools, where "do" is recognized as a verb, and "n't" as a contraction of "not", an adverb modifying the preceding verb "do". The "Tokenize" button in the Tools Example splits text in the top textbox into sentences, then tokenizes each sentence. The output, in the lower textbox, places pipe characters between the tokens. Part-of-speech tagging is the act of assigning a part of speech (sometimes abbreviated POS) to each word in a sentence.
Having obtained an array of tokens from the tokenization process, we can feed that array to the part-of-speech tagger:

    using OpenNLP.Tools.PosTagger;

    EnglishMaximumEntropyPosTagger posTagger =
        new EnglishMaximumEntropyPosTagger(mModelPath + "EnglishPOS.nbin");
    string[] tags = posTagger.Tag(tokens);

The POS tags are returned in an array of the same length as the tokens array, where the tag at each index of the array matches the token found at the same index in the tokens array. The POS tags consist of coded abbreviations conforming to the scheme of the Penn Treebank, the linguistic corpus developed by the University of Pennsylvania. The list of possible tags can be obtained by calling the AllTags() method; here they are, followed by the Penn Treebank description:

    CC    Coordinating conjunction
    CD    Cardinal number
    DT    Determiner
    EX    Existential there
    FW    Foreign word
    IN    Preposition/subordinate conjunction
    JJ    Adjective
    JJR   Adjective, comparative
    JJS   Adjective, superlative
    LS    List item marker
    MD    Modal
    NN    Noun, singular or mass
    NNP   Proper noun, singular
    NNPS  Proper noun, plural
    NNS   Noun, plural
    PDT   Predeterminer
    POS   Possessive ending
    PRP   Personal pronoun
    PRP$  Possessive pronoun
    RB    Adverb
    RBR   Adverb, comparative
    RBS   Adverb, superlative
    RP    Particle
    SYM   Symbol
    TO    to
    UH    Interjection
    VB    Verb, base form
    VBD   Verb, past tense
    VBG   Verb, gerund/present participle
    VBN   Verb, past participle
    VBP   Verb, non-3rd ps. sing. present
    VBZ   Verb, 3rd ps. sing. present
    WDT   wh-determiner
    WP    wh-pronoun
    WP$   Possessive wh-pronoun
    WRB   wh-adverb
    ``    Left open double quote
    ,     Comma
    ''    Right close double quote
    .     Sentence-final punctuation
    :     Colon, semi-colon
    $     Dollar sign
    #     Pound sign
    -LRB- Left parenthesis *
    -RRB- Right parenthesis *

* The Penn Treebank uses the ( and ) symbols, but these are used elsewhere by the OpenNLP parser.

The maximum entropy model used for the POS tagger was trained using text from the Wall Street Journal and the Brown Corpus.
It is possible to further control the POS tagger by providing it with a POS lookup list. There are two alternative EnglishMaximumEntropyPosTagger constructors that specify a POS lookup list, either by a filepath or by a PosLookupList object. The standard POS tagger does not use a lookup list, but the full parser does. The lookup list consists of a text file with a word and its possible POS tags on each line. If a word in the sentence you are tagging is found in the lookup list, the POS tagger can restrict the list of possible POS tags to those specified in the lookup list, making it more likely to choose the correct tag.

The Tag method has two versions, one taking an array of strings and a second taking an ArrayList. In addition to these methods, the EnglishMaximumEntropyPosTagger also has a TagSentence method. This method bypasses the tokenizing step, taking in an entire sentence and relying on a simple Split to find the tokens. It also formats the result of the POS tagging with each token followed by a '/' and then its tag, a format often used for the display of the results of POS tagging algorithms.

The Tools Example application splits an input paragraph into sentences, tokenizes each sentence, and then POS tags that sentence by using the Tag method. Here, we see the results on the first few sentences of G. K. Chesterton's novel, The Man Who Was Thursday. Each token is followed by a '/' character, and then the tag assigned to it by the maximum entropy model as the most likely part of speech.

The OpenNLP chunker tool will group the tokens of a sentence into larger chunks, each chunk corresponding to a syntactic unit such as a noun phrase or a verb phrase. This is the next step on the way to full parsing, but it could also be useful in itself when looking for units of meaning in a sentence larger than the individual words.
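The restriction that the POS lookup list performs, described above, can be sketched as follows. The assumed file format (one word per line, followed by its permitted tags, whitespace-separated) is my reading of the description; the entries themselves are invented:

```python
# Toy sketch of how a POS lookup list (tagdict) constrains tagging.
# Assumed file format: one word per line followed by its possible tags,
# whitespace-separated. The entries are invented for illustration.
def parse_lookup_list(lines):
    lookup = {}
    for line in lines:
        parts = line.split()
        if parts:
            lookup[parts[0]] = parts[1:]
    return lookup

def candidate_tags(word, lookup, all_tags):
    # a word found in the list is restricted to its listed tags;
    # any other word may take any tag
    return lookup.get(word, all_tags)

lookup = parse_lookup_list(["can MD NN VB", "run NN VB VBP"])
print(candidate_tags("can", lookup, ["<any>"]))    # restricted to listed tags
print(candidate_tags("zebra", lookup, ["<any>"]))  # unrestricted
```

Narrowing the candidate set this way is what makes the tagger "more likely to choose the correct tag" for listed words.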
To perform the chunking task, a POS tagged set of tokens is required. The EnglishTreebankChunker class has a Chunk method that takes in the string array of tokens and the string array of POS tags that we generated by calling the POS tagger, and returns a third string array, again with one entry for each token. This array requires some interpretation for it to be of use. The strings it contains begin either with "B-", indicating that this token begins a chunk, or "I-", indicating that the token is inside a chunk but is not the beginning of it. After this prefix is a Penn Treebank tag indicating the type of chunk that the token belongs to:

ADJP   Adjective Phrase
ADVP   Adverb Phrase
CONJP  Conjunction Phrase
INTJ   Interjection
LST    List marker
NP     Noun Phrase
PP     Prepositional Phrase
PRT    Particle
SBAR   Clause introduced by a subordinating conjunction
UCP    Unlike Coordinated Phrase
VP     Verb Phrase

The EnglishTreebankChunker class also has a GetChunks method, which will return the whole sentence as a formatted string, with the chunks indicated by square brackets. This can be called as follows:

using OpenNLP.Tools.Chunker;

EnglishTreebankChunker chunker = new EnglishTreebankChunker(mModelPath + "EnglishChunk.nbin");
string formattedSentence = chunker.GetChunks(tokens, tags);

The Tools Example application uses the POS-tagging code to generate the string arrays of tokens and tags, and then passes them to the chunker. The result shows the POS tags indicated as before, but with the chunks shown by square-bracketed sections in the output sentences.

Producing a full parse tree is a task that builds on the NLP algorithms we have covered up until now, but which goes further in grouping the chunked phrases into a tree diagram that illustrates the structure of the sentence.
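The "B-"/"I-" interpretation described above amounts to a simple grouping pass over the parallel arrays. Here is a toy Python sketch of that pass (the real GetChunks also prints the chunk types; this sketch drops them for brevity, and the sample tags are invented):

```python
# Toy interpretation of the chunker's output array: "B-" starts a new
# chunk, "I-" continues it. Bracketing mimics GetChunks, though chunk
# type labels are omitted here for brevity.
def format_chunks(tokens, chunk_tags):
    out, current = [], []

    def flush():
        if current:
            out.append("[" + " ".join(current) + "]")
            current.clear()

    for tok, tag in zip(tokens, chunk_tags):
        if tag.startswith("B-"):
            flush()          # close the previous chunk, start a new one
            current.append(tok)
        elif tag.startswith("I-"):
            current.append(tok)
        else:                # token outside any chunk
            flush()
            out.append(tok)
    flush()
    return " ".join(out)

tokens = ["The", "man", "was", "Thursday"]
tags = ["B-NP", "I-NP", "B-VP", "B-NP"]
print(format_chunks(tokens, tags))  # [The man] [was] [Thursday]
```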
The full parse algorithms implemented by the OpenNLP library use the sentence splitting and tokenizing steps, but perform the POS-tagging and chunking as part of a separate but related procedure driven by the models in the "Parser" subfolder of the "Models" folder. The full parse POS-tagging step uses a tag lookup list, found in the tagdict file.

The full parser is invoked by creating an object from the EnglishTreebankParser class, and then calling the DoParse method:

using OpenNLP.Tools.Parser;

EnglishTreebankParser parser = new EnglishTreebankParser(mModelPath, true, false);
Parse sentenceParse = parser.DoParse(sentence);

There are many constructors for the EnglishTreebankParser class, but one of the simplest takes three arguments: the path to the Models folder, and two boolean flags, the first indicating whether we are using the tag lookup list, and the second indicating whether the tag lookup list is case sensitive. The DoParse method also has a number of overloads, taking in either a single sentence or a string array of sentences, and optionally allowing you to request more than one of the top ranked parse trees (ranked with the most probable parse tree first).

The simple version of the DoParse method takes in a single sentence, and returns an object of type OpenNLP.Tools.Parser.Parse. This object is the root in a tree of Parse objects representing the best guess parse of the sentence. The tree can be traversed by using the Parse object's GetChildren() method and the Parent property. The Penn Treebank tag of each parse node is found in the Type property, except when the node represents one of the tokens in the sentence - in this case, the Type property will equal MaximumEntropyParser.TokenNode. The Span property indicates the section of the sentence to which the parse node corresponds.
The Span property is of type OpenNLP.Tools.Util.Span, and has Start and End properties indicating the characters of the portion of the sentence that the parse node represents.

The Parse Tree demo application shows how this Parse structure can be traversed and mapped onto a Lithium graph control, generating a graphical representation of the parse tree. The work is kicked off by the ShowParse() method of the MainForm class, which calls the recursive AddChildNodes() method to build the graph. The Tools Example, meanwhile, uses the built-in Show() method of the root Parse object to produce a textual representation of the parse graph.

"Name finding" is the term used by the OpenNLP library to refer to the identification of classes of entities within the sentence - for example, people's names, locations, dates, and so on. The name finder can find up to seven different types of entities, represented by the seven maximum entropy model files in the NameFind subfolder: date, location, money, organization, percentage, person, and time. It would, of course, be possible to train new models using the SharpEntropy library to find other classes of entities. Since this algorithm is dependent on the use of training data, and there are many, many tokens that might fall into a category such as "person" or "location", it is far from foolproof.

The name finding function is invoked by first creating an object of type OpenNLP.Tools.NameFind.EnglishNameFinder, passing it the path to the NameFind subfolder containing the name finding maximum entropy models. Then, call the GetNames() method, passing in a string array of entity types to look for, and the input sentence.
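Stepping back to the parse tree for a moment: the recursive traversal that Show() performs can be pictured with a toy stand-in. The node class, node types, and sentence below are invented for illustration; this is not the OpenNLP API, just the shape of a tree walk that emits the familiar bracketed notation:

```python
# Toy stand-in for the Parse tree: each node has a type and children, and
# leaf nodes carry token text. The recursive walk produces bracketed
# notation similar to what the real Show() method emits.
class ToyParseNode:
    def __init__(self, node_type, children=None, token=None):
        self.node_type = node_type
        self.children = children or []
        self.token = token  # set only on token (leaf) nodes

    def show(self):
        if self.token is not None:
            return self.token
        inner = " ".join(child.show() for child in self.children)
        return f"({self.node_type} {inner})"

tree = ToyParseNode("S", [
    ToyParseNode("NP", [ToyParseNode("NNP", token="Syme")]),
    ToyParseNode("VP", [ToyParseNode("VBD", token="smiled")]),
])
print(tree.show())  # (S (NP Syme) (VP smiled))
```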
using OpenNLP.Tools.NameFind;

EnglishNameFinder nameFinder = new EnglishNameFinder(mModelPath + "namefind\\");
string[] models = new string[] {"date", "location", "money", "organization", "percentage", "person", "time"};
string formattedSentence = nameFinder.GetNames(models, sentence);

The result is a formatted sentence with XML-like tags indicating where entities have been found. It is also possible to pass a Parse object, the root of a parse tree structure generated by the EnglishTreebankParser, rather than a string sentence. This will insert tags into the parse structure showing the entities found by the Name Finder.

My C# conversion of the OpenNLP library provides a set of tools that make some important natural language processing tasks simple to perform. The demo applications illustrate how easy it is to invoke the library's classes and get good results quickly. The library does rely on holding large maximum entropy model data files in memory, so the more complicated NLP tasks (full parsing and name finding) are memory-intensive. On machines with plenty of memory, performance is impressive: a 3.4 GHz Pentium IV machine with 2 GB of RAM loaded the parse data into memory in 12 seconds. Querying the model once loaded by passing sentence data to it produced almost instantaneous parse results.

Work on the Java OpenNLP library is ongoing. The C# version now has a coreference tool, and its development is also active, at the SharpNLP project on CodePlex. Investigations into speedy ways of retrieving MaxEnt model data from disk, rather than holding the data in memory, are also underway.
When I typed the code below in app/controllers/application_controller.rb:

class ApplicationController < ActionController::Base
  protect_from_forgery with: :exception

  def hello
    render html: "hello, world!"
  end
end

and the code below in config/routes.rb:

Rails.application.routes.draw do
  root 'application#hello'
end

the root route still returns the default Rails page, while I was expecting it would return "hello, world!". Please help me with this small issue.

I just built a test app and used your code above... it worked great! I went to the root URL and successfully got the "hello, world!" message. What version of Rails are you using? What happens when you run rake routes (or rails routes if you're on 5+) from the command line? Mine looks like this:

$ rake routes
Prefix Verb URI Pattern Controller#Action
  root GET  /           application#hello

Note - you might need to restart your Rails server depending on how you have everything set up, but if your server is running on your laptop, the reboot should have handled that.
Visual C++ Program Manager

As overdue as this post is, let's just jump in. In my first 4 installments, I focused on the different ways you could access native functionality from managed code. In this post, I will flip the actors around and investigate how to expose managed functionality to native clients. The first thing to note is that COM Interop can always solve this problem. Using very little work, you can take a managed assembly and use built-in framework tools to generate a COM shim for native C++ (or VB) clients. Of course, that means your calling code has to go through COM, and if this is solely for the purpose of interop and you don't need to use COM as your component technology, then the performance cost is absolutely not worth it. Thus, once again, C++/CLI will save us :-)

In our little story, I have a HelloWorld C# type that looks like this:

public class HelloWorld
{
    private static int Counter = 0;

    public void Speak()
    {
        MessageBox.Show("Hello World #" + Counter++);
    }
}

Now we have a C++ client that wants to access this functionality (apparently, this user doesn't know that MessageBox exists as a purely native API). The simplest way to access it is to compile the file where the calling code is with /clr and then instantiate this object and call the method. Super easy.

Hold on, the client just called and said the code must remain 100% native. Why? Who knows :-) In this case, we'll have to create an inverted wrapper that provides a purely native interface. Here we go… The execution is simple: build a new DLL that is compiled /clr but that exposes a native class as opposed to a reference class (remember, C++/CLI preserves native semantics and automagically exports things correctly).
// NativeHello.h
#ifdef _MANAGED
#using <HelloWorld.dll>
#include <vcclr.h>
#else
#include <stddef.h>
#endif

class NativeHello
{
private:
#ifdef _MANAGED
    gcroot<HelloWorld^> hw;
#else
    intptr_t hw;
#endif
public:
    __declspec(dllexport) NativeHello();
    __declspec(dllexport) ~NativeHello();
    __declspec(dllexport) void Speak();
};

With the following implementation:

NativeHello::NativeHello()
{
    // initialize managed hello world
    hw = gcnew HelloWorld();
}

NativeHello::~NativeHello()
{
    // nothing to do :)
}

void NativeHello::Speak()
{
    hw->Speak();
}

Let's go through each part of this in turn. At the top, I enclosed two statements within a check to see if we are compiling as managed (i.e. with /clr). The #using statement is essentially the equivalent of #import for COM. For C# programmers out there, this statement is equivalent to adding a reference to an assembly. The #include statement introduces the gcroot abstraction, which I'll talk about in a second. Now why did I enclose these statements in #ifdef _MANAGED? Our goal is to create a DLL that can be accessed by a purely native client, and unfortunately in the native world, libraries do not (exactly) describe themselves, so we need to use a header file as the descriptor. When a native client includes our native wrapper header file, the code enclosed within the _MANAGED block will be ignored. This is necessary since these statements only make sense for managed compilands. Luckily, the native client only needs to know about the types/functions we're exporting, and hiding these statements has no ill effect. The #else clause adds an #include for the intptr_t mentioned below. Our wrapper type is then declared with a private member called hw, which is of type gcroot<HelloWorld^>. In the converse example from my previous posts, we simply embedded a native pointer as a private member.
The fact is, you can't have a handle embedded in a native type, so the gcroot template creates a (seamless) indirection by using the BCL's GCHandle value type, which enables the native code to hold a managed object and prevents the CLR's garbage collection of the object. However, this template only makes sense when we're compiling managed. Thus, in the case of a native includer, the gcroot member is stored as a simple intptr_t, which has the same size as gcroot on any platform.

Of course, we need to start exporting some real functionality! The constructor, destructor and the Speak method are all exported in the traditional native manner using __declspec(dllexport). None of these prototypes expose the managed implementation, as is our goal. In other words, here is the view of the wrapper class from a native client:

class NativeHello
{
private:
    intptr_t hw;
public:
    NativeHello();
    ~NativeHello();
    void Speak();
};

Voila. We now have a wrapper for purely native clients that are unable to use /clr (VC6 clients, I presume!). Of course, you should make sure these clients use the same compiler, unless you want to open the way for the most obscure bugs on the planet, nay, in the galaxy (mixing CRTs leads to the dark side). If you find yourself needing this type of wrapper often, you should go take a peek at Paul DiLascia's generic version of this sample that generates wrappers for any managed type.

I apologize for the long delay for this section (although I suppose my average posting frequency is already pretty low), but I was on a much needed vacation. I finished the last chapter with a brief mention of what I would talk about now, which is the native support for interop that C++ provides. In a sense, I hope this is going to appear to be the simplest method, even though I will introduce a few new concepts and use C++/CLI, which adds new language constructs to C++ in order to express .NET semantics (e.g. garbage collected types). As always, let us reprise our original HelloWorld example.
I'm going to include it again for the sake of making this post depend as little as possible on the previous ones.

// HelloWorld.h
#pragma once

class __declspec(dllexport) HelloWorld
{
public:
    HelloWorld();
    ~HelloWorld();
    void SayThis(wchar_t *phrase);
};

// HelloWorld.cpp
void HelloWorld::SayThis(wchar_t *phrase)
{
    MessageBox(NULL, phrase, L"Hello World Says", MB_OK);
}

Our goal is to access this type from .NET. As it stands, this piece of code already compiles into a native DLL. The question that stands before us first is what clients will access this code from now on. In other words, are we replacing all existing client code of this DLL with managed code, or are we going to maintain some purely native clients? In the first case, we can write our wrapper code directly into the DLL and compile it into a managed assembly (with native code backing it). In the second case, we need to create a second DLL that will be a native client to this one while publishing a managed interface for .NET clients. It is the latter case that we are going to jump into now.

The first thing to do is to create a new CLR project, which we can do with a wizard (look under the Visual C++ > CLR node in the New Project dialog) or simply by taking a blank slate and making the project compile with the /clr switch. This switch is the cornerstone of this entire scenario. If you remember the first part in this series, we showed how the C++ compiler is able to generate MSIL and, furthermore, it can generate a process image with both a managed and a native section (the only compiler capable of doing so, I might add). We have yet to really lay down the bricks for our wrapper, so let's make a naïve wrapper for HelloWorld now.
// cppcliwrapper.h
#include "..\interop101\helloworld.h"

namespace cppcliwrapper
{
    class ManagedHelloWorld
    {
    private:
        HelloWorld hw;
    public:
        ManagedHelloWorld();
        ~ManagedHelloWorld();
        void SayThis(wchar_t *phrase);
    };
}

This piece of code is a native wrapper around our native type using traditional OO encapsulation. Even though this piece of code will compile into MSIL, it does not solve our original problem. Why is that? It's because we're still dealing with a native type. In other words, the ManagedHelloWorld class still obeys the rules of native semantics, namely the fact that it must live on the native heap. Managed languages like C# have no knowledge of the native heap, and their new operator only instantiates objects onto the CLR's heap. We need to make this wrapper a managed type, which will have the same semantics as a class in C#. Enter C++/CLI. With these additions to the language, we can create two new types of classes: managed value and reference types (the difference is mainly in the way they are implicitly copied). For our wrapper, we simply need to change its declaration from class to ref class. Once we compile the resulting code, we get a pivotal error.

error C4368: cannot define 'hw' as a member of managed 'ManagedHelloWorld': mixed types are not supported

What could this possibly mean? This error is actually directly related to the problem we described just above. In order to be a proper managed reference type that C# and other managed languages can instantiate, we cannot encapsulate native members. Indeed, our wrapper cannot live on the CLR's managed heap as it contains a member that can only live on the native heap. We can resolve this issue by encapsulating a pointer to our native type. Thus we have the following wrapper code.

ref class ManagedHelloWorld
{
private:
    HelloWorld *hw;
public:
    ManagedHelloWorld();
    ~ManagedHelloWorld();
    void SayThis(wchar_t *phrase);
};
Only three things remain in order to make this wrapper usable. The first is to make it public in accordance with .NET accessibility rules. The second is to change the interface of SayThis such that it uses a managed string. The third is to include the implementation! So here it goes.

// cppcliwrapper.cpp
#include "cppcliwrapper.h"
#include "marshal.h"

using namespace cppcliwrapper;

ManagedHelloWorld::ManagedHelloWorld() : hw(new HelloWorld())
{
}

ManagedHelloWorld::~ManagedHelloWorld()
{
    delete hw;
}

void ManagedHelloWorld::SayThis(System::String^ phrase)
{
    hw->SayThis(marshal::to<wchar_t*>(phrase));
}

There are two notable elements we have introduced in this final piece of code: the managed handle and data marshalling. The handle or "hat" (or "accent circonflexe" even) is part of the C++/CLI language. It represents a pointer to a managed object. Other languages like Java, C# and VB don't use anything like this as they no longer have native semantics. However, C++ needs to differentiate between the stack, the native heap and the managed heap, and it does so using * and ^.

Data marshalling is a huge topic and can eventually become one of the more complex things you have to manage when working with interop. In this example, we need to convert a managed String into a native pointer to wchar_t. In order to do this, a great pattern is to create a library of static template functions, which thus remain stateless and help maintain a certain level of consistency. In this example, we created the following functions:

namespace marshal
{
    template <typename T>
    static T to(System::String^ str);

    template <>
    static wchar_t* to(System::String^ str)
    {
        pin_ptr<const wchar_t> cpwc = PtrToStringChars(str);
        int len = str->Length + 1;
        wchar_t* pwc = new wchar_t[len];
        wcscpy_s(pwc, len, cpwc);
        return pwc;
    }
}

After this is all said and done, we compile our code into an assembly that 3rd party .NET clients can use as if it were written in C#. So here is our resulting client code, which is eerily similar to the COM example.
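A quick aside before the client code: the job marshal::to<wchar_t*> does above - copying a managed string into a newly allocated native wide-character buffer - has a close analogue in Python's ctypes, which may help if the C++/CLI syntax is unfamiliar. This is only an analogy, not part of the sample:

```python
import ctypes

# ctypes performs the same marshalling job for Python strings that the
# marshal helper above performs for System::String: it copies the string
# into a native wide-character buffer of length len + 1 (for the NUL).
buf = ctypes.create_unicode_buffer("Hello World")
print(buf.value)   # the copied string: Hello World
print(len(buf))    # 12, including the terminating NUL
```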
using System;
using System.Text;
using cppcliwrapper;

namespace CSharpDirectCaller
{
    class Program
    {
        static void Main(string[] args)
        {
            ManagedHelloWorld mhw = new ManagedHelloWorld();
            mhw.SayThis("I'm a C# application calling native code via C++ interop!");
        }
    }
}

I have a lot more to say about this, and I promised a performance comparison as well as a 5th part describing doing this in reverse. My next post should not be so long in the making.

CHelloWorld() : hw(new HelloWorld()) { }

~CHelloWorld()
{
    delete hw;
}

DECLARE_PROTECT_FINAL_CONSTRUCT()

HRESULT FinalConstruct()
{
    return S_OK;
}

void FinalRelease()
{
}

HelloWorld *hw;     // The original encapsulated native HelloWorld

STDMETHOD(SayThis)(BSTR phrase);    // The exported method

The COM object implementation [...]

using System.Collections.Generic;
using comwrapper;   // The COM DLL we created

namespace CSharpComCaller [...]

It's funny how often the people within our team (myself included) take certain things for granted. We have provided a great way to bridge the gap between native and managed code with C++/CLI, yet I am continually surprised by how little information has been successfully conveyed. I posted slides from the talk I gave last month on this topic, however they lack the most basic examples. To this effect, I intend to write up a few posts with some simple samples to boil down the basic concepts behind what we have dubbed C++ Interop.

Let's start with "legacy" native Win32 code. The following piece of code defines a simple type that is exported from a dll.

// HelloWorld.h
#pragma once

class __declspec(dllexport) HelloWorld
{
public:
    HelloWorld();
    ~HelloWorld();
    void SayThis(wchar_t *phrase);
};

The implementation of the SayThis method just brings up a message box as follows:

void HelloWorld::SayThis(wchar_t *phrase)
{
    MessageBox(NULL, phrase, L"Hello World Says", MB_OK);
}

Once this code is compiled into a dll, our goal is to instantiate HelloWorld and invoke its one and only method.
A traditional native client would do it by including the header we defined above as follows:

#include "..\interop101\helloworld.h"

int wmain()
{
    HelloWorld hw;
    hw.SayThis(L"I'm a native client");
    return 0;
}

Alright, now that I've reminded you what native code looks like, we can talk interop. In this little educational series, I intend to demonstrate 4 major scenarios. Today let's look at the first and simplest scenario: using C++/CLI.

using namespace System;

int main(array<System::String ^> ^args)
{
    HelloWorld hw;
    hw.SayThis(L"I'm a managed C++ client");
    return 0;
}

Hmmm… Looks virtually identical… This is what we refer to when we talk about IJW or "It Just Works". In other words, all we had to do here was throw the /clr switch on the original code and the compiler generated MSIL instead of x86 assembly. If we delve into the MSIL, we see that transitions to native code (there are three in this case, can you spot them?) are automagically handled by the compiler.

.method assembly static int32 main(string[] args) cil managed
{
  // Code size 51 (0x33)
  .maxstack 2
  .locals ([0] int32 V_0,
           [1] int32 V_1,
           [2] valuetype HelloWorld hw)
  IL_0000: ldc.i4.0
  IL_0001: stloc.0
  IL_0002: ldloca.s hw
  IL_0004: call valuetype HelloWorld* modopt([mscorlib]System.Runtime.CompilerServices.CallConvThiscall) 'HelloWorld.{ctor}'(valuetype HelloWorld* modopt([mscorlib]System.Runtime.CompilerServices.IsConst) modopt([mscorlib]System.Runtime.CompilerServices.IsConst))
  IL_0009: pop
  .try
  {
    IL_000a: ldloca.s hw
    IL_000c: ldsflda valuetype '<CppImplementationDetails>'.$ArrayType$$$BY0BJ@$$CB_W modopt([mscorlib]System.Runtime.CompilerServices.IsConst) '?A0x783d98d5.unnamed-global-0'
    IL_0011: call void modopt([mscorlib]System.Runtime.CompilerServices.CallConvThiscall) HelloWorld.SayThis(valuetype HelloWorld* modopt([mscorlib]System.Runtime.CompilerServices.IsConst) modopt([mscorlib]System.Runtime.CompilerServices.IsConst), char*)
    IL_0016: ldc.i4.0
    IL_0017: stloc.1
    IL_0018: leave.s IL_0028
  }  // end .try
  fault
  {
    IL_001a: ldftn [...]
    IL_0020: ldloca.s hw
    IL_0022: call void ___CxxCallUnwindDtor(method void *(void*), void*)
    IL_0027: endfinally
  }  // end handler
  IL_0028: ldloca.s hw
  IL_002a: call [...]
  IL_002f: ldloc.1
  IL_0030: stloc.0
  IL_0031: ldloc.0
  IL_0032: ret
}  // end of method 'Global Functions'::main

Voila. Short and sweet this time. Next example will be far more useful for C# clients :)

People often wonder about how we deal with keyboard shortcuts in Visual Studio. There are a couple of things to know, and I'll throw in a neat macro too. When you run Visual Studio for the first time, you will get a prompt asking which default settings you wish to use. We created these defaults to accommodate developers who have become accustomed to a certain IDE layout, and we don't want to impose a new one. Therefore we ship VS with 7 default profiles corresponding to types of development activities: J#, C#, VB, C++, Web, Team and General (kind of a default for defaults). These profiles encode things like the window layout, options, help filters, as well as keyboard shortcuts. If you are accustomed to VC6 shortcuts, you'll probably be most comfortable in the C++ profile. On the other hand, if you're coming from the C#/VB world, you might prefer to stick with their settings. Of course, the elite among you will care nothing for these imposed "defaults", and I'll be happy to show you how you can set up your own shortcuts.

There is a single location for all your shortcut management needs, the Keyboard options dialog. It is located within the IDE options dialog (Tools\Options...). You can find it under the top-level Environment node. I like to say that it's a highly functional dialog stuck in a poor UI. Here is a picture of it (the red boxes are my creative addition).
As you can see, it looks quite dense, so I have highlighted what I consider the two features of this dialog everyone should know (I leave the rest up to the reader). The first box is a substring search filter to find any command which can be assigned a shortcut, which is great when you can't remember what the shortcut for a command is. The second box is somewhat the complement of the first, as it allows you to do a reverse lookup of a shortcut. In other words, given a shortcut, it will display what command(s) it is assigned to.

Over time, some of my colleagues decided it would be nice to query Visual Studio to return the list of all assigned shortcuts (in order to print it out or test it or something…). This seemed like an interesting opportunity to flex our extensibility model and write yet another macro. First, let's define a couple of structures for this purpose.

Structure VSShortcut
    Public Scope As String
    Public KeyCombo As String

    Sub New(ByVal fullshortcut As String)
        Scope = fullshortcut.Split("::").GetValue(0)
        KeyCombo = fullshortcut.Split("::").GetValue(2)
    End Sub
End Structure

Structure VSCommand
    Public Name As String
    Public Category As String
    Public Shortcuts As List(Of VSShortcut)

    Sub New(ByVal fullname As String, ByVal keys As Array)
        Name = fullname
        Category = fullname.Split(".").GetValue(0)
        Shortcuts = New List(Of VSShortcut)(5)
        For Each key As String In keys
            Shortcuts.Add(New VSShortcut(key))
        Next
    End Sub
End Structure

These two structures are pretty self-explanatory: they represent a shortcut key combination and a command, respectively. We can now iterate through the list of all commands and generate a list of commands with assigned shortcuts.
Sub GetAssignedCommands(ByRef CommandList As List(Of VSCommand))
    Dim keys As System.Array
    ' iterate through each possible command in VS
    For Each item As EnvDTE.Command In DTE.Commands
        If (item.Name <> Nothing) And (item.Name <> "") Then
            ' get the array of shortcuts for this command
            keys = item.Bindings
            ' add command only if there is at least a shortcut assigned to it
            If keys.Length > 0 Then
                CommandList.Add(New VSCommand(item.Name, keys))
            End If
        End If
    Next
End Sub

At this point, we simply need to print this out. It turns out there's a quick and dirty way of getting some output from a macro in VS: the Output window. Here is the relevant code to spit out some XML (HTML might be more appropriate for a print-out).

Sub WriteStringToOutput(ByRef s As String)
    Dim pane As OutputWindowPane = GetOutputWindowPane("Commands")
    pane.OutputString(s)
End Sub

Sub GenerateShortcutsInXML(ByRef CommandList As List(Of VSCommand), ByRef XMLDoc As XmlDocument)
    Dim commandNode As XmlElement
    Dim shortcutNode As XmlElement
    Dim rootNode As XmlElement

    ' xml preamble
    XMLDoc.InsertBefore(XMLDoc.CreateXmlDeclaration("1.0", "utf-8", Nothing), XMLDoc.DocumentElement)
    rootNode = XMLDoc.CreateElement("commands")
    XMLDoc.AppendChild(rootNode)

    ' iterate through commands
    For Each command As VSCommand In CommandList
        commandNode = XMLDoc.CreateElement("command")
        commandNode.SetAttribute("name", command.Name)
        For Each shortcut As VSShortcut In command.Shortcuts
            shortcutNode = XMLDoc.CreateElement("shortcut")
            shortcutNode.SetAttribute("scope", shortcut.Scope)
            shortcutNode.AppendChild(XMLDoc.CreateTextNode(shortcut.KeyCombo))
            commandNode.AppendChild(shortcutNode)
        Next
        rootNode.AppendChild(commandNode)
    Next
End Sub

And finally, the glue that keeps it all together.
Sub OutputShortcutsAsXml()
    Dim cmdList As List(Of VSCommand) = New List(Of VSCommand)
    Dim xmlDoc As XmlDocument = New XmlDocument()

    GetAssignedCommands(cmdList)
    GenerateShortcutsInXML(cmdList, xmlDoc)

    ' generating the output
    Dim sb As StringBuilder = New StringBuilder()
    Dim writer As XmlTextWriter = New XmlTextWriter(New StringWriter(sb))
    writer.Formatting = Formatting.Indented
    writer.Indentation = 4
    writer.IndentChar = " "c
    xmlDoc.WriteTo(writer)
    writer.Flush()
    writer.Close()

    WriteStringToOutput(sb.ToString())
End Sub

It turns out Word 2007 supports blogging. This will be my grand excuse to start blogging again.
Little great things about Visual Studio 2019
Mads

A few days ago, we announced the general availability of Visual Studio 2019. But I’ve been using Visual Studio 2019 exclusively since the first internal build – long before the release of Preview 1 in December of 2018. During this time, there have been a lot of little features that have put a smile on my face and made me more productive. I want to share a few of them with you since they are not all obvious and some require you to change some settings. Let’s dive in.

Clean solution load

When a solution is closed, its state is saved so that next time you open it, Visual Studio can restore the collapsed/expanded state of projects and folders in Solution Explorer and reopen the documents that were left open. That’s great, but I prefer a clean slate when I open solutions – no files open and all the tree nodes collapsed in Solution Explorer. I wrote the Clean Solution extension to provide this behavior in previous versions of Visual Studio. This feature is now native to Visual Studio 2019 and can be enabled with two separate checkboxes. Go to search (Ctrl+Q) and type in “load” to find the Projects and Solutions > General options page. Uncheck both the Reopen documents on solution load and Restore Solution Explorer project hierarchy on solution load checkboxes. An added benefit from unchecking these two checkboxes is that solutions will load faster too, because of the eliminated overhead from restoring state. Win-win.

Git pull from shortcut

I do a lot of work with GitHub repos and I often take pull requests from people. That means I must make sure to do a git pull before I make any subsequent commits. But, as it turns out repeatedly, this is something I tend to forget. The result is that I end up with merge conflicts and other nuisances. The only way to do git pull in the past was to either use Team Explorer, the command line, or an external tool.
What I really wanted was a keyboard shortcut from within Visual Studio that did it for me. Previously, Team Explorer’s pull command was not a command you could assign keyboard shortcuts to but now it is. Go to search (Ctrl+Q) and type “keyboard” to find the Environment > Keyboard options page. From there, find the Team.Git.Pull command from the list. Then assign any shortcut to it and hit the OK button. I chose to use Ctrl+Shift+P. To automatically perform a git pull upon solution load, try out the free Git Pull extension. Code Cleanup for C# Keeping source code neatly formatted and ensuring coding styles are consistent is something I’ve never been good at. The new Code Cleanup feature is a huge help in keeping my code neat and tidy since I have configured it to run all the fixers by default. To do that, go to the Code Cleanup menu sitting in the bottom margin of the editor window and click Configure Code Cleanup. In the dialog, select all the fixers one by one from the bottom pane and hit the up-arrow button to move them up into the top. Then hit OK. Now all fixers will run every time you perform a Code Cleanup. Simply hit Ctrl+K, Ctrl+E to execute. The result is a nicely formatted document with a bunch of coding style rules applied, such as added missing braces and modifiers. Voila! IntelliCode IntelliCode is a new feature that augments the IntelliSense completions based on the context you’re in using advanced machine learning algorithms. That proves useful for many scenarios including when you are exploring new interfaces or APIs. I write a lot of Visual Studio extensions and the API surface is so big that there are parts of it I have never used. When I’m exploring a new part of the Visual Studio API, I find it very helpful to have IntelliCode guide me through how to use it. To enable this powerful feature, you can download IntelliCode from the Visual Studio Marketplace and install the extension. IntelliCode works for C#, C++ and XAML. 
See content of Clipboard Ring Every time you copy (Ctrl+C) something in Visual Studio, it is being stored in the Clipboard Ring. Hitting Ctrl+Shift+V allows you to cycle through the items in the Clipboard Ring and paste the item you select. I find it very useful to keep multiple things in the clipboard at once and then paste the various items to specific locations. New C# refactorings A number of new refactorings are available as well. They show up as suggestions in the light bulb and include moving members to an interface or base class, adjusting namespaces to match folder structure, converting foreach-loops to LINQ queries, and a lot more. To learn more about the new refactorings and other C# features in Visual Studio 2019, check out this post on the .NET blog. Git Stash Having the ability to stash away some work for future use is super helpful. Git Stash is what gives me that ability without having to create a new branch. If you’re familiar with TFS, you can think of Git Stash as a shelveset. The best part is that I can manage all my stashes inside the Team Explorer window. They are easy to create and apply, and I’ve been using them a lot more now that Visual Studio natively supports them. Try Visual Studio 2019 These were just a few of the many small improvements found throughout Visual Studio 2019 that I find particularly useful. Please share any tips or improvements you’ve found helpful in the comments below!
https://devblogs.microsoft.com/visualstudio/little-great-things-about-visual-studio-2019/
Opened 9 years ago Closed 8 years ago Last modified 8 years ago #3012 closed Bugs (fixed) [function] Test failures with GCC 4.3/4.4 in C++0x mode Description If you run the function regression tests in GCC 4.3/4.4 in C++0x mode, you get a couple of failures as a result of 'using namespace' declarations in the tests making boost::function and std::function ambiguous. The attached patch fixes the failures. Attachments (1) Change History (4) Looks like VC10 Beta 1 fails the test for the same reason.
https://svn.boost.org/trac10/ticket/3012
Wraps the TF 1.x function fn into a graph function.

tf.compat.v1.wrap_function(
    fn, signature, name=None
)

The python function fn will be called once with symbolic arguments specified in the signature, traced, and turned into a graph function. Any variables created by fn will be owned by the object returned by wrap_function. The resulting graph function can be called with tensors which match the signature.

def f(x, do_add):
    v = tf.Variable(5.0)
    if do_add:
        op = v.assign_add(x)
    else:
        op = v.assign_sub(x)
    with tf.control_dependencies([op]):
        return v.read_value()

f_add = tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), True])

assert float(f_add(1.0)) == 6.0
assert float(f_add(1.0)) == 7.0

# Can call tf.compat.v1.wrap_function again to get a new trace, a new set
# of variables, and possibly different non-template arguments.
f_sub = tf.compat.v1.wrap_function(f, [tf.TensorSpec((), tf.float32), False])

assert float(f_sub(1.0)) == 4.0
assert float(f_sub(1.0)) == 3.0

Both tf.compat.v1.wrap_function and tf.function create a callable TensorFlow graph. But while tf.function runs all stateful operations (e.g. tf.print) and sequences operations to provide the same semantics as eager execution, wrap_function is closer to the behavior of session.run in TensorFlow 1.x. It will not run any operations unless they are required to compute the function's outputs, either through a data dependency or a control dependency. Nor will it sequence operations. Unlike tf.function, wrap_function will only trace the Python function once. As with placeholders in TF 1.x, shapes and dtypes must be provided to wrap_function's signature argument. Since it is only traced once, variables and state may be created inside the function and owned by the function wrapper object.
https://www.tensorflow.org/api_docs/python/tf/compat/v1/wrap_function?hl=pt-br
Calculating Task Periodic Rate I'm having a complete brain seizure here with what should be a simple math problem. I need to set the periodic rate of an embedded task so that it performs certain things at a particular rate. E.g. if the task has to perform 4 things at, say, 6Hz, 3Hz, 2Hz and 4Hz how do I calculate that the task must run at 12Hz?? (Accuracy is not super important here) -- No, this is not homework. A student could probably figure this out in less time than it has taken me to type the question in... Brain dead today Friday, June 18, 2004 You have to find the lowest common multiple. Take multiples of each number until you find one that all share. DJ Friday, June 18, 2004 6Hz, 3Hz, 2Hz and 4Hz

Step 1
6 = 2x3
3 = 3
2 = 2
4 = 2^2

Step 2
n = 2^2 x 3 = 12

And a note: if a process has a highest frequency F in its spectrum, you need to sample it at 2xF in order to be able to capture all events and reconstruct the original signal (Nyquist criterion/Shannon sampling theorem) Dino Friday, June 18, 2004 Take multiples (starting with 1) of the highest resolution number (6): 6x1%4 = 2, so 6x1 doesn't work. 6x2%4 = 0, so 6x2 (=12) works. Derek Friday, June 18, 2004

Example: 24 and 90
24 = 2^3 x 3
90 = 2 x 3^2 x 5
N = 2^3 x 3^2 x 5 = 360

Oops, I forgot to try all the other numbers from the problem (4, 3, 2). Continuing:
6x1%4 = 2, so no need to keep trying 6x1
6x2%4 = 0
6x2%3 = 0
6x2%2 = 0
...so 6x2 works. dorks. muppet is now from madebymonkeys.net Friday, June 18, 2004 The direct method for finding lowest common multiple is to use the prime factorization, i.e., 2, 3, 4, 6, have prime factorizations of: 2^1, 3^1, 2^2, 2^1*3^1. Take the primes: 2 and 3. Take the highest exponents present: 2^2, 3^1. Multiply = 12. Ryan Anderson Friday, June 18, 2004 But then you have to find the prime factorization first. I think the best way is via the greatest common divisor, and then use the equation gcd(a, b) * lcm(a, b) = a * b to calculate the lcm. I'd use Euler's method to calculate the gcd. 
In Python, but close enough to pseudocode for everyone to understand:

def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(b, a % b)

def lcm(a, b):
    return a*b/gcd(a, b)

For more than two numbers, note that lcm(a, b, c) = lcm(lcm(a, b), c) (and the same for more numbers) For four numbers, you could use: lcm(a, b, c, d) = lcm(lcm(a, b), lcm(c, d)) In your example with the functions defined above, Python gives lcm(lcm(6, 3), lcm(2, 4)) = 12 vrt3 Saturday, June 19, 2004 (errata: That's the Euclidean algorithm, not Euler's method) Thanks guys. vrt3 that's pretty much how I'm going to do it since GCD is simple to implement (I have to use C++) and I need to be able to change the number of rates and the rate values on the fly. Brain dead today Saturday, June 19, 2004
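In modern Python the whole calculation fits in a few standard-library lines (a sketch: integer floor division keeps the result an int, and functools.reduce folds lcm over any number of rates; Python 3.9+ also ships math.lcm directly):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    # lcm via the identity gcd(a, b) * lcm(a, b) == a * b
    return a * b // gcd(a, b)

def task_rate(rates):
    # The task must tick at the lowest common multiple of all its rates
    return reduce(lcm, rates)

print(task_rate([6, 3, 2, 4]))  # → 12
print(task_rate([24, 90]))      # → 360
```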
http://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=153814&ixReplies=10
So, I'm just starting to learn C++ and I'd like to start off by saying thank you to all those that created this site. It's been great and the guides are very helpful. :) During the course of trying out example code for arrays, I ran into an issue where the code doesn't terminate when expected. In fact, it just goes until my computer gives an error response. So my question is: given the code below, which is a copy of what I have entered and is almost a copy/paste from the example, why doesn't it terminate on its own? Thanks in advance. I know this is probably really basic stuff, but I only started yesterday. :)

Code:
#include <iostream>
using namespace std;

int main()
{
    int x;
    int y;
    int array[8][8];

    for ( x = 0; x < 8; x++ ) {
        for ( y = 0; y < 8; y++ )
            array[x][y] = x * y;
    }

    cout<<"Array Indices:\n";
    for ( x = 0; x < 8; x++ ) {
        for ( y = 0; x < 8; y++ )
            cout<<"["<<x<<"]["<<y<<"]="<< array[x][y] <<" ";
        cout<<"\n";
    }

    cin.get();
}
https://cboard.cprogramming.com/cplusplus-programming/136919-question-about-array-code-printable-thread.html
exp - exponential function #include <math.h> double exp(double x); The exp() function computes the exponential function of x, defined as e^x. An application wishing to check for error situations should set errno to 0 before calling exp(). If errno is non-zero on return, or the return value is NaN, an error has occurred. Upon successful completion, exp() returns the exponential value of x. If the correct value would cause overflow, exp() returns HUGE_VAL and sets errno to [ERANGE]. If the correct value would cause underflow, exp() returns 0 and may set errno to [ERANGE]. If x is NaN, NaN is returned and errno may be set to [EDOM]. The exp() function will fail if: - [ERANGE] - The result overflows. The exp() function may fail if: - [EDOM] - The value of x is NaN. - [ERANGE] - The result underflows. No other errors will occur. None. None. None. isnan(), log(), <math.h>. Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/exp.html
Pandas is a very useful data analysis library for Python. It can be very useful for handling large amounts of data. Unfortunately Pandas runs on a single thread, and doesn’t parallelize for you. And if you’re doing lots of computation on lots of data, such as for creating features for Machine Learning, it can be pretty slow depending on what you’re doing. To tackle this problem, you essentially have to break your data into smaller chunks, and compute over them in parallel, making use of the Python multiprocessing library. Let’s say you have a large Pandas DataFrame:

import pandas as pd

data = pd.DataFrame(...)  # Load data

And you want to apply() a function to the data like so:

def work(x):
    # Do something to x
    # return something

data = data.apply(work)

What you can do is break the DataFrame into smaller chunks using numpy, and use a Pool from the multiprocessing library to do work in parallel on each chunk, like so:

import numpy as np
from multiprocessing import cpu_count, Pool

cores = cpu_count()  # Number of CPU cores on your system
partitions = cores   # Define as many partitions as you want

def parallelize(data, func):
    data_split = np.array_split(data, partitions)
    pool = Pool(cores)
    data = pd.concat(pool.map(func, data_split))
    pool.close()
    pool.join()
    return data

And that’s it. Now you can call parallelize on your DataFrame like so:

data = parallelize(data, work)

Run it, and watch your system’s CPU utilization shoot up to 100%! And it should finish much faster, depending on how many cores you have. 8 cores should theoretically be 8x faster. Or you could fire up an AWS EC2 instance with 32 cores and run it 32x faster! 8 thoughts on “Parallelize Pandas map() or apply()” Hey, thanks for the explanation Adeel. I want to create a new column in my dataframe df which is generated by applying a function f to an existing column in df. This is embarrassingly parallel i.e. 
the function is applied to each row individually and independently to produce the new column, so each row is only dependent on itself and not on any other rows. My previous code (before parallelising) was:

df["new_column"] = df["existing_column"].apply(f)

But when I try to apply your code to parallelise, it gets stuck in some infinite loop because it never finishes. I even tried to properly define each sub chunk of the original df and create a test function which explicitly defines what I want to do… still not working. Any tips or help would be greatly appreciated! Thanks

# doesn't seem to work
import multiprocessing
from multiprocessing import Pool

num_partitions = 5
num_cores = multiprocessing.cpu_count()
# print(num_partitions, num_cores)

def parallelize_dataframe(df, func):
    a, b, c, d, e = np.array_split(df, num_partitions)
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, [a, b, c, d, e]))
    pool.close()
    pool.join()
    return df

def test_func(data):
    data["new_column"] = data["existing_column"].apply(f)
    return data

t0 = time.time()
test = parallelize_dataframe(df, test_func)
t1 = time.time()
print("running in parallel:", t1 - t0, "s")

Hello Killian, Sorry about the delayed reply, been really busy. Hmm, I’m not sure without seeing your dataframe or function “f”. Try the code below. It’s essentially what you pasted, but with a square function that’s used to apply to an existing column, to create the new column. Seems to work fine, and in parallel. BTW, make sure you call parallelize_dataframe() from this if statement: if __name__ == ‘__main__’: Edit: Note: the intermediate prints will look funny (overlapped). This is a good sign. It means it’s running in parallel. Hi Adeel, Thanks for the explanation. I am planning to read the large CSV file and load the data into an Oracle Database table in parallel. Could you guide me or send me the sample code to read the large CSV file as chunks and load into Oracle table in parallel. Thanks, Mark Hi Adeel. 
My code is entering into an infinite execution and is not yielding anything at all. Kindly contact me @kuldeep.gautam007@gmail.com as I can’t share the details here due to the privacy policy of the organisation I am working with. Thanks. Thank for the previous comments I found them very interesting. I have applied them for text processing, here a little example in github: Hi Adeel the pool.map action throws this exception: * ‘float’ object is not iterable any idea why this may happen? thanks Shay Hello Shay. It seems you’re passing a float to your pool.map(). The first argument should be the function that will be called. The second argument should be a Python list or numpy array, of arguments that are passed to the function through pool.map(). Hi Adeel thanks for your quick response I (think) I am passing the right arguments (name of function and list of DataFrames) and I have no idea where this float appeared from… Shay
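For readers who want a runnable starting point, here is the same pattern sketched with multiprocessing.dummy's thread-based Pool, which shares the Pool API but runs in any interpreter session without the __name__ == '__main__' guard mentioned in the comments (substitute multiprocessing.Pool for real multi-core execution; the column names are illustrative):

```python
import numpy as np
import pandas as pd
from multiprocessing.dummy import Pool  # thread-based Pool, same API as multiprocessing.Pool

def work(chunk):
    # Build a new column from an existing one, one chunk at a time
    chunk = chunk.copy()
    chunk["squared"] = chunk["value"] ** 2
    return chunk

def parallelize(data, func, partitions=4):
    # Split row positions, slice the frame, map over chunks, reassemble
    index_chunks = np.array_split(np.arange(len(data)), partitions)
    chunks = [data.iloc[ix] for ix in index_chunks]
    with Pool(partitions) as pool:
        return pd.concat(pool.map(func, chunks))

df = pd.DataFrame({"value": range(8)})
result = parallelize(df, work)
print(result["squared"].tolist())  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Splitting an index array rather than the DataFrame itself sidesteps version-specific quirks of passing pandas objects straight to np.array_split.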
http://blog.adeel.io/2016/11/06/parallelize-pandas-map-or-apply/
thanks for your suggestions. Actually let me be more specific about the technical issue here: I have a chemistry HTML tutorial I wish to include in a collection for the School. There are example text files which are linked to the main files via URL. I wish to include these in the collection without having to process them. That is to say, I place them in the import directory but do not wish for greenstone to process the example files beyond perhaps recognizing the internal link to these files. The problems are twofold. 1. greenstone is processing the files (example_xx.txt files) 2. The original functioning links now fail with the error indicated earlier. I will now try your approach but have 1 concern. Does using the -nolink HTMLPlug option mean that images will not be moved to the assoc directory? Then you would have to move the graphics content outside greenstone? ----- Original Message ----- From: "John R. McPherson" <jrm21@cs.waikato.ac.nz> To: "desiree' simon" <rjae-1@att.net> Cc: <greenstone@tripath.colosys.net> Sent: Tuesday, September 03, 2002 10:17 PM Subject: Re: How do I specify an internal http link across document > > desiree' simon wrote: > > > I want to be able to http-link one internal document to another. However, > > when I edit the html docs to include links of the forms: > > > > 1. href= > > > > or relative link > > > > 2. "href="examples/page/page.html" > > > > I am getting the error message > > > > "For reasons beyond our control the internal link you specify does not > > exist". > > > Questions: > > > > given two documents page_1.html and page_2.html. How do I > > specify an internal URL linking page_1 to page_2? > > Hi, > I can think of a couple of things to try: > > 1) > href= > > this won't work as it is looking for an internet server named "gsdl" > and then /collect on that server. You could try > but I don't think it's a good idea to link to the import directory. 
> Greenstone can handle internal links... > > 2) If you really want to give hard-coded links, edit your collect.cfg > file so that for HTMLPlug you include a certain option, like: > plugin HTMLPlug -nolinks > This means that greenstone won't do any interpretation of the links > and they will be displayed exactly as they are in the source documents. > > 3) You could use the "-file_is_url" option to HTMLPlug as above. > This is normally used when building a collection from a web mirror, > so the file might be called "" > etc. Internal links work for collections I've built when I mirrored > some of our university pages... > I don't know if it will work in your situation though. Let the list > know if it does! > > Hope this helps, > John McPherson >
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&d=000c01c25436$ede266c0$0801febf-tlclinux-org
Create a Basic CAP-Based Service You will learn - The difference between @sap/cds and @sap/cds-dk - How to install @sap/cds-dk globally - How to initialize a new CAP project - What the basic structure of a CAP-based service looks like - How to supply seed data in CSV form - How to start up a CAP service locally Prerequisites - You should start off in the same place as you were at the end of the previous tutorial – in VS Code, with your teched2019-mission-mock-service-bupa project still open. So you have a mock service running, and supporting V2 as well as V4 flavored responses to OData operation requests. Now it’s time to put together a second service that will eventually consume data from (make requests to) this mock service. We’ll use the SAP Cloud Application Programming Model for this second consumer service so we can take advantage of the powerful Core Data Services language to bridge local and remote data sources in service definitions. To keep things simple, the consumer service will be based on the simple bookshop model that you may have seen before, so that you can focus on the consumption parts you’ll eventually add and use. The first step is to create a new app using the cds command line tool which is part of the @sap/cds-dk package. While you’ve already used the @sap/cds package in the preceding tutorials in this mission, it’s been within the context of an individual project directory where @sap/cds was referenced locally. Node.js packages can be installed globally, too, and that’s what you’ll do now with @sap/cds-dk so that the cds command line client is available everywhere. Originally the Node.js package incarnation of CAP was in the form of a single top-level module @sap/cds. Today we also have @sap/cds-dk, where the “dk” refers to “development kit”. 
This is the package that you’ll want to install to develop with CAP, taking advantage of all the tools that it includes; in parallel there is @sap/cds which you can think of as the leaner “runtime” package. Execute the following commands in a command prompt (even one in an integrated terminal within VS Code will do). npm install -g @sap/cds-dk If there’s an older @sap/cds package already installed on the machine, you may have to remove it first; if so, you’ll be instructed to do so. To satisfy yourself that the install proceeded successfully, invoke the cds executable with the -v option and check that you get sensible output. Here’s an example of what that might look like (versions may be different):

$ cds -v
@sap/cds-dk: 1.4.1
@sap/cds: 3.21.0
@sap/cds-compiler: 1.21.1
@sap/cds-foss: 1.1.0
@sap/cds-messaging: 1.5.0
@sap/cds-reflect: 2.9.1
@sap/cds-rest: 1.3.0
@sap/cds-services: 1.22.0
@sap/generator-cds: 2.11.1
Node.js: v10.17.0
home: /home/qmacro/teched2019-mission-mock-service-bupa/node_modules/@sap/cds

With your freshly installed cds command line tool, you can now create a new CAP-based project, in the form of a new directory with various things preconfigured. Do this now, in your home directory or another directory where you have write access. To keep things together, we recommend you create this new project directory next to the mock service project directory you created in a previous tutorial in this mission. cds init consumer-app If you’ve used earlier versions of the cds init invocation you may remember the --modules switch. This is now deprecated. See the Start a New Project section of the CAP documentation for more details. This will emit output similar to the following: [cds] - creating new project in ./consumer-app done. Continue with 'cd consumer-app' Find samples on Learn about next steps at When the initialization process finishes, you will have a new consumer-app/ directory which you can now open up in VS Code. 
Do this either by creating a new top-level window in your running VS Code instance (with File | New Window) and then opening this new directory (with File | Open…), or simply by running the following (if your operating system allows this): code consumer-app You should end up with two VS Code top-level windows, one showing your teched2019-mission-mock-service-bupa project, and the other one showing this new consumer-app project, like this: Initializing the project created a number of empty directories, predominantly the following:

app/: for UI artifacts
db/: for the database level schema model
srv/: for the service definition layer

The first thing to do is create a database level definition. Create a file called schema.cds in the db/ directory (you may have seen this conventionally called data-model.cds in the past), and save the following contents into it:

namespace my.bookshop;

using cuid from '@sap/cds/common';

entity Books {
    key ID : Integer;
    title  : String;
    stock  : Integer;
    author : Association to Authors;
}

entity Authors {
    key ID : Integer;
    name   : String;
    books  : Association to many Books on books.author = $self;
}

entity Orders : cuid {
    book     : Association to Books;
    quantity : Integer;
}

Now add a service definition, in the form of a new file called service.cds in the srv/ directory, with the following contents (don’t forget to save!):

using my.bookshop as my from '../db/schema';

service CatalogService {
    entity Books as projection on my.Books;
    entity Authors as projection on my.Authors;
    entity Orders as projection on my.Orders;
}

So far so good. In data model definitions, entities are related through associations, which can be either unmanaged (where you have to specify the foreign key and join conditions yourself) or managed. You may have noticed that this line in the db/schema.cds file is highlighted as problematic:

using cuid from '@sap/cds/common';

That’s because the resource that’s being referred to is not available. 
This is the “common” definitions file that’s supplied with all CAP installs, and is to be found within the @sap/cds and @sap/cds-dk packages. This point, then, is a good time to install the dependencies for this project, which are defined in the package.json file in the dependencies section. Within VS Code, open a new terminal (choose “Terminal: Create new Integrated Terminal” from the Command Palette as in previous tutorials in this mission) and run the following: npm install This should complete in a short time, and you should notice a new directory in the project’s root, named node_modules/. This is where the installed dependencies (and their dependencies) have been placed. We’re going to be using SQLite as a persistence layer shortly, so this is also a good point to install the Node.js library for SQLite too. Install this as a “development dependency”, like this: npm install --save-dev sqlite3 The result of this will be that the package will be installed, and recorded as a development dependency in the package.json file (have a look) … this is as opposed to a runtime / production dependency. At the end of this step, the relevant sections in your package.json file should look something like this: "dependencies": { "@sap/cds": "^3", "express": "^4" }, and this: "devDependencies": { "sqlite3": "^4.1.1" } You can cross-reference this list with what npm thinks is installed, with the following command: npm list --depth=0 This should give you a top-level list (i.e. without nested dependencies) of the packages installed in this project. The output should look something like this (version numbers may be different): consumer-app@1.0.0 /home/qmacro/mission-temp/consumer-app ├── @sap/cds@3.21.0 ├── express@4.17.1 └── sqlite3@4.1.1 Seed data can be supplied in the form of CSV files, one for each entity type. This data will be loaded into the appropriate tables at the persistence layer when the cds deploy command is used. 
Create a new directory csv/ inside the db/ directory, and add three files, named as follows:

my.bookshop-Authors.csv
my.bookshop-Books.csv
my.bookshop-Orders.csv

Add the following CSV data sets into each of these corresponding new files:

my.bookshop-Authors.csv

ID,NAME
42,Douglas Adams
101,Emily Brontë
107,Charlotte Brontë
150,Edgar Allen Poe
170,Richard Carpenter

my.bookshop-Books.csv

ID,TITLE,AUTHOR_ID,STOCK
421,The Hitch Hiker's Guide To The Galaxy,42,1000
427,"Life, The Universe And Everything",42,95
201,Wuthering Heights,101,12
207,Jane Eyre,107,11
251,The Raven,150,333
252,Eleonora,150,555
271,Catweazle,170,22

my.bookshop-Orders.csv

ID,BOOK_ID,QUANTITY
7e2f2640-6866-4dcf-8f4d-3027aa831cad,421,15
64e718c9-ff99-47f1-8ca3-950c850777d4,271,9

To effect the loading of this seed data, run the following command in an integrated terminal within VS Code (ensure you’re in the project directory before you do):

cds deploy --to sqlite

You should see output similar to this:

> filling my.bookshop.Authors from db/csv/my.bookshop-Authors.csv
> filling my.bookshop.Books from db/csv/my.bookshop-Books.csv
> filling my.bookshop.Orders from db/csv/my.bookshop-Orders.csv
/> successfully deployed database to ./sqlite.db

Now you can start the service up …

npm start

… and explore the data that’s just been loaded, and the relationships between the items. Here are a few examples:
- The book orders:
- The authors and their books:
- Books that are low on stock:
At this point in the mission, you have a mocked SAP S/4HANA Business Partner service supplying address data, and a bookshop style service (to which you’ll eventually add a simple user interface), which will be extended to consume that address data and combine it with the bookshop order information.
https://developers.sap.com/tutorials/cap-cloudsdk-3-basic-service.html
19 October 2012 05:52 [Source: ICIS news] By Helen Yan SINGAPORE The market is also being weighed down by falling values of feedstock butadiene (BD), they said. On 18 October, BR prices fell by $50/tonne (€39/tonne) week on week to $2,650-2,750/tonne CFR (cost and freight) northeast (NE) Feedstock BD prices have also been under downward pressure in October. In the week ended 12 October, BD prices were assessed at an average of $1,900/tonne CFR NE Asia, down by about $100/tonne from mid-September, ICIS data showed. “We do not expect the [BR] market to improve, given the weak sentiment and poor economic data [in Growth in the world’s second biggest economy has continued to decelerate since the start of the year, after a 7.6% growth in the second quarter and an 8.1% expansion in the first, official data showed. “It looks like the BR market will most probably be flat in the fourth quarter as demand has been slower-than-expected in the region,” another Asian BR producer said. The World Bank has warned of slower economic expansion in The country’s territorial spat with BR is used in the production of tyres for the automotive industry. Chinese buyers have shunned buying Japan-branded vehicles because of the dispute, leading to a plunge in sales of Japanese cars. A number of Chinese BR producers have either shut or cut output in view of the poor market conditions. Dushanzi Petrochemical’s 30,000 tonne/year BR plant at In In The Society of Indian Automobile Manufacturers has tempered its growth expectations for car sales in the current financial year ending March 2013 to 1%-3% from its previous estimate of 9%-10%. “The market is really dull and has been very frustrating as sales of BR have been very slow,” a trader said.
http://www.icis.com/Articles/2012/10/19/9605378/weak-asia-br-to-persist-on-poor-demand-as-china-growth-slows.html
Created on 2011-05-06 22:53 by Rodrigo.Ventura, last changed 2011-05-07 12:14 by r.david.murray. This issue is now closed. Consider these two functions:

---
def nok():
    a = None
    def f():
        if a:
            a = 1
    f()

def ok():
    a = None
    def f():
        if a:
            b = 1
    f()
---

Function ok() executes fine, but function nok() triggers an exception:

Traceback (most recent call last):
  File "pb.py", line 20, in <module>
    nok()
  File "pb.py", line 7, in nok
    f()
  File "pb.py", line 5, in f
    if a:
UnboundLocalError: local variable 'a' referenced before assignment

There is no reason for this to happen. Regards, Rodrigo Ventura The reason is that in nok Python sees the assignment to a (a = 1) and determines that the 'a' variable is local to the scope of f, and since the assignment comes after the "if a:" and at that point 'a' has no value, an error is raised. In ok there's no assignment to 'a', so Python assumes that 'a' refers to the 'a' variable defined in the outer scope. See also Ezio, thank you for the explanation. Is it possible to access variable a in nok's scope from function f without using the global keyword in f? (so that variable a remains local to nok, rather than global to python) Rodrigo In 3.x, yes (the nonlocal keyword). Not in 2.7, though.
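The 3.x answer mentioned at the end looks like this in practice (a sketch applying nonlocal to the original nok(), plus a small counter to show the enclosing-scope binding at work):

```python
def nok():
    a = None
    def f():
        nonlocal a  # bind 'a' to the enclosing scope instead of making it local
        if a:
            a = 1
        return a
    return f()

print(nok())  # → None ('a' starts as None, so the branch is not taken)

def counter():
    n = 0
    def bump():
        nonlocal n  # without this, 'n += 1' raises UnboundLocalError
        n += 1
        return n
    return bump

b = counter()
print(b(), b(), b())  # → 1 2 3
```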
https://bugs.python.org/issue12023
It is a growing problem with Linux distributions that many packages they ship quickly become out of date, and due to the policies of how the Linux distributions are managed, the packages do not get updates. The only hope for getting a newer version of a package is to wait for the next version of the Linux distribution and hope that they include the version of the package you do want. In practice though, waiting for the next version of a Linux distribution is not something you can usually do, you have to go with what is available at the time, or even perhaps use an older version due to maturity or support concerns. Worse, once you do settle on a specific version of a Linux distribution, you are generally going to be stuck with it for many years to come. The end result is that although you can be lucky and the Linux distribution may at least update a package you need with security fixes, if popular enough, you will be out of luck when it comes to getting bug fixes or general improvements to the package. This has been a particular problem with major Python versions, the Apache web server, and also my own mod_wsgi package for Apache. At this time, many so called long term support (LTS) versions of Linux ship a version of mod_wsgi which is in practice about 5 years old and well over 20 releases behind. That older version, although it may have received one security fix which was made available, has not had other bugs fixed which might have security implications, or which can result in excessive memory use, especially with older Apache versions. The brave new world of Docker offers a solution to this because it makes it easier for users to install their own newer versions of packages which are important to them. It is therefore possible for example to even find official Docker images which provide Python 2.7, 3.2, 3.3 or 3.4, many more than the base Linux version itself offers. 
The problem with any Docker image which builds its own version of Python, however, is whether it followed the best practices for installing Python which have been developed over the years for the official base Linux versions of the package. There is also the problem of whether all the libraries required by modules in the Python standard library were installed. If such libraries aren't present, then the modules which require them will simply not be installed when compiling Python from source code; the installation of Python itself will not be aborted.

In this blog post I am going to cover some of the key requirements and configuration options which should be used when installing Python in a Docker image, so as to align it with the general practice of what is done by the base Linux distributions. I have encountered a number of service providers over the years which have had inferior Python installations which excluded certain modules or prevented the installation of certain third party modules, including the inability to install mod_wsgi. Unfortunately not all service providers seem to care about offering good options for users. Some just want to make anything available so they can tick Python off some list, without really caring how good an experience they provide for Python users.

Required system packages

Python is often referred to as 'batteries included'. This means that it provides a large number of Python modules in the standard library for a range of tasks. A number of these modules have a dependency on certain system packages being installed, otherwise that Python module will not be able to be installed. This is further complicated by the fact that Linux distributions will usually split up packages into a runtime package and a developer package.
For example, the base Linux system may supply the package allowing you to create a SQLite database and interact with it through a CLI, but it will not by default install the developer package which would allow you to build the 'sqlite3' module included in the Python standard library.

What the names of these required system packages are can vary based on the Linux distribution. Often people arrive at the list of the minimum packages which would need to be installed by a process of trial and error, after seeing which Python packages from the standard library hadn't been installed when compiling Python from source code. A better way is to try and learn from what the Python version provided with the Linux distribution does. On Debian we can do this by using the 'apt-cache show' command to list the dependencies for the Python packages. When we dig into the packages this way we find two key packages. The first of these is the 'libpython2.7-stdlib' package. This lists the dependencies:

Depends: libpython2.7-minimal (= 2.7.9-2), mime-support, libbz2-1.0, libc6 (>= 2.15), libdb5.3, libexpat1 (>= 2.1~beta3), libffi6 (>= 3.0.4), libncursesw5 (>= 5.6+20070908), libreadline6 (>= 6.0), libsqlite3-0 (>= 3.5.9), libssl1.0.0 (>= 1.0.1), libtinfo5

Within the 'python2.7-minimal' package we also find:

Depends: libpython2.7-minimal (= 2.7.9-2), zlib1g (>= 1:1.2.0)

In these two lists it is the library packages which we are concerned with, as it is for those that we need to ensure that the corresponding developer package is installed, so header files are available when compiling any Python modules which require that library. The command we can next use to try and determine what the developer packages are is the 'apt-cache search' command.
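As an aside, if you wanted to script this step, the 'Depends:' value printed by 'apt-cache show' is easy enough to reduce to bare package names. A minimal sketch, using a shortened copy of the dependency string quoted above:

```python
import re

# A fragment of the 'Depends:' value printed by 'apt-cache show', as
# quoted above; version constraints appear in parentheses.
depends = ("libpython2.7-minimal (= 2.7.9-2), mime-support, libbz2-1.0, "
           "libc6 (>= 2.15), libdb5.3, libexpat1 (>= 2.1~beta3)")

# Split on commas and strip any trailing version constraint.
packages = [re.sub(r'\s*\(.*\)$', '', entry.strip())
            for entry in depends.split(',')]

print(packages)
# -> ['libpython2.7-minimal', 'mime-support', 'libbz2-1.0',
#     'libc6', 'libdb5.3', 'libexpat1']
```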
Take for example the 'zlib1g' package:

# apt-cache search --names-only zlib1g
zlib1g - compression library - runtime
zlib1g-dbg - compression library - development
zlib1g-dev - compression library - development

The developer package we are interested in here is 'zlib1g-dev', which will include the header files we are looking for. We are not interested in 'zlib1g-dbg' as we do not need the debugging information for doing debugging with a C debugger, so we do not need versions of libraries including symbols. We can therefore go through each of the library packages and see what we can find. For Debian at least, the developer packages we are after have a '-dev' suffix added to the package name in some form. Do note though that the developer packages for some libraries may not have the version number in the package name. This is the case for the SSL libraries for example:

# apt-cache search --names-only libssl
libssl-ocaml - OCaml bindings for OpenSSL (runtime)
libssl-ocaml-dev - OCaml bindings for OpenSSL
libssl-dev - Secure Sockets Layer toolkit - development files
libssl-doc - Secure Sockets Layer toolkit - development documentation
libssl1.0.0 - Secure Sockets Layer toolkit - shared libraries
libssl1.0.0-dbg - Secure Sockets Layer toolkit - debug information

For this we would use just 'libssl-dev'.
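The selection rule being applied by eye here, take the '-dev' package and skip the '-dbg' and unrelated binding packages, can itself be expressed in a few lines if you wanted to automate the search. A rough sketch over the 'apt-cache search' output shown above; the shortest '-dev' name tends to be the base library rather than language bindings such as the OCaml ones:

```python
# Output of "apt-cache search --names-only libssl", reproduced from above.
search_output = """\
libssl-ocaml - OCaml bindings for OpenSSL (runtime)
libssl-ocaml-dev - OCaml bindings for OpenSSL
libssl-dev - Secure Sockets Layer toolkit - development files
libssl-doc - Secure Sockets Layer toolkit - development documentation
libssl1.0.0 - Secure Sockets Layer toolkit - shared libraries
libssl1.0.0-dbg - Secure Sockets Layer toolkit - debug information"""

# Keep package names ending in '-dev' and prefer the shortest match.
candidates = [line.split()[0] for line in search_output.splitlines()
              if line.split()[0].endswith('-dev')]
print(sorted(candidates, key=len)[0])  # -> libssl-dev
```

This is only a heuristic of course; for anything unusual you would still eyeball the search output as done in the post.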
Running through all these packages, the list of developer packages we likely need to have installed in order to be satisfied that we can build all Python packages included as part of the Python standard library is:

libbz2-1.0 ==> libbz2-dev
libc6 ==> libc6-dev
libdb5.3 ==> libdb-dev
libexpat1 ==> libexpat1-dev
libffi6 ==> libffi-dev
libncursesw5 ==> libncursesw5-dev
libreadline6 ==> libreadline-dev
libsqlite3-0 ==> libsqlite3-dev
libssl1.0.0 ==> libssl-dev
libtinfo5 ==> libtinfo-dev
zlib1g ==> zlib1g-dev

Having worked out what developer packages we will likely need for all the possible libraries that modules in the Python standard library may require, we can construct the appropriate command to install them.

apt-get install -y libbz2-dev libc6-dev libdb-dev libexpat1-dev \
    libffi-dev libncursesw5-dev libreadline-dev libsqlite3-dev libssl-dev \
    libtinfo-dev zlib1g-dev --no-install-recommends

Note that we only need to list the developer packages. Even if the base Docker image we used for Debian didn't provide the runtime variant of the packages, the developer packages express a dependency on the runtime package and so it will also be installed. Although we want such hard dependencies, we don't want suggested related packages being installed, and so we use the '--no-install-recommends' option to 'apt-get install'. This is done to cut down on the number of unnecessary packages being installed. Now it may be the case that not all of these are strictly necessary, as the Python modules requiring them might never be used in the types of applications we may want to run inside of a Docker container, but once you install Python you can't add in any extra Python module from the Python standard library after the fact. The only solution would be to reinstall Python again. So it is better to err on the side of caution and add everything that the Python package provided with the Linux distribution lists as a dependency.
If you wanted to try and double check whether they are required, by working out which Python modules in the standard library actually require them, you can consult the 'Modules/Setup.dist' file in the Python source code. This file lists the C based Python extension modules and what libraries they require to be available and linked to the extension module when compiled. For example, the entry in the 'Setup.dist' file for the 'zlib' Python module, which necessitates the availability of the 'zlib1g-dev' package, is:

#zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

Configure script options

Having worked out what packages we need to install into the Linux operating system itself, we next need to look at what options should be supplied to the Python 'configure' script when building it from source code. For this, we could go search out where the specific Linux operating system maintains their packaging scripts for Python and look at those, but there is actually an easier way. This is because Python itself will save away the options supplied to the 'configure' script and keep them in a file as part of the Python installation. We can either go in and look at that file, or use the 'distutils' module to interrogate the file and tell us what the options were. You will obviously need to have Python installed in the target Linux operating system to work it out. You will also generally need to have both the runtime and developer variants of the Python packages. For Debian for example, you will need to have run:

apt-get install python2.7 python2.7-dev

The developer package for Python is required as it is that package which contains the file in which the 'configure' args are saved away. With both packages installed, we can now from the Python interpreter do:

# python2.7
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from distutils.sysconfig import get_config_var
>>> print get_config_var('CONFIG_ARGS')

On Debian and Python 2.7 this yields:

'--enable-shared' '--prefix=/usr' '--enable-ipv6' '--enable-unicode=ucs4' '--with-dbmliborder=bdb:gdbm' '--with-system-expat' '--with-system-ffi' '--with-fpectl' 'CC=x86_64-linux-gnu-gcc' 'CFLAGS=-D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security ' 'LDFLAGS=-Wl,-z,relro'

There are a couple of key options here to highlight, which need to be separately discussed. These, along with their help descriptions from the 'configure' script, are:

--enable-shared ==> Disable/enable building shared python library
--enable-unicode[=ucs[24]] ==> Enable Unicode strings (default is ucs2)

Shared Python library

When you compile Python from source code, there are three primary by-products. First, there are all the Python modules which are part of the standard library. These may be pure Python modules, or may be implemented as or using C extension modules. The C extension modules are dynamically loadable object files which would only be loaded into a Python application if required. There is then the Python library, which contains all the code which makes up the core of the Python interpreter itself. Finally, there is the Python executable itself, which is run on the main script file for your Python application or when running up an interactive interpreter.

For the majority of users, that there is a Python library is irrelevant, as that library would also be statically linked into the Python executable. That the Python library exists is only due to the needs of the subset of users who want to embed the Python interpreter into an existing application. Now there are two ways that embedding of Python may be done. The first is that the Python library would be linked directly with the separate application executable when it is being compiled.
The second is that the Python library would be linked with a dynamically loadable object, which would then in turn be loaded dynamically into the separate application. For the case of linking the Python library with the separate application, static linking of the library can be used. Where creating a dynamically loadable object which needs the Python library, things get a bit trickier as trying to link a library statically with a dynamically loadable object will not always work or can cause problems at runtime. This is a problem which used to plague the mod_python module for Apache many years ago. All Linux distributions would only ship a static variant of the Python library. Back then everything in the Linux world was 32 bit. For the 32 bit architecture at the time, static linking of the Python library into the dynamically loadable mod_python module for Apache would work and it would run okay, but linking statically had impacts on the memory use of the Python web application process. The issue in this case was that because it was a static library being embedded within the module and the object code also wasn't being compiled as position independent code, the linker had to do a whole lot of fix ups to allow the static code to run at whatever location it was being loaded. This had the consequence of effectively creating a separate copy of the library in memory for each process. Even back then the static Python library was about 5-10MB in size, the result being that the web application processes were about that much bigger in their memory usage than they needed to be. This resulted in mod_python getting a bit of a reputation of being a memory hog, when part of the problem was that the Python installation was only providing a static library. I will grant that the memory issues with mod_python weren't just due to this. 
The mod_python module did have other design problems which caused excessive memory usage as well, plus Apache itself was causing some of it through how it was designed at the time, or how some of Apache's internal APIs used by mod_python worked. On the latter point, mod_wsgi as a replacement for mod_python has learnt from all the problems mod_python experienced around excessive memory usage, and so doesn't suffer the memory usage issues that mod_python did. If using mod_wsgi however, do make sure you are using the latest mod_wsgi version. Those 5 year old versions of mod_wsgi that some LTS variants of Linux ship, especially if Apache 2.2 is used, do have some of those same memory issues that mod_python was affected by in certain corner cases. In short, no one should be using mod_wsgi 3.X any more; use instead the most recent versions of mod_wsgi 4.X and you will be much better off. Alas, various hosting providers still use mod_wsgi 3.X and don't offer a more modern version. If you can't make the hosting provider provide a newer version, then you really should consider moving to one of the newer Docker based deployment options where you can control what version of mod_wsgi is installed as well as how it is configured.

Now although one could still manage with a static library back when 32 bit architectures were what was being used, this became impossible when 64 bit architectures were introduced. I can't say I remember or understand the exact reason, but when 64 bit Linux was introduced, attempting to link a static Python library into a dynamically loadable object would fail at compilation link time. The cryptic error message you would get would suggest some issue related to mixing of 32 and 64 bit code, typically a complaint that a relocation in the object code of the static library can not be used when making a shared object, along with a suggestion to recompile with '-fPIC'. This error derives from those fix ups I mentioned before to allow the static code to run in a dynamically loadable object.
What was previously possible for just 32 bit object code, was now no longer possible under the 64 bit Linux systems of the time. In more recent times with some 64 bit Linux systems, it seems that static linking of libraries into a dynamically loadable object may again be possible, or at least the linker will not complain. Even so, where I have seen it being done with 64 bit systems, the user was experiencing strange runtime crashes which went away when steps were taken to avoid static linking of the Python library. So static linking of the Python library into a dynamically loadable object is a bad idea, causing either additional memory usage, failing at link time, or potentially crashing at run time. What therefore is the solution? The solution here is to generate a shared version of the Python library and link that into the dynamically loadable object. In this case all the object code in the Python library will be what is called position independent to begin with and so no fix ups are needed which cause the object code from the library to become process local. Being a proper shared library also now means that there will only be one copy of the code from the Python library in memory across the whole operating system. That is, all processes within the one Python web application will share as common memory space the object code. That isn't all though, as any separate Python applications you start would also share that same code from the Python library in memory. The end result is a reduction in the amount of overall system memory used. Use of a shared library for Python therefore enables applications which want to embed Python via a dynamically loadable object to actually work and has the benefits of cutting down memory usage by applications that use the Python shared library. 
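If you want to check whether a given Python installation was built this way, you don't have to go digging through build logs, as the interpreter records it. A quick sketch using the 'sysconfig' module, available in Python 2.7 and 3.x; note that on some platforms the variable may simply be absent, in which case 'get_config_var' returns None:

```python
import sysconfig

# Py_ENABLE_SHARED is 1 when Python was configured with '--enable-shared',
# 0 for a purely static build (None if the platform doesn't record it).
shared = sysconfig.get_config_var('Py_ENABLE_SHARED')
print('shared library build' if shared else 'static library build')
```

This reports how the Python executable you are running was linked, which as discussed below is not necessarily the same thing as whether a shared Python library exists on the system at all.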
Although a better solution, when you compile and install Python from source code, the creation of a shared version of the Python library isn't the default, only a static Python library will be created. In order to force the creation of a shared Python library you must supply the '--enable-shared' option to the 'configure' script for Python when it is being built. This therefore is the reason why that option was appearing in the 'CONFIG_ARGS' variable saved away and extracted using 'distutils'. You would think that since the providing of a shared library for Python enables the widest set of use cases for using Python, be they through running the Python executable directly, or by embedding, that this is the best solution. Even though it does work, you will find some who will deride the use of shared libraries and say it is a really bad idea. The two main excuses I have heard from people pushing back on the suggestion of using '--enable-shared' when Python is being built are: - That shared libraries introduce all sorts of obscure problems and bugs for users. - That position independent object code from a shared library when run is slower. The first excuse I find perplexing and actually indicates to a degree a level of ignorance about how shared libraries are used and also how to manage such things as the ability of an application to find the shared library at runtime. I do acknowledge that if an application using a shared library isn't built properly that it may fail to find that shared library at runtime. This can come about as the shared library will only actually be linked into the application when the application is run. To do that it first needs to find the shared library. Under normal circumstances shared libraries would be placed into special ordained directories that the operating system knows are valid locations for shared libraries. So long as this is done then the shared library will be found okay. 
The problem is when a shared library is installed into a non standard directory and the application, when compiled, wasn't embedded with the knowledge of where that directory is, or if it was, the whole application and library were installed at a different location to where they were originally intended to reside in the file system. Various options exist for managing this if you are trying to install Python into a non standard location, so it isn't a hard problem. Some still seem to want to make a bigger issue out of it than it is though. As to the general complaint of shared libraries causing other obscure problems and bugs, as much as this was raised with me, no one offered up concrete examples to support that claim.

For the second claim, there is some technical basis to the criticism, as position independent code will indeed run ever so slightly slower where it involves calling C code functions compiled as position independent code. In general the difference is going to be so minimal as not to be noticeable, only perhaps affecting heavily CPU bound code. To also put things in context, all the Python modules in the standard library which use a C extension module will be affected by this overhead regardless, as they must be compiled as position independent code in order to be able to be dynamically loaded on demand. The only place therefore where this can be seen as an issue is in the Python interpreter core, which is what the code in the Python library implements. Thus the CPU bound code would also need to principally be pure Python code.
The difference in execution time between the position independent code of a shared library and that of a static library, in something like a web application, is going to be at the level of background noise and not noticeable. Whether this is really an issue or just a case of premature optimisation will really depend on the use case. Either way, if you want to use Python in an embedded system where Python needs to be linked into a dynamically loadable object, you don't have a choice, you have to have a shared library available for Python. What the issue really comes down to is what the command line Python executable does and whether it is linked with a shared or static Python library. In the default 'configure' options for Python, it will only generate a static library so in that case everything will be static. When you do use '--enable-shared', that will generate a shared library, but it will also result in the Python executable linking to that shared library. This therefore is the contentious issue that some like to complain about. Possibly to satisfy these arguments, what some Linux distributions do is try and satisfy both requirements. That is, they will provide a shared Python library, but still also provide a static Python library and link the static Python library with the Python executable. On a Linux system you can verify whether the Python installation you use is using a static or shared library for the Python executable by looking at the size of the executable, but also by running 'ldd' on the executable to see what shared libraries it is dependent on. If statically linked, the Python executable will be a few MB in size and will not have a dependency on a shared version of the Python library. 
# ls -las /usr/bin/python2.7
3700 -rwxr-xr-x 1 root root 3785928 Mar 1 13:58 /usr/bin/python2.7

# ldd /usr/bin/python2.7
	linux-vdso.so.1 (0x00007fff84fe5000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f309388d000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3093689000)
	libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f3093486000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f309326b000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3092f6a000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3092bc1000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f3093aaa000)

Jumping into the system library directory on a Debian system using the system Python installation, we see however that a shared Python library does still exist, even if the Python executable isn't using it.

# ls -o libpython2.7.*
lrwxrwxrwx 1 root      51 Mar 1 13:58 libpython2.7.a -> ../python2.7/config-x86_64-linux-gnu/libpython2.7.a
lrwxrwxrwx 1 root      17 Mar 1 13:58 libpython2.7.so -> libpython2.7.so.1
lrwxrwxrwx 1 root      19 Mar 1 13:58 libpython2.7.so.1 -> libpython2.7.so.1.0
-rw-r--r-- 1 root 3614896 Mar 1 13:58 libpython2.7.so.1.0

Hopefully in this case then both sides of the argument are happy. The command line Python executable will run as fast as it can, yet the existence of the shared library still allows embedding. Now in the case of Debian, they are doing all sorts of strange things to ensure that libraries are located in the specific directories they require. This is done in the Debian specific packaging scripts. The question then is whether providing both variants of the Python library can be easily done by someone compiling directly from the Python source code. The answer is that it is possible, albeit that you will need to build Python twice. When it comes to installing Python after it has been built, you just need to be selective about what is installed.
Normally the process of building and installing Python would be to run the following in the Python source code directory:

./configure --enable-shared
make
make install

Note that I have left out most of the 'configure' arguments just to show the steps. I have also ignored the issue of whether you have rights to install to the target location. These commands will build and install Python such that a shared library for Python is created and the Python executable links to that shared library. If we want to have both a static and a shared library, and for the Python executable to use the static library, we can instead do:

./configure --enable-shared
make
make install
make distclean
./configure
make
make altbininstall

What we are doing here is build Python twice, first with the shared library enabled and then with just the static library. In the first case we will install everything and set up a fully functional Python installation. In the second case however, we will only trigger the 'altbininstall' target and not the 'install' target. When the 'altbininstall' target is used, all that will be installed is the static library for Python and the Python executable linked with the static library. In doing this, the existing Python executable using the shared library will be overwritten by the statically linked one. The end result is a Python installation which is a combination of two installs. A shared library for Python for embedding, but also a statically linked Python executable for those who believe that the time difference in the execution of position independent code in the interpreter core is significant enough to be of a concern, and so desire speed over anything else.

Unicode character sets

The next option to 'configure' which needs closer inspection is the '--enable-unicode' option. The name of the option is a bit misleading, as Unicode support is these days always compiled into Python.
What is being configured with this option is the number of bytes in memory which are to be used for each Unicode character. By default 2 bytes will be used for each Unicode character. Although 2 bytes is the default, traditionally the Python installations shipped by Linux distributions will always enable the use of 4 bytes per Unicode character. This is why the option to 'configure' is actually '--enable-unicode=ucs4'. Since Unicode will always be enabled by default, this option actually got renamed in Python 3.0 and is replaced by the '--with-wide-unicode' option. After the rename, the supplying of the option enables the use of 4 bytes, the same as if '--enable-unicode=ucs4' had been used. The option disappears entirely in Python 3.3, as Python itself from that version will internally determine the appropriate Unicode character width to use. You can read more about that change in PEP 393. Although Python can quite happily be built for either 2 or 4 byte Unicode characters prior to Python 3.3, the reason for using a Unicode character width the same as what the Python version supplied by the Linux distribution uses, is that prior to Python 3.3, what width was chosen affected the Python ABI. Specifically, functions related to Unicode at the C code level in the Python library would be named with the character width embedded within the name. That is, the function names would embed the string 'UCS2' or 'UCS4'. Any code in an extension module would use a generic name, where the mapping to the specific function name was achieved through the generic name actually being a C preprocessor macro. The result of this was that C extension modules had references to functions in the Python library that actually embedded the trait of how many bytes were being used for a Unicode character. 
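You can check which width a given interpreter was built with by inspecting 'sys.maxunicode'. A narrow (UCS-2) build reports 0xFFFF, a wide (UCS-4) build reports 0x10FFFF, and from Python 3.3 onwards (PEP 393) the value is always 0x10FFFF regardless of build options:

```python
import sys

# 0xFFFF on narrow (UCS-2) builds, 0x10FFFF on wide (UCS-4) builds.
# Python 3.3+ always reports 0x10FFFF.
if sys.maxunicode == 0x10FFFF:
    print('wide Unicode (UCS-4) build')
else:
    print('narrow Unicode (UCS-2) build')
```

On a Debian system Python installation as used in this post, this should report a wide build, matching the '--enable-unicode=ucs4' option seen in 'CONFIG_ARGS'.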
Where this can create a problem is where a binary Python wheel is created on one Python installation and then an attempt is made to install it within another Python installation where the configured Unicode character width was different. The consequence of doing this would be that the Unicode functions the extension module required would not be able to be found, as they would not exist in the Python library which had been compiled with the different Unicode character width. It is therefore important when installing Python to always define the Unicode character width to be the same as what would traditionally be used for Python installations on that brand of Linux and architecture. By doing that you ensure that a binary Python wheel compiled with one Python installation should always be able to be installed into a different Python installation of the same major/minor version on a similar Linux system. For Linux systems, as evidenced by the '--enable-unicode=ucs4' option being used here with Python 2.7, wide Unicode characters are always used. This isn't the default though, so the appropriate option does always need to be passed to 'configure' when run.

Optional system libraries

What system packages for libraries needed to be installed was determined by looking at what packages were listed as dependencies of the system Python package. Of these, there are two which technically are optional. These are the packages for the 'expat' and 'ffi' libraries. The reason these are optional is that the Python source contains its own copies of the source code for these libraries. Unless you tell the 'configure' script, by way of the '--with-system-expat' and '--with-system-ffi' options, to actually use the versions of these libraries installed by the system packages, then the builtin copies will instead be compiled and used.
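Whichever way Python ended up being built, you can at least ask the 'pyexpat' module which expat it was compiled against. A quick check; the exact version string shown in the comment is just an example:

```python
import pyexpat

# EXPAT_VERSION is the version string of the expat library that the
# 'pyexpat' extension module was compiled against, e.g. 'expat_2.1.0'.
print(pyexpat.EXPAT_VERSION)
print(pyexpat.version_info)
```

If the reported version matches the system 'libexpat1' package rather than the copy bundled in the Python source tree, then '--with-system-expat' took effect.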
Once upon a time, using the copy of 'expat' bundled with the Python source code could cause a lot of problems when trying to use Python embedded within another application such as Apache. This was because Apache would link to and use the system package for the 'expat' library while Python used its own copy. Where this caused a problem was when the two copies of the 'expat' library were incompatible. The functions in the copy loaded by Apache could in some cases be used in preference to that built in to Python, with a crash of the process occurring when expected structure layouts were different between the two versions of 'expat'. This problem came about because Python did not originally namespace all the functions exported by the 'expat' library in its copy. You therefore had two copies of the same function and which was used was dependent on how the linker resolved the symbols when everything was loaded. This was eventually solved by way of Python adding a name prefix on all the functions exported by its copy of the 'expat' library so that it would only be used by the Python module for 'expat' which wrapped it. Apache would at the same time use the system version of the 'expat' library. These days it is therefore safe to use the copy of the 'expat' library bundled with Python and the '--with-system-expat' option can be left out, but as the system version of 'expat' is likely going to be more up to date than that bundled with Python, using the system version would still be preferred. The situation with the 'ffi' library is similar to the 'expat' library, in that you can either use the bundled version or the system version. I don't actually know whether the 'ffi' library has to contend with the same sorts of namespace issues. It does worry me that on a quick glance I couldn't see anything in the bundled version where an attempt was made to add a name prefix to exported functions. 
Even though it may not be an issue, it would still be a good idea to use the system version of the library to make sure no conflicts arise where Python is embedded in another application.

Other remaining options

The options relating to the generation of a shared library, Unicode character width and system versions of libraries are the key options which you want to pay attention to. What other options should be used can depend a bit on what Linux variant is being used. With more recent versions of Docker now supporting IPv6, including the '--enable-ipv6' option when running 'configure' is also a good bet in case a user has a need for IPv6. Other options may relate more to the specific compiler tool chain or hardware being used. The '--with-fpectl' option falls into this category. In cases where you don't specifically know what an option does, it is probably best to include it.

Beyond getting the installation of Python itself right, being a Docker image, where the space consumed by the image itself is often a concern, one could also consider steps to trim a little fat from the Python installation. If you want to go to such lengths, there are two things you can consider removing from the Python installation. The first of these is all the 'test' and 'tests' subdirectories of the Python standard library. These contain the unit test code for testing the Python standard library itself. It is highly unlikely you will ever need these in a production environment. The second is the compiled byte code files with '.pyc' and '.pyo' extensions. The intent of these files is to speed up application loading, but given that a Docker image is usually going to be used to run a persistent service which stays running for the life of the container, these files only come into play once, at startup.
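The pruning itself amounts to deleting matching files and directories under the installation prefix. A sketch of the idea, run here against a throwaway mock installation so the paths are harmless; in a real image you would point it at something like '/usr/local/lib/python2.7' instead:

```python
import pathlib
import shutil
import tempfile

# Build a tiny mock 'installation' so the pruning can be demonstrated safely.
root = pathlib.Path(tempfile.mkdtemp())
(root / "json").mkdir()
(root / "json" / "decoder.py").write_text("# module source\n")
(root / "json" / "decoder.pyc").write_bytes(b"\x00")
(root / "test").mkdir()
(root / "test" / "test_grammar.py").write_text("# unit test\n")

# Drop compiled byte code files.
for pattern in ("*.pyc", "*.pyo"):
    for path in list(root.rglob(pattern)):
        path.unlink()

# Drop the 'test' and 'tests' subdirectories.
for name in ("test", "tests"):
    for path in list(root.rglob(name)):
        if path.is_dir():
            shutil.rmtree(path)

print(sorted(p.name for p in root.rglob("*")))  # -> ['decoder.py', 'json']
```

The equivalent in a Dockerfile would typically be a 'find ... -delete' step in the same RUN instruction as the build, so the removed files never land in a layer.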
You may well feel that the reduction in image size is more beneficial than the very minor overhead which would be incurred due to the application needing to parse the source code on startup, rather than being able to load the code as compiled byte code. The removal of the '.pyc' and '.pyo' files will no doubt be a contentious issue, but for some types of Python applications, such as a web service, it may be quite a reasonable thing to do.

9 comments:

I saw the "--enable-unicode=ucs4" option on a docker image a few days ago and thought that seemed unnecessary. Now I know why. Thanks!

Very nice post, especially as there's not much information available on this subject. It might be helpful to set RPATH by passing LDFLAGS="-Wl,-rpath," to the configure script so as to prevent loading of the wrong dynamic libs when invoking python without LD_LIBRARY_PATH set properly. Otherwise one might be surprised, for instance, to see the system python being run after invoking a freshly built python executable. This does not apply when the python executable is compiled with static libraries of course (which is the case described in the post).

@Piotr You are quite correct. If I am building my own on Linux and not bothering with the static linking part and relying only on dynamic libraries, I will always set the environment variable 'LD_RUN_PATH' at compile time with the library directory location. I use the 'LD_RUN_PATH' environment variable rather than 'LDFLAGS' as I have had bad experiences in the distant past with trying to use linker flags, with it causing problems with existing linker flags already trying to set the RPATH. This may well not be an issue these days, but old habits die hard when you find a way that reliably works.
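Pulling together the configure options from the post and the two RPATH approaches from the comments, the invocations being compared look something like the following. The paths and the exact option set are illustrative only, not gospel:

```shell
# Option 1: bake an RPATH in via the linker flags at configure time.
./configure --prefix=/usr/local --enable-shared --enable-ipv6 \
    --enable-unicode=ucs4 --with-system-expat --with-system-ffi \
    LDFLAGS='-Wl,-rpath,/usr/local/lib'
make

# Option 2: leave LDFLAGS untouched and set LD_RUN_PATH for the build
# instead; the linker picks it up at link time without disturbing any
# linker flags already in play.
./configure --prefix=/usr/local --enable-shared --enable-ipv6 \
    --enable-unicode=ucs4 --with-system-expat --with-system-ffi
LD_RUN_PATH=/usr/local/lib make
```

These are configure invocations rather than something you would run verbatim; adjust the prefix and option set to your own build.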
:-) Accidentally when compiling Python 2.7.10 after ./configure --prefix /usr/local --enable-shared LDFLAGS="-Wl,-rpath,/usr/local/lib" I saw that the _io module was not built (*** WARNING: renaming "_io" since importing it failed: build/lib.linux-x86_64-2.7/_io.so: undefined symbol: _PyErr_ReplaceException). Setting LD_RUN_PATH instead of passing LDFLAGS resulted in the same error. Now the important thing is that I had previously configured Python 2.7.8 with prefix /usr/local and installed it. Retrying without -Wl,-rpath,/usr/local/lib fixed the problem. Alternatively removing files pertaining to python (old Python 2.7.8) from /usr/local/lib fixed the problem as well. I'm curious what was happening. It looks like passing LDFLAGS="-Wl,-rpath,/usr/local/lib" made something during build/link time use old shared libs of Python 2.7.8 from within /usr/local/lib instead of newly built ones. Something like this may well be the catalyst for me using LD_RUN_PATH instead. From memory, when building Python with a shared library it uses an RPATH to the source directory itself using a relative path so it can find the shared library before it is installed. When you use LDFLAGS it may have been adding that for /usr/local/lib before that, so when trying to run the freshly built Python as part of the build process to do things, it is picking up the wrong, older, shared library and failing. I recollect that the new io subsystem may have been back ported to Python 2.7 in a minor patch revision and why the symbol is undefined in the older library. In other words, the LD_RUN_PATH is applied at a lower priority so to speak and so only gets checked after everything set up by LDFLAGS. Actually one should probably use $ORIGIN for rpath by default so that the compiled Python could be moved to any location afterwards and keep working. 
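For the $ORIGIN variant mentioned above, the idea is to make the RPATH relative to the installed python binary's own location, so the whole installation can be moved and still find its shared library. The escaping shown is only indicative, since the $ has to survive both the shell and make:

```shell
# RPATH resolved relative to the python executable itself (config
# fragment; the exact escaping of $ORIGIN varies with shell and make).
./configure --prefix=/usr/local --enable-shared \
    LDFLAGS='-Wl,-rpath,\$$ORIGIN/../lib'
make
```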
The only problem is that this might lead to problems like this one – @Piotr In the context of a Docker container at least, the need for having the Python installation be relocatable is probably unlikely to arise. :-) I was trying to say that unless there are compelling reasons not to one should by _default_ use $ORIGIN. That this will not probably bring any value while inside Docker container is not that important. Think of it as a good habit similar as using rpath in the first place is also kind of good habit :) It's similar to the question of “why bother with a virtualenv if I’m already in a container” –. Also "Why people create virtualenv in a docker container?" – On the issue of using virtualenv in a Python container, it is actually very important to still use one if using a system Python installation, or one from a package collection such as Red Hat Software Collections for RHEL/CentOS. I have blogged about those issues in As to always using $ORIGIN, I am just wary of using magic in the path resolution for shared libraries as have been burnt by various problems with that on MacOS X. I know that is not Linux and how it works is a bit different, but have learnt from such bad experiences that prefer explicit rather than dynamic. So for Docker at least would take opposite view that unless someone can show me a valid need for it, I wouldn't do it that way. Outside of Docker, it may well be valid, especially for people who manage Python installation and packaging themselves to deploy out to their own hosts.
http://blog.dscpl.com.au/2015/06/installing-custom-python-version-into.html
The Video

Parts 1-3 of the tutorial were designed to take you step-by-step through designing the app. If you are wondering what you missed, here is a summary:
This is what you'll find in part one:
- Downloading and setting up the Android SDK
- Downloading the Processing IDE
- Setting up and preparing the Android device
- Running through a couple of Processing/Android sketches on an Android phone.
This is what you will find in part two:
- Introducing Toasts (display messages)
- Looking out for BluetoothDevices using BroadcastReceivers
- Getting useful information from a discovered Bluetooth device
- Connecting to a Bluetooth Device
- An Arduino Bluetooth Sketch that can be used in this tutorial
This is what you will find in part three:
- InputStreams and OutputStreams
- Error Logs using logcat
- Testing the InputStreams and OutputStreams
- Using the APWidgets library to create buttons
- Adding Buttons to the BlueTooth Project
In Part 4, we simplify and strip down the app so that it only sends a specific String to the Arduino via Bluetooth. The String sent to the Arduino depends on the button being pressed. The code has been cleaned up and has many comments to help you understand what is going on. You should be able to run this sketch without having to go back through parts one, two or three of the tutorial. This fourth part of the tutorial was designed for those people who want the final end product, and are happy to work it out for themselves. I hope this serves you well. This part builds on parts one, two and three of this tutorial. I will therefore assume that you have already set up your phone and have downloaded all the necessary drivers, libraries, SDKs and IDEs.
If not, then here are a few quick links:

Make sure that you have selected the Bluetooth permissions as per the following:
- Android > Sketch permissions (as per the picture below)
Make sure that BLUETOOTH and BLUETOOTH_ADMIN are selected (as per the picture below). Then press the OK button. Then copy and paste the following sketch into the Processing/Android IDE:

Android/Processing Sketch 9: Bluetooth App2

Here is a picture of the components used in this sketch:
- Arduino (Freetronics ELEVEN) with the
- Bluetooth Shield in place, and a
- Grove RGB chainable LED
- attached using a Grove Universal 4pin cable.
Please take notice of the jumper pin placement on the Bluetooth Shield. This ensures communication between the Arduino and Bluetooth Shield, and is reflected in the Arduino code further down this page. The Arduino transmits information to the Bluetooth Shield on digital pin 7, and therefore the Bluetooth Shield receives information from the Arduino on digital pin 7. On the other hand, the Bluetooth Shield transmits and the Arduino receives information on digital pin 6 (see picture below). This serial communication between the Arduino and the Bluetooth Shield occurs through the SoftwareSerial library. This is different from the Serial library used in some of my other tutorials (often to display information in the Serial Monitor). The Arduino UNO's Serial pins are 0 (RX) and 1 (TX). It is worth looking at the Arduino Serial page if you happen to have an Arduino Leonardo, because there are some differences that you should take into consideration when running this sketch. Make sure that your Arduino has the following code installed and running BEFORE you launch the Android/Processing sketch on your Android device. If you don't do it in this order, your Android phone will not discover the Bluetooth device attached to the Arduino, and you will waste a lot of time. Make sure that the Bluetooth shield is flashing its red/green LEDs.
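The pin arrangement described above maps to a SoftwareSerial declaration along these lines. This is only a sketch fragment with assumed names and baud rate, not the tutorial's full sketch:

```cpp
#include <SoftwareSerial.h>

// SoftwareSerial(rxPin, txPin):
//   RX = digital pin 6  (the shield transmits, the Arduino receives)
//   TX = digital pin 7  (the Arduino transmits, the shield receives)
SoftwareSerial blueToothSerial(6, 7);

void setup() {
  // The baud rate here is an assumption - use whatever your shield expects.
  blueToothSerial.begin(38400);
}

void loop() {
  // Bluetooth traffic would be handled here.
}
```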
Once you see this alternating red/green LED display, launch the Android/Processing sketch on the Android device. When you see the chainable RGB LED turn from white to green, you know you have a successful connection. You may then press the GUI buttons on the Android phone to change the colour of the LED to either Red, Green, Blue or Off.

Arduino Sketch 3: Bluetooth RGB Colour Changer (with OFF option)

Please note that this Arduino code/project will work with SeeedStudio's Bluetooth Shield. You may need to modify the Arduino code (lines 95-107) to coincide with your own Bluetooth shield. I got the code snippet within my setupBlueToothConnection() method from some example code from Steve Chang which was found on SeeedStudio's Bluetooth Shield Wiki page. Here is some other useful information in relation to setting up this Bluetooth Shield that could be of help in your project (here). Much of the code used within the Android/Processing sketch was made possible through constant reference to these sites: the Processing Forums helped me to use the APWidgets library for my buttons in the app. The Arduino and Processing Forums are always a great place to get help in a short amount of time.

I am the former head of ecommerce for Billy Bob Teeth. They have landed a TV show on Discovery and filming starts on June 17th. The owner has asked me to bring one of my ideas for a product to life and demonstrate the product for the first episode. The Arduino Uno that I am using will have two switches and an OLED to read out how many button presses have occurred. I am hoping to incorporate a TTL Bluetooth module with the Arduino that will post the amount of times each button has been pressed on an Android app. While I was able to follow along with your tutorial quite easily, coding something that is totally different from your tutorial may take up more time than I may have to dedicate to this project. Could I offer you compensation for helping me complete this project?
Hi Dustin, Thankyou for visiting this blog and for your consideration. I would love to help you, but I am studying at the moment and do not have any spare time to dedicate towards your project, especially considering the timeframe. Really sorry. Scott. first when i try to connect using my phone its work great and then when i try to connect using other phone, bluetooth shield not discoverable. how can i make my bluetooth shield discoverable to other phone after first connection? or reset connection on bluetooth shield? sory for bad english. Hi Yantz, Perhaps you could attach a button to the Arduino (or bluetooth shield), so that when it is pressed it sends the Bluetooth shield some commands. The Arduino code/method responsible for setting up the bluetooth shield in the code above is void setupBlueToothConnection({...} Have a look at these documents/web pages, as they seem to have some useful commands for bluetooth devices (even though it is not the same bluetooth device used in this example, the code seems the same): Here and here This bluetooth tutorial is using the bluetooth module in Slave mode: And the following code may be what you are looking for: blueToothSerial.print("\r\n+INQ=0\r\n"); Disable been inquired blueToothSerial.print("\r\n+INQ=1\r\n"); Enable been inquired You may need to flush the bluetooth serial connection first by using: blueToothSerial.flush(); Please note that I have not tried this myself, because I have limited time at the moment. So I hope it works for you. Good luck. actualy i just need to disconnect the first pair and then call connection function. thanks for advice. Hello, this has been a great help. However, my setup differs from yours. I have a simple bluetooth module. How should I change the Arduino sketch to be more compatible with my current configuration? The Android app doesn't connect with the bluetooth module (the LED on it stays blinking which means it's NOT connected). 
However, it can see the bluetooth module and states it through the toasts. Thanks in advance. Hi RB, I don't have your bluetooth module, so am not sure what is the correct Arduino Setup, however, you may find the following article of use: Look here You may need to change the baud rate and a few other settings to get it to work. I accessed that document from the "discussions page" on the site you mentioned in your comment. But I am almost certain that you will need to modify the Arduino sketch in this tutorial to suit your specific bluetooth module. Good luck. RB - also looks like this person has managed to get your bluetooth module to work on an Arduino, and made a YouTube video about it, they may be able to help you ?? You tube video RogueBanana that's an HC-05/06 Bluetooth module mounted on a JY-040 breakout board. It's very simple to use. You hook up the VCC and GND. The TX and Rx behave like TTL serial. That is, you use the Arduino's built in serial functions with a baud rate of 9600. On the BT side, it uses the same UUID for SPP so you just need to change the android code relating to the devices name/mac address which you can get from other apps like BlueTerm. If you need more information on the chip search for "CSR Bluecore EXT 4". Sorry i meant JY-MCU, my board has ZS-040 written on it. my PC doesn't run Processing...Can I use Android Studio?(Oficial IDE).. Thanks Hi FB. I had not heard of Android Studio until you mentioned it. Many of the concepts within my bluetooth tutorial originate from the Android developers website, however, I had to make some minor modifications to get it to work within the processing IDE. Considering that Android Studio comes from the Android developers website, I would say that yes, it would be possible to create an APP using that IDE to communicate to an Arduino via Bluetooth in a similar way to the one I described in my tutorial. However it would be very unlikely that you would be able to cut and paste my code. 
You would have to adapt it. Or you could start from scratch and utilise the bluetooth information from the Android developers website. You state that Android Studio is the official IDE, however you will probably find that there are hardly any bluetooth examples using it. In fact there were very few bluetooth examples using the Processing-Android IDE when I created my tutorial. There are many more examples using eclipse. Also bear in mind that Android Studio is only available as an "Early access preview" and according to the Android Developer's website, "Several features are either incomplete or not yet implemented and you may encounter bugs". But that should not stop you from trying. All the information is there, it is just a matter of sitting down and working it out :) Good luck. Scott Thank you so much! :D Hello Scott, Excellent article and well presented. I am a newbie to Arduino and bluetooth, and your article has helped me to understand the basics of Android OS communication with BT/Arduino. I was basically looking for text communication between the BT/Arduino and Android, with a little bit of tweaking it is working well with my setup. My setup consists of Arduino 2560 + BT + Samsung Note1 and the code is running like a charm. Thanks for your time and effort in making this article. Hi Metallica, Thanks for the feedback. And I am glad it worked for you. I would be interested in seeing a YouTube video of your project in action. Feel free to leave a link to it below (if you want to). 
hi my name is miguel i am from mexico, i have a question, i used your Project only part one two and three, for discovery bluetooth and i running on my móvil, i saw when is discovering device show adressed and bond and name of the bluetooth devices of other android bluetooth, my question is if i want to connected to other android device what i need to do, is just connected byself or i need some kind of extra sketch, i want to do this for test the Project the conection between bluetooths the reason because i dont have a bluetooth device for complete the Project and i think if i can get conection between two androids device i guess the conection working thank you, every time i run the App show everithing but dont say nothing about to get connection. Hi Miguel, Sorry, you lost me a bit with your comment. What android device are you using, what bluetooth module are you using and what arduino are you using, and what are you trying to do? What is your final goal ? This may help me understand your question a bit better. I know this is an older post, but I think he's asking if he could use your code to connect two android devices over bluetooth instead of connecting to an arduino BT module. He wants to test the bluetooth connection part of the project with another android instead of the arduino. Hello Scott. Thanks for sharing these applications. I am using Part 4 to control my DIY Segway, but I find that connect() takes 30 seconds or more. Why is cancelDiscovery not called before connect? According to the Android Bluetooth tutorial, not calling cancelDiscovery will slow down the comms. So I tried calling cancelDiscovery before connect but the app crashes during connect. Is that why you don't call cancelDiscovery in part 4? Do you have suggested solution. Hi Carmelo, I think I did experience the same thing you did. Not sure why it crashes. So I left it out, and it worked. I changed my program and based it on Part 3. 
It works very well -- I don't have to stop Bluetooth before running the program and it connects quickly. I would not recommend Part 4 - I think a single thread to do Bluetooth does not work properly, Hi Scott, Thumbs up for sharing your very informative tutorial. I've noticed you do implement a cancel() method to close the Bluetooth socket, but it does not seem to be used in your code. I ask this because I am using a variation of your part 3 to read continuously from an ArduinoBT board with my Android 4.2 phone. First time it works perfectly, but if I get off the app with the Back or Home buttons and/or switch the ArduinoBT off, next time I want to reconnect it does so up to and including opening the streams (i.e: btSocket.getInputStream) but the inputstream.read(buffer) hangs, as if no bytes were received. I have to reboot the phone to properly reconnect again. So I'm thinking this could be a problem due to not properly closing streams and socket upon closing the app. Do you experience anything similar? Thanks again for your time and effort in sharing your work. Hi Claudio, It has been a while since I wrote this tutorial, but you are right, it is better if you can include a method to close the sockets cleanly. I did experience something similar to you, and yes, rebooting seemed to fix the issue. However, I vaguely remember that if your disconnected in a certain order, that you would not get this problem... But I cannot remember what order that was - it's been too long. Closing your sockets cleanly should fix the issue you are seeing. However, I also had the same issue if the bluetooth module went out of range... I got stuck in figuring out a way to manage that... if you have any ideas, I would love to hear them. Scott, thanks for your quick reply! I'll keep on digging and post any relevant finding here, if any.. Thanks Claudio! 
Hello Scott, I want to use your tutorial with arduino Mega ADK, and Bluetooth module ( FB155BC) like this : before starting i want to know if these parts are compatible with all your sketch? thanks in advance for your help. regards Hi Laurent, Thank you for visiting my page. My experience with bluetooth has only been with the modules described in the tutorial. While I am sure, there may be parts of the sketches which could remain untouched, I am almost certain that the code would have to be adapted to suit the modules you queried about. And whether it will work with a Mega is anyone's guess, because I have never used one. Sorry, I am not an expert, but you may be able to find out through the various forums. Regards Scott Hey, I was wondering if you would allow me to use your code for a project at school, I'm creating a simple bluetooth controller for one of my teachers, I was wondering if you would allow me to alter the code to fit the project and put put it on the google play store for my teacher's personal use? Hi +zachatttack, You are more than welcome to use my code, and alter it to suit your needs. I would appreciate however that you reference the ArduinoBasics blogspot site somewhere within your project description or code. And finally, feel free to create a link to your project in the comments below - I would love to see it. Regards Scott It was an excellent piece of project to be shared which conveys about arduino programming, android programming and bluetooth of course. Thanks scott. import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.content.IntentFilter; Can you let me know whether the above statements are predefined functions of the android? Thanks again.. Hi Deepak, These are all android classes. Have a look at the android developers website. Also look at the other websites that I have recommended within the blog post. Very useful. Thanks Scott, Yeah Scott. 
I will surely get through the tutorials. But I got some time limitations. Just one more last question. Pl don't mind. If I run your android programe on the IDE will I still be able to use my android phone for other usual purposes. In other words, will your application will run till I use process IDE on my desktop. If not how will I restore my actual state of my phone after executing the project? Hi Deepak, The processing IDE uploads the program to the android phone in a similar way that the Arduino IDE uploads to the Arduino. However, you can exit out of this program on your phone in the same way you would exit any other program. Also once you have uploaded the program, you can detach the android phone from your computer. Plus you will see a shortcut to the program on your phone's desktop. So you can run it again without having to upload it again from your computer. Hi scott. Just googled and got it. Thanks again... No more doubts. On my way to complete the project. Good luck. I hope it goes well. First off, excellent tutorial!! Everything works great. This question may have been answered at some point or another (which I apologize if it was and you've answered it a million times) but you mentioned being able to have data sent from the Arduino to the Android and displayed on the Android. I'm unable to have that done. Can you perhaps provide some guidance for this?? Ultimately, Id like to have the analogread pin data be sent to the tablet and stored on memory. Thanks!! Have a look at tutorial part 3 here. I think you will find most of what you are looking for in there. I was able to implement the communication. Thanks!! Can you please kindly elaborate a little how you transferred the arduino's analogread pins data to android ? 
Hi Scott C , thank you for the tutorial , I am new to arduino android, I have my final year project can you please send me the files for the complete project at mosesmberwa@gmail.com All files can be downloaded and installed as per the links provided in the tutorials. I don't have a copy of them. I read your whole step 3 twice but coudnt get the required info! can you kindly tell me which part will guide me for establishing arduino analog read pins data to be transferred to the app? Hi Farhad, I don't have any Analog readings going to the app in this tutorial, but if I wanted to do this, I would write the following in the Arduino code: blueToothSerial.print(analogRead(A0)); This will send one analog reading to the Android device. You will need to get the android device to listen for this value and display it accordingly, I cannot remember if I accommodated for this in this Android sketch or not. Thank you for your reply I will surely start working on it now and i am kind of a beginner in this stuff so please dont mind if I ask some silly questions :) Hello Scott, Thanks for this wonderful tutorial. This helped me a lot starting from setting up the Processing for Android to create some simple app via processing. The question I have is I didn't find an option to communicate from Arduino to Android. As I am a basic user of JAVA may be I got mislead with the class and functions. I can able transmit the messages via Bluetooth from my Arduino but unable to receive them via Bluetooth in my android device. Regards, DREEM. You will find what you need in part 3 of the tutorial (look for InputStreams): Hi scott!this is a wonderful tutorial.I now understand codes use in Bluetooth Processing. My problem now is: how can I make an APK file using the sketches? Good question - you will need to ask that one in the Processing Forums. However, when you run the processing sketches on your phone, it leaves behind a shortcut to the program. 
So you don't have to keep uploading it to the phone to get it to run. You can load it from the phone. Not sure if this is in an APK format - but you can ask in the forums... sorry I don't know. hi scott! i understand the bluetooth process thanks to your guide. When I connect to the arduino bluetooth module, it ask the password of the module (i.e 1234). Does it have a code where the app is the one to input the password and confirm it? i search and found about "setpin()" something function. Do you know how to use this? No, have never tried to do this. The setpin() method looks like it would put you on the right track. I found this Not sure if it helps or not - have not tried it. There is someone who wrote this on the feedback form.: "Would you mind to gide me how can i write a part of bluetooth code witch will be added to the arduino (bipede) code so that i send command to the robot? I will really apprecchiate your help." I would advise for you to send me your query in the forum section of this blog (look for "Forum" at the top of this page). Hi, thanks for tutorial. This tutorial was made with processing 2.0 but now we work with processing 3.0 and the code has a lot of error, What we can do?, How we can change the code? Not sure. Just work through the errors and try and fix each one by one. Maybe ask for help on each specific error in the processing forums... but don't expect them to fix the whole lot... Sorry - I have left this tutorial here for historical reasons.. but don't have the time at present to go back and fix it. I may get to it one day. Hi , your tutorial was most informative tutorial i have ever seen , thanks. Actually your tutorial helped me a lot but I'm still confused and in the middle, i have project which it send the sensor data from Arduino to Processing and then displays it on application, the first stage without Bluetooth. 
when i tried to send the data from Arduino to processing the real time data can be displayed when i use the JAVA MODE instead of Android mode ,then when i try to change the same program to Android mode, it simply can't read because "import serial" is not defined in Android mode and shows me error so i can't transfer currently from arduino to Android while the phone is connected via USB and no need to Bluetooth at current stage. my second main question, later on if i want to transfer the real-time data via Bluetooth from Arduino to android applicationis that possible and how? your time is highly appreciated. Thanks in advance Hi Hala, Am not sure about the error in the first question. As for the second question - yes you can transfer data between Arduino and Android - but you would need to create an input stream. Thanks for your response, I have already wrote the code and i found no compiling error for but when i tried to upload on phone i got error message: "unfortunately, "file_name" has stopped" i have tried with two phones. one of them it asked me to turn on the Bluetooth and the other i got the error message without asking for switching the Bluetooth, even though after switch it i got the error message, what can i do ? is this related to security of the phone that can't upload the file or what do u This comment has been removed by a blog administrator. Hala - Unless you have the same Processing IDE version, the same ADK etc etc and an older phone like Samsung Galaxy S2 - this tutorial is unlikely to work (without modification). I have provided all of the links that I used to create this tutorial... and I am sorry that this is now out of date... but treat it as a head start.. you will need to do your own research to get your device to work. I wish I could help you... but I can only tell you what I did to make it work... Current versions of Processing will probably handle the Output stream differently... so that is where I would start.
https://arduinobasics.blogspot.com/2013/04/bluetooth-android-processing-4.html
SuzyUK - 11:41 am on Mar 1, 2011 (gmt 0)

I've ranted on long enough about the "warnings" that Google's PageSpeed puts out about "inefficient selectors" and "remove unused CSS", and I thought I might try and discover what might actually make them more "efficient", or to see if looking for those "needle in haystack" unused ones would be worth it. As part of my quest I watched a very long (and sorry, but very boring/dry) video talk by David Baron: Faster HTML and CSS: Layout Engine Internals for Web Developers [youtube.com]. I did take from it a bit about Mozilla's theory about how they parse CSS in relation to everything else that needs to happen to output a page, and why they have to do it like this: because of subsequent or immediate changes any scripts may make to the page rendering. Also I came across an article by Snook [snook.ca], which starts off on a subject you wouldn't think is related, but it is.. the article, as well as explaining the "lookback" dilemma nicely, then has a further enlightening bit, from Snook, in the comments ref: simplifying selectors: This is what I'd always thought. I knew it was Yahoo's (Grids) and Google's (Blueprint) policy to want us all to use class names for everything.. it turns out that it is because they have to, as it does affect their rendering; the Yahoo page especially is in a constant state of change, so the quicker they make a selector unique (reading it from the right side) the better.. for most normal websites it isn't worth it. The time to look back through a rarely changing page would be minimal to none. IMO an exception is possibly those dropdown menus.. which I might get to refining later. So I made a page, a very simple test page (no images!), made sure it "passed" the test on all points - compression and minimised HTML especially..
IOW I wanted to get to a point where the CSS, and only the CSS, was affecting the "score". The results slightly surprised me, but not much:
- with 5 very inefficient selectors I still got a score of 100, with a green tick next to "inefficient selectors", although the arrow could still be clicked to see those 5
- repeating those same 5 to make 10, the score did drop - to 99 - with the yellow triangle
- repeating the 10 to make 20, no change; in fact repeating on up to 80 there was no change
However the rules were only repeated, which may have had an effect, but I think it's more likely that the "warning" is only worth a minor score drop. The fact that the selectors were repeated did not give "unused selector" warnings; I wanted it that way as I didn't want one warning contaminating another. However, on to them, unused selectors: I deliberately input some. So with the page score at 100 with 1 very inefficient rule, I added 10 unused EFFICIENT ones (I copied the 1, made it efficient, and changed the class name on all 10 rules to 10 names not in use), then 20, then 30.. no change to score.. it remained at 100. When I made these unused selectors inefficient again, the score again dropped to 99, and that was because of the inefficient selector warning, not the unused warning! My conclusion (would welcome views!) is that it doesn't matter one single bit about the unused selectors, though in theory it should as they are unnecessary lookbacks.. what was more important was the efficiency of them. IF unused selectors are efficient, the "lookback" load must be so tiny it's not worth the effort, nor necessary to "penalise". Further conclusion: if you want the nice green tick, and to actually affect page performance, make all selectors, used or not, "VERY efficient". How do you do that? Well the good news is that it doesn't matter about "qualifying" class names or IDs (though qualifying a unique ID shouldn't be necessary if that ID is indeed unique!)
- I thought PageSpeed must have got smart enough to figure that out (it was one of my biggest gripes, as reuse of class names is a big part of CSS); indeed by the end of my tests I finally realised why that particular warning was there at all!

"qualifying a tag" means that you add the element name it belongs to, to your selector, e.g. ul.one li p a - you have qualified the ".one" class by adding the "ul" element to it. This always irked me, that "qualifying" appeared in warning messages, as it's possible to use a class name on multiple elements (tags) and you may very well need to qualify the class in order to get specific. Theoretically modules or plugins can introduce identical class names too, so it's not just a simple case of ensuring your own CSS only attaches a class name to a particular element, e.g. class="list" can only be used on <li> elements, which is most people's understanding of the warnings given.

So to the good news.. if you make ul.one li p a efficient (for my code - do not take this as gospel without reading further!) by introducing the parent selector (>), you make things really easy for the browsers to lookback through the tree.. it starts at the <a> - reads selectors from right to left - and it can immediately start reducing the amount of lookbacks as it no longer has to traverse very far up the tree to see if the next element, <p>, is in the <a>'s ascendancy.. it only has to look at its immediate parent, and so this goes on.. whereas with the default descendant selector li p a {} it might have to look back the whole document tree, all the way back to the root body tag at times.. to see if there's any chance there's a <p> in an <a>'s ascendancy, before it can then start the lookback all over again to see if all <p>'s matched have an <li> in their ascendancy.. in other words multiple lookbacks may have to occur. But with my code, it's not flexible enough..
what happens if I want to target all <a>'s inside an li, no matter if they're inside a <p> or not.. should I write the selector as:

1. ul>li a {}
2. ul>li>a {}

2. will not work to target the <a>'s inside any other element that happens to also be in the <li>.. e.g. <p> <div>

1. will work! no warnings

#1 was the surprise; I really expected it to throw the warning again, but apparently it's smart enough not to warn about the descendant selector if it "knows" that there could be other possibilities.. it fits with something David Baron said in that video too.. make a selector efficient/specific as soon as possible starting at the right side.. you see in my code those <a>'s could have been inside a <p> element which was inside the <li>.. or it could have been inside a <strong> element inside the <li> (they were actually!) - in other words it could not be made any more efficient without me having to add more rules, e.g. ul>li>p>a, ul>li>strong>a {} - btw I removed the <p> and <strong> elements, which would have meant that rule #2 would possibly have been my best choice for efficiency, then rechecked just in case PageSpeed had got really clever matching subtly unused selectors to efficiency.. but no.. it still didn't flag ul>li a {} as inefficient. Cool - I would have hoped not; that would really make "efficient" maintenance an impossibility ;)

Lightbulb moment, on qualifying..

..then I added a class somewhere into the mix, on the ul: ul.one>li a - not an unreasonable request given that one way around most of the warnings is to add classes to everything - I got a warning again, though this time it was not "Very Inefficient (good to fix on all pages):" like the previous ones had been.. just inefficient, "Inefficient rules (good to fix on interactive pages):" and there's the clue.. interactive pages. Most are these days, what with JS libraries being built into most CMSes and other fancy plugins abounding, so how to "fix" for all?
choices: adding selectors for every contingency, ul.one>li>p>a, ul.one>li>strong>a {}, is one way, and changing the class to an ID, so making it unique and removing the need to qualify the tag too, does it too: #one>li a {}

or simply ensuring that you never have clashing class names (fairly impossible), so removing the need to qualify too.. most modules or plugins to the page will come with their own "namespaced" CSS, i.e. it will already be wrapped in a uniquely identified div; this is so the JS/required interactivity can be targeted.. the module CSS that accompanies it may not be as efficient as it can be however ;)

In the grand scale of things this was just an eye opening information exercise. It's probably not worth it for smaller sites, as Snook points out; however if you want to squeeze every last drop of performance out of your page, or just want that green tick... refactor your CSS, or at the very least your module/plugin/addon CSS, by adding > to your selectors :o.. and if you're worried about unused selectors, DON'T, just make them efficient, then it won't matter if they're there or not (it should no longer affect the score or speed) - however the act of refactoring them to be more efficient should help you decide if they're required or not ;)

Any thoughts or other discoveries, or any views on when/if it's worth it to refactor?

Suzy
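To make the before/after concrete, here's a hypothetical pair of rules based on the post's example selector (the .one class and the nesting are just illustrations, not a recommendation for every page):

```css
/* descendant selectors: from each matched <a> the engine may have to
   walk far up the tree looking for a <p>, then an <li>, then a ul.one */
ul.one li p a { color: #036; }

/* child combinators: each step only checks the immediate parent,
   so the lookback stops after one hop per step */
ul.one > li > p > a { color: #036; }
```

The trade-off, as the post notes, is flexibility: the second rule no longer matches an <a> nested inside any other intermediate element.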
http://www.webmasterworld.com/printerfriendlyv5.cgi?forum=83&discussion=4274514&serial=4274516&user=
I am trying to make a 2d skateboarding game and I need to make the player rotate based on the rotation of the block underneath the player, so that when skating down a ramp the board is on the ground (I am kind of new and this is how I would think of it). Basically I was thinking of casting a ray or something that would detect the platform underneath the player. It would then find the rotation of that platform and rotate the player perpendicular to that. Does anyone know how to find the rotation of an object underneath a player object?

comment, don't make that an answer. cast a ray from player.transform down. take the inverse (-) of the normal of the surface you hit. use those to find the axis of rotation and the angle. I know you've no idea how to find those with just that information so i've posted the code.

axis = Vector3.Cross(-player.transform.up, -hit.normal);
if (axis != Vector3.zero)
    angle = Mathf.Atan2(Vector3.Magnitude(axis), Vector3.Dot(-player.transform.up, -hit.normal));
player.transform.RotateAround(axis, angle);

this will rotate the character so his feet are always parallel with the ground. mark as answered and have a nice day

THANK YOUUU... this works great. Just one tiny little issue. Would I be able to rotate the player slowly when it is moving slowly against a curvy platform? it rotates very quick and sharp atm, if you can understand what im tryna say??

use slerp to rotate over time instead of instantly.

Thank you for replying and ya, sorry about submitting that as an answer. I don't have time to look at it today because I have tests to study for, but once I look at it and if it works I will make it the answer.

what is hit supposed to be?

Answer by mkjrfan · Jan 25, 2013 at 09:06 PM

No idea why, but for some reason sparkzbarca's answer became a comment and now neither I nor anyone else can see the rest of the conversation, even if you comment on it.
This is what he wrote as the answer:

Ray ray = new Ray();
RaycastHit hit;
Vector3 axis;
float angle;

ray.origin = transform.position;
ray.direction = -transform.up;
Physics.Raycast(ray, out hit);

axis = Vector3.Cross(-transform.up, -hit.normal);
if (axis != Vector3.zero)
{
    angle = Mathf.Atan2(Vector3.Magnitude(axis), Vector3.Dot(-transform.up, -hit.normal));
    transform.RotateAround(axis, angle);
}

it works.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Collide : MonoBehaviour
{
    Ray ray = new Ray();
    RaycastHit hit;
    float angle;
    public bool isTurning;
    Quaternion pastRot;
    float rotX;
    float rotY;
    float rotZ;

    // Use this for initialization
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
        pastRot = this.transform.rotation;
        ray.origin = transform.position;
        ray.direction = -transform.up;
        Physics.Raycast(ray, out hit);

        rotX = hit.transform.rotation.x - transform.rotation.x;
        rotZ = hit.transform.rotation.z - transform.rotation.z;

        this.transform.rotation *= Quaternion.Euler(rotX * 100 * Time.deltaTime, rotY * 100 * Time.deltaTime, rotZ * 100 * Time.deltaTime);
        transform.parent = hit.transform;

        if (this.transform.rotation != pastRot)
        {
            isTurning = true;
        }
        if (this.transform.rotation == pastRot)
        {
            isTurning = false;
        }
    }
}

This is my code. I used sparkzbarca's and it ended up rotating on the y axis if the x and z axes were both non-zero values for me.

Answer by nantoaqui · Dec 12, 2017 at 11:22 PM
Hello!! Works perfectly! I would like to know why are we using Vector3.Dot in this case to get the x position? Thank you!

Answer by maxisbestest · Mar 29, 2019 at 11:59 AM
Thanks so much this is really.

Spinning rigidbody platform in 2D 0 Answers
Problem with rotated colliders 0 Answers
Simple Topdown Movement Problem 1 Answer
Need help with a relatively simple issue. 1 Answer
[Unity Issue] CharacterController falls through rotating platform 1 Answer
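The Cross/Dot/Atan2 recipe in the accepted code is plain vector math, independent of Unity. As a quick sanity check outside the engine (Python with hand-rolled vectors here, not Unity API), for a 45-degree ramp the formula recovers exactly the 45-degree correction:

```python
import math

def cross(a, b):
    # 3D cross product of two (x, y, z) tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def magnitude(v):
    return math.sqrt(dot(v, v))

# player upright: -transform.up points straight down
minus_up = (0.0, -1.0, 0.0)

# ground is a 45-degree ramp: -hit.normal for that surface
s = math.sqrt(0.5)
minus_normal = (s, -s, 0.0)

# same recipe as the Unity code: axis from the cross product,
# angle from atan2(|cross|, dot) — the angle between the two vectors
axis = cross(minus_up, minus_normal)
angle = math.atan2(magnitude(axis), dot(minus_up, minus_normal))
# angle comes out as pi/4, i.e. the 45 degrees of the ramp
```

This is also why Vector3.Dot appears in the answer: it is not fetching "the x position" — together with the magnitude of the cross product it gives the angle between the player's down vector and the inverted surface normal.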
https://answers.unity.com/questions/384809/rotating-based-on-the-platform-the-player-is-stand.html
in reply to Re^2: Recursive locks: killer application. Do they have one? (mu) in thread Recursive locks: killer application. Do they have one?

- tye: "don't see how one can implement blocking locking using a single bit."

Here you go:

#include <windows.h>
#include <stdio.h>
#include <time.h>
#include <process.h>

typedef struct {
    void *protected;
    int loops;
} args;

void lock( void *protected ) {
    while( _interlockedbittestandset64( (__int64*)protected, 0 ) ) {
        Sleep( 1 );
    }
}

void unlock( void *protected ) {
    _interlockedbittestandreset64( (__int64*)protected, 0 );
}

void worker( void *arg ) {
    args *a = (args*)arg;
    int i = 0;
    for( i = 0; i < a->loops; ++i ) {
        lock( a->protected );
        *( (int*)a->protected ) += 2;
        unlock( a->protected );
    }
    return;
}

void main( int argc, char **argv ) {
    int i = 0, nThreads = 4;
    clock_t start, finish;
    double elapsed;
    uintptr_t threads[32];
    int shared = 0;
    args a = { (void *)&shared, 1000000 };

    if( argc > 1 ) nThreads = atol( argv[1] );
    if( argc > 2 ) a.loops = atol( argv[2] );
    printf( "threads:%d loops:%d\n", nThreads, a.loops );

    start = clock();
    for( i = 0; i < nThreads; ++i )
        threads[ i ] = _beginthread( &worker, 0, &a );
    WaitForMultipleObjects( nThreads, (HANDLE*)&threads, 1, INFINITE );
    finish = clock();

    elapsed = (double)(finish - start) / CLOCKS_PER_SEC;
    printf( "count: %lu time:%.6f\n", shared, elapsed );
}

And a run with 32 threads all contending to add 2 to a shared integer 1 million times each:

C:\test\lockfree>bitlock 32 1000000
threads:32 loops:1000000
count: 64000000 time:1.332000

And implemented using the simplest primitive possible -- one that will be available in some form on any modern processor.

And therein lies the rub. Perl implements its own recursive locking in terms of pthreads 0.1 primitives. But those "speced" pthreads primitives have long since been superseded on every modern *nix system by vastly more efficient, effective and flexible primitives -- eg.
futexes -- which already have recursive capabilities. And then on other platforms -- ie. windows -- the pthreads 0.1 primitives are clumsily emulated using the oldest, least effective OS primitives. Everyone, everywhere is getting big, slow, clumsy emulations of a defunct standard instead of being able to use the modern, efficient, effective mechanisms that have evolved since the pthreads api was frozen in stone. And all those "so Linux users can" and "so somebody can" are pie-in-the-sky what-ifs and maybes that can never happen for perl users anywhere. Typical, lowest common denominator stuff.
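The Win32 intrinsic-based lock above maps directly onto standard C11 atomics. Here is a minimal, portable sketch of the same single-bit idea using atomic_flag (an assumption on my part that C11 is available; this is not the Win32 code from the post, and a trylock is shown alongside so the semantics are easy to exercise):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* the entire lock state is one bit: set = held, clear = free */
static atomic_flag lock_bit = ATOMIC_FLAG_INIT;

/* returns true if we acquired the lock (the previous value was clear) */
static bool bit_trylock(void) {
    return !atomic_flag_test_and_set(&lock_bit);
}

/* a blocking lock just retries; a real version would yield or sleep
   between attempts, as the Sleep(1) does in the Win32 code */
static void bit_lock(void) {
    while (!bit_trylock()) { /* spin */ }
}

static void bit_unlock(void) {
    atomic_flag_clear(&lock_bit);
}
```

atomic_flag_test_and_set returns the previous value of the flag, which is exactly the "test and set one bit" primitive the post relies on.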
http://www.perlmonks.org/index.pl/jacques?node_id=951551
Categories organization jobs...

Non-profit organization needs expert in website development, search engine optimization, and internet marketing. English must be primary language as you will be communicating extensively by phone/skype with our organization. Please do not contact us if you communicate only through chat, text, and email.

Looking for a writer with on page SEO experience. On page optimisation for 200... including homepage - 160 meta descriptions & 70 meta titles per category. To apply: must be fluent in English and must show samples of meta optimisation. Tasks: 144 categories - 70 character meta titles - 160 meta descriptions - homepage optimizations.

Need menu items clickable, to correspond with categories. Then, test to ensure it is working properly. The site is [url removed, login to view]

Hi, I have an app; in the main screen of it, as you can see in the attachments, there are 7 categories, so I want to design an icon for each one of them. The style of icons I am looking for is something like the image below, and if you have another good idea to fit the app's needs in terms of design you are welcome to post it. [url removed, login to view] icons description: ..

Assist with Project Management for Cutting Image hair care products - a non profit organization called #mpsSuperHero - and a clothing line.

I need a logo designed. Kindly visit [url removed, login to view] for company profile. Send details [Removed by [url removed, login to view] Admin]

Using freelancer api I need to list and download all feedback with categories. Very small 1 hour job for api experts.

Extract phone numbers from website with many categories.

Looking for webdesigner specialised in Opencart to help me with importing goods... .. would track these costs, what!

We need modern & professional icons. Please see attached file.

Import products and categories for an ink cartridge website. I give csv file with all products + photos and csv file for categories. Urgent task.
https://www.freelancer.com/job-search/categories-organization/
Blynk Weather Station

Receive weather updates directly to your mobile device from your very own weather station! Astonishingly quick & easy build with xChips.

Step 1: Things Used in This Project

Hardware components
- XinaBox CW01 x 1
- XinaBox SW01 x 1
- XinaBox SL01 x 1
- XinaBox OD01 x 1
- XinaBox IP01 x 1
- XinaBox XC10 x 1

Software apps and online services

Step 2: Story

Introduction
I built this project using XinaBox xChips and the Arduino IDE. It is a 5 min project that allows you to receive weather data on your phone via the Blynk app and on the OLED screen of the OD01. This project is so useful because you can monitor weather wherever you choose and get updates directly on your phone via the app. I chose to use the xChips because they are user friendly, and they also eliminate the need for soldering and serious circuit design. Using the Arduino IDE I could easily program the xChips.

Step 3: Downloading the Libraries
- Go to Github.xinabox
- Download the xCore ZIP
- Install it into the Arduino IDE by going to "Sketch", "Include Library", then "Add .ZIP Library". As seen below
Figure 1: Adding the ZIP libraries
- Download the xSW01 ZIP
- Add the library the same way as you did for the xCore.
- Repeat for the xSL01 and xOD01
- You also need to install the Blynk library so you can use the app. You can find it here
- Before you can program you need to ensure you're using the correct board. In this project I make use of the Generic ESP8266, which is in the CW01 xChip. You can download the board library here.

Step 4: Programming
- Connect the IP01, CW01, SW01, SL01 and OD01 using xBUS connectors. Make sure the xChips' names are orientated correctly.
Figure 2: Connected xChips
- Now insert the IP01 and connected xChips into an available USB port.
- Download or copy and paste the code from the "CODE" heading into your Arduino IDE.
Enter your auth token, WiFi name and password where indicated.
- Alternatively you could create your own code, using the relevant principles to achieve the same objective.
- To ensure there are no errors, compile the code.

Step 5: Blynk Setup
- After installing the Blynk app (free from your app store) it is time to do the project setup.
- Before clicking "Log In", after entering your email address and password, ensure your "Server Settings" are set to "BLYNK".
Figure 3: Server Settings
- Create New Project.
- Choose device "ESP8266"
Figure 4: Choosing the device/board
- Assign a project name
- Receive the "Auth Token" notification and an email containing the "Auth Token".
Figure 5: Auth Token notification
- Go to the "Widget Box"
Figure 6: Widget Box
- Add 4 "Buttons" and 4 "Value Displays"
- Assign the respective "Buttons" and "Value Displays" their Virtual Pins as specified in the "CODE". I used even numbers for "Buttons" and corresponding odd numbers for the "Value Displays"
- This setup can be adjusted to suit your needs as you adjust your code.
Figure 7: Project Dashboard (Disclaimer: ignore the values; this is a screenshot from after I tested the weather station. Yours should be similar, just with blank faces like V7)

Step 6: Uploading the Code
- After successful compilation in Step 2 (no errors found) you may upload the code to your xChips. Ensure the switches are facing "B" and "DCE" respectively before uploading.
- Once the upload is successful, open the Blynk app on your mobile device.
- Open your project from Step 3. Figure 8
- Press play and press the respective "Buttons" so that the data can be shown in your app and on the OLED screen.
- Now your Blynk weather station is ready to GO!

Step 7: Code
Blynk_Weather_Station.ino (Arduino)
Arduino code for the Weather Station with Blynk and xChips.
This code allows you to wirelessly control the weather station from your mobile device and receive weather data updates straight to your mobile device from the xChip weather station.

#include <xCore.h>              // include core library
#include <xSW01.h>              // include weather sensor library
#include <xSL01.h>              // include light sensor library
#include <ESP8266WiFi.h>        // include ESP8266 library for WiFi
#include <BlynkSimpleEsp8266.h> // include Blynk library for use with ESP8266
#include <xOD01.h>              // include OLED library

xSW01 SW01;
xSL01 SL01;

float TempC;
float Humidity;
float UVA;
float UV_Index;

// authentication token that was emailed to you
// copy and paste the token between double quotes
char auth[] = "your auth token";

// your wifi credentials
char WIFI_SSID[] = "your WiFi name";     // enter your wifi name between the double quotes
char WIFI_PASS[] = "your WiFi password"; // enter your wifi password between the double quotes

BlynkTimer timer;

// VirtualPin for Temperature
BLYNK_WRITE(V2) {
  int pinValue = param.asInt(); // assigning incoming value from pin V2 to a variable
  if (pinValue == 1) {
    Blynk.virtualWrite(V1, TempC);
    OD01.println("Temp_C:");
    OD01.println(TempC);
  }
}

// VirtualPin for Humidity
BLYNK_WRITE(V4) {
  int pin_value = param.asInt(); // assigning incoming value from pin V4 to a variable
  if (pin_value == 1) {
    Blynk.virtualWrite(V3, Humidity);
    OD01.println("Humidity:");
    OD01.println(Humidity);
  }
}

// VirtualPin for UVA
BLYNK_WRITE(V6) {
  int pinvalue = param.asInt(); // assigning incoming value from pin V6 to a variable
  if (pinvalue == 1) {
    Blynk.virtualWrite(V5, UVA);
    OD01.println("UVA:");
    OD01.println(UVA);
  }
}

// VirtualPin for UV_Index
BLYNK_WRITE(V8) {
  int pin_Value = param.asInt(); // assigning incoming value from pin V8 to a variable
  if (pin_Value == 1) {
    Blynk.virtualWrite(V7, UV_Index);
    OD01.println("UV_Index:");
    OD01.println(UV_Index);
  }
}

void setup() {
  // Debug console
  TempC = 0;
  Serial.begin(115200);
  Wire.begin(2, 14);
  SW01.begin();
  OD01.begin();
  SL01.begin();
  Blynk.begin(auth, WIFI_SSID, WIFI_PASS);
  delay(2000);
}

void loop() {
  SW01.poll();
  TempC = SW01.getTempC();
  Humidity = SW01.getHumidity();
  SL01.poll();
  UVA = SL01.getUVA();
  UV_Index = SL01.getUVIndex();
  Blynk.run();
}
https://www.instructables.com/id/Blynk-Weather-Station/
Upgrade Your Game: Crusader (C#) - Posted: Nov 09, 2006 at 6:31 AM - 5,991 Views - 11 Comments. fourth (C#).

C#
public Sprite(GameState gameState, float x, float y, Bitmap bitmap, Rectangle rectangle, int numberAnimationFrames)
{
    for (int i = 0; i < numberAnimationFrames; i++)
    {
        ));
    }
}

public override void Draw(Graphics graphics)
{
    graphics.DrawImage(_frames[CurrentFrame], outputRect,
        _rectangle[CurrentFrame].X, _rectangle[CurrentFrame].Y,
        _rectangle[CurrentFrame].Width, _rectangle[CurrentFrame].Height,
        GraphicsUnit.Pixel);
}

The link to "2D Game Primer (C#)" is broken and produces a "Page not found" screen. A search finds the document at

The link to "2D Game Primer (C#)" is still broken and the link provided by John Tempest (above) no longer works as well.

A way to save the game would be a great addon; how would it be done?

Here's a working link.

The Files table is getting chopped off on the right. I noticed this on some of the other pages (TinyTennis with XNA I believe). You might want to look into it. I really enjoy these articles, but it is a huge distraction to see part of the page missing. Other than that, I want to thank you for putting these together. They are an excellent resource.

Looks like the "2D Game Primer (C#)" document can now be found at

hey! I love these tutorials, helps me out a lot. Well, I was wondering if someone can make a 2D Real-Time Strategy game? That would be the best ever!

On a side note... Transparency for an image is A LOT easier than using

_attributes = new ImageAttributes();
_attributes.SetColorKey(_colorKey, _colorKey);

! Just try

Bitmap item = new Bitmap(Application.StartupPath + "\\Images\\Items\\Ring1.png");
item.MakeTransparent(Color.Fuchsia);
g.DrawImage(item, 0, 0);

Fuchsia is that weird purply color one right of dark-blue (bottom row), and one left of yellow (bottom row) in MS Paint.

On another side note, tinyUpload started working again! for the RPG. If you want to see the RTS, just e-mail me. The RPG is in VERY early stages...
Just movement, map-loading (excluding the map-editor I coded... but this can be requested), player-rotation, test-commands, NPCs, and collision. If the tinyupload link is down, feel free to e-mail me: reelix@gmail.com

I run the Crusader file from the unzipped folder in XNA 3.0 Visual C#, and it gives me a bunch of errors:

Warning 1: Load of property 'RootNamespace' failed. The string for the root namespace must be a valid identifier. Crusader
Error 2: Unexpected character '$' C:\Documents and Settings\Frank\My Documents\FinalFantasy\Crusader\BitmapCache.cs 6 11 Crusader
Error 3: Unexpected character '$' C:\Documents and Settings\Frank\My Documents\FinalFantasy\Crusader\BitmapCache.cs 6 27 Crusader
Error 4: Identifier expected C:\Documents and Settings\Frank\My Documents\FinalFantasy\Crusader\BitmapCache.cs 6 11 Crusader

Help me out, I need this tutorial to make some progress.... swaggin207@yahoo.com please, thanks a lot. SWAG

@SWAG, Coding4Fun articles are written as is; we don't have the resources to keep years of articles up to date on the newest version. Looking at the date, I'm betting this was written with XNA 1.0, which I know had some breaking changes that need to be corrected if you want to run newer versions. I did a quick search and this may help.

Thanks for the tutorial!
http://channel9.msdn.com/coding4fun/articles/Upgrade-Your-Game-Crusader-C
Enhance UDL lexer

def rgb(r, g, b):
    '''
    Helper function.
    Retrieves rgb color triple and converts it into its integer representation.

    Args:
        r = integer, red color value in range of 0-255
        g = integer, green color value in range of 0-255
        b = integer, blue color value in range of 0-255
    Returns:
        integer
    '''
    return (b << 16) + (g << 8) + r

def __init__(self):
    '''
    Define basic indicator settings, the needed regexes as well as the lexer name.

    Args:    None
    Returns: None
    '''
    ...

def check_lexer(self):
    self.doc_is_of_interest = True if buffer.value == self.lexer_name else False

def on_bufferactivated(self, args):
    '''
    Callback which gets called every time one switches a document.
    Triggers the check if the document is of interest.

    Args:    provided by notepad object but none are of interest
    Returns: None
    '''
    self.check_lexer()

def on_updateui(self, args):
    '''
    Callback which gets called every time scintilla (aka the editor) changed something within the document.
    Triggers the styling function if the document is of interest.

    Args:    provided by scintilla but none are of interest
    Returns: None
    '''
    if self.doc_is_of_interest:
        self.style()

def on_langchanged(self, args):
    '''
    Callback gets called every time one uses the Language menu to set a lexer.
    Triggers the check if the document is of interest.

    Args:    provided by notepad object but none are of interest
    Returns: None
    '''
    self.check_lexer()
Use plugins->pythonscript->new script and save it under a meaningful name. Copy the content from here into the script. Save it. Now you need to define the regexes to add additional colours to the lexer. The script is commented, let me know if anything is unclear. @ekopalypse I have now had a chance to give this a solid go and have not been able to get it working. To start with I installed the PythonScript plugin. In your comments I do not understand what is meant by “d = integer, denotes which match group should be considered” because I’m not clear on what a match group does. I defined my own user defined language ml and then just tried to highlight letters in one color and digits in another. I followed your examples and now have the following in the file: ml_regexes = _dict() ml_regexes = [(0, (0, 0, 224))] = (r’\d’, 0) ml_regexes = [(1, (224, 0, 0))] = (r’\w’, 0) ml_excluded_styles = [] _enhance_lexer = EnhanceLexer() _enhance_lecer.register_lexer(‘ml’, ml_regexes, ml_excluded_styles) I saved the modified code as ml.py in the folder C:\Users\CS_laptop\AppData\Roaming\Notepad++\plugins\config\PythonScript\scripts Then I made a new file to test with containing the following: hello 12345 I set language to the empty user defined language ml. Then I went to Plugins>Python Scripts>scripts> and selected ml. This produced no result. Please let me know where I have gone wrong. Thanks for your help. @c-siebester said in Enhance UDL lexer: An error has crept in here that ml_regexes = [(0, (0, 0, 224))] = (r’\d’, 0) ml_regexes = [(1, (224, 0, 0))] = (r’\w’, 0) is not valid Python code, it must be like this ml_regexes[(0, (0, 0, 224))] = (r’\d’, 0) ml_regexes[(1, (224, 0, 0))] = (r’\w’, 0) If you open the PythonScript Console, this should also appear as an error when you run the script. Regarding match groups, we assume the following regular expression \d\d\d. This expression returns at most one match if it can find 3 consecutive digits. 
If the expression were \d(\d)\d, the regex engine would produce two matches: the standard match of the 3 digits and a second match of the middle digit. This is reflected by the number in the regular expression tuple. If there is a 0, the standard match, which is always present if something is found, is determined and coloured. If there were a 1, only the 2nd match would be taken into account. Of course, only if there is a corresponding regular expression, as in my second example. Does that make sense?

@ekopalypse You're right. I changed it to the following.

ml_regexes [(0, (0, 0, 224))] = (r'\d*', 0)
ml_regexes [(1, (224, 0, 0))] = (r'\w*', 0)

Obviously I have more interesting things I will want to match; I'm just trying to get it working. I'm still not getting any highlighting in my test document with the hello and 12345. Maybe I don't have the setup right?
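The match-group numbering described above can be checked on its own with Python's re module, independent of Notepad++ and PythonScript:

```python
import re

# group 0 is always the whole match; each pair of parentheses
# in the pattern creates group 1, 2, ...
m = re.search(r'\d(\d)\d', 'abc123xyz')

whole = m.group(0)   # the standard match: all three digits
middle = m.group(1)  # only the middle digit, from the (...) group
```

So passing 0 as the group number colours the full match, while 1 would colour only what the first parenthesised group captured.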
https://community.notepad-plus-plus.org/topic/17134/enhance-udl-lexer/?lang=en-US&page=2
My login and signup forms are on home/index.html.erb. I want that when either of them fails, the error should show on index.html.erb. Right now it redirects to the devise views. How can I do that? Note: I want this only for signup and login. The Forgot Password scenario will remain on devise.

You can do it like this: create a class in the lib folder and override the devise failure app.

class CustomFailure < Devise::FailureApp
  def redirect_url
    # your path
  end

  def respond
    if http_auth?
      http_auth
    else
      redirect
    end
  end
end

And put this in config/initializers/devise.rb:

config.warden do |manager|
  manager.failure_app = CustomFailure
end

One more thing: you have to autoload the lib file, like:

config.autoload_paths << Rails.root.join('lib')

Put this line in config/application.rb. I hope that will work.

EDITED

def redirect_url
  if request.referrer.include? new_user_session_path.split("/").last
    # your path
  else
    new_user_session_path
  end
end
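The referrer test in the EDITED snippet is plain string logic, so it can be sketched and tried outside Rails/Devise. The helper name and the hard-coded path below are mine, standing in for the new_user_session_path route helper:

```ruby
# mirrors `request.referrer.include? new_user_session_path.split("/").last`
# using a plain string instead of the Rails route helper
def came_from_sign_in?(referrer, sign_in_path = "/users/sign_in")
  last_segment = sign_in_path.split("/").last # "sign_in"
  referrer.include?(last_segment)
end
```

In other words: the failure app only redirects to your custom path when the request came from the sign-in page, and falls back to the devise sign-in path otherwise.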
https://codedump.io/share/0qZ9gCQXXNI7/1/redirect-after-only-signup-and-login-failure-in-devise-rails
hello, I'm having a problem trying to get this to do what I want it to do. After the program asks if the user has had any work up to date, I want it to ask for specification if the answer is yes, or go to the next question if no. Except right now, when the user inputs no, it still asks for specification. If the user inputs yes, I want the specification to show in the output at the end. If the user said no work was done, I just want it to read as a simple no work done. Any help would be appreciated. Thanks a lot!

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string Pname, CEmployer, AWUTDspecification;
    int age, n, y;
    char AWUTD;
    n=0;
    y=0;
    cout<<"Patients Name:"<<endl;
    getline (cin, Pname);
    cout<<"Age:"<<endl;
    cin>>age;
    cin.ignore(1000,'\n');
    cout<<"Current Employer:"<<endl;
    getline (cin, CEmployer);
    cout<<"Any Work-Up to Date? Y/N"<<endl;
    cin>>AWUTD;
    cin.ignore(1000,'\n');
    if(AWUTD=='y'||'Y')
    {
        cout<<"Please Specify:"<<endl;
        getline (cin, AWUTDspecification);
    }
    else if (AWUTD=='n'||'N')
    {
        cout<<endl;
    }
    cout<<"Patients Name:"<<Pname<<endl;
    cout<<"Patients Age:"<<age<<endl;
    cout<<"Patients Current Employer:"<<CEmployer<<endl;
    cout<<"Work up to date:"<<AWUTD<<endl;
};
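For what it's worth, the symptom described (the specification prompt appearing even after entering n) comes from the condition `if(AWUTD=='y'||'Y')`. That parses as `(AWUTD=='y') || ('Y')`, and since 'Y' is a non-zero character constant it is always true, so the branch always runs. Each case has to be compared explicitly. A minimal sketch of the corrected test (the helper name is mine, not from the thread):

```cpp
// corrected test: compare the input against both cases explicitly;
// `AWUTD=='y'||'Y'` is always true because 'Y' alone is truthy
bool saidYes(char answer) {
    return answer == 'y' || answer == 'Y';
}

bool saidNo(char answer) {
    return answer == 'n' || answer == 'N';
}
```

In the program above that would mean writing `if (AWUTD=='y' || AWUTD=='Y')` and `else if (AWUTD=='n' || AWUTD=='N')`.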
https://www.daniweb.com/programming/software-development/threads/186035/help-with-if-statement-and-output
The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

Introduction

MariaDB is an open source version of the popular MySQL relational database management system (DBMS) with a SQL interface for accessing and managing data. It is highly reliable and easy to administer, which are essential qualities in a DBMS capable of serving modern applications. With Python's growing popularity in technologies like artificial intelligence and machine learning, MariaDB makes a good option for a database server for Python.

In this tutorial, you will connect a Python application to a database server using the MySQL connector. This module allows you to make queries on the database server from within your application. You'll set up MariaDB for a Python environment on Ubuntu 18.04 and write a Python script that connects to and executes queries on MariaDB.

Prerequisites

Before you begin this guide, you will need the following:

Step 1 — Preparing and Installing

In this step, you'll create a database and a table in MariaDB.

First, open your terminal and enter the MariaDB shell with the following command:

Once you're in the MariaDB shell, your terminal prompt will change. In this tutorial, you'll write Python to connect to an example employee database named workplace and a table named employees.

Start by creating the workplace database:

- CREATE DATABASE workplace;

Next, tell MariaDB to use workplace as your current database:

You will receive the following output, which means that every query you run after this will take effect in the workplace database:

Output
Database changed

Next, create the employees table:

- CREATE TABLE employees (first_name CHAR(35), last_name CHAR(35));

In the table schema, the columns first_name and last_name are specified as character strings (CHAR) with a maximum length of 35.

Following this, exit the MariaDB shell:
Following this, exit the MariaDB shell:

Back in the terminal, export your MariaDB authorization credentials as environment variables:

- export username="username"
- export password="password"

This technique allows you to avoid adding credentials in plain text within your script. You've set up your environment for the project. Next, you'll begin writing your script and connect to your database.

Step 2 — Connecting to Your Database

In this step, you will install the MySQL Connector and set up the database connection. In your terminal, run the following command to install the connector:

- pip3 install mysql-connector-python

pip is the standard package manager for Python. mysql-connector-python is the database connector Python module.

Once you've successfully installed the connector, create and open a new Python file:

In the opened file, import the os module and the mysql.connector module using the import keyword:

database.py
import os
import mysql.connector as database

The as keyword here means that mysql.connector will be referenced as database in the rest of the code.

Next, initialize the authorization credentials you exported as Python variables:

database.py
. . .
username = os.environ.get("username")
password = os.environ.get("password")

Follow up and establish a database connection using the connect() method provided by database. The method takes a series of named arguments specifying your client credentials:

database.py
. . .
connection = database.connect(
    user=username,
    password=password,
    host="localhost",
    database="workplace")

You declare a variable named connection that holds the call to the database.connect() method. Inside the method, you assign values to the user, password, host, and database arguments. For user and password, you reference your MariaDB authorization credentials. The host will be "localhost" by default if you are running the database on the same system.

Lastly, call the cursor() method on the connection to obtain the database cursor:

database.py
. . .
cursor = connection.cursor()

A cursor is a database object that retrieves and updates data, one row at a time, from a set of data. Leave your file open for the next step. Now you can connect to MariaDB with your credentials; next, you will add entries to your database using your script.

Step 3 — Adding Data

Using the execute() method on the database cursor, you will add entries to your database in this step. Define a function add_data() to accept the first and last names of an employee as arguments. Inside the function, create a try/except block. Add the following code after your cursor object:

database.py
. . .
def add_data(first_name, last_name):
    try:
        statement = "INSERT INTO employees (first_name,last_name) VALUES (%s, %s)"
        data = (first_name, last_name)
        cursor.execute(statement, data)
        connection.commit()
        print("Successfully added entry to database")
    except database.Error as e:
        print(f"Error adding entry to database: {e}")

You use the try and except block to catch and handle exceptions (events or errors) that disrupt the normal flow of program execution. Under the try block, you declare statement as a variable holding your INSERT SQL statement. The statement tells MariaDB to add values to the columns first_name and last_name. The syntax accepts data as parameters, which reduces the chances of SQL injection. Prepared statements with parameters ensure that only the given parameters are securely passed to the database as intended. Parameters are generally not injectable.

Next you declare data as a tuple with the arguments received by the add_data function. Proceed to run the execute() method on your cursor object, passing the SQL statement and the data. After calling the execute() method, you call the commit() method on the connection to permanently save the inserted data. Finally, you print out a success message if this succeeds.

In the except block, which only executes when there's an exception, you declare database.Error as e.
This variable will hold information about the type of exception or what event happened when the script breaks. You then proceed to print out an error message formatted with e using an f-string to end the block.

After adding data to the database, you'll next want to retrieve it. The next step will take you through the process of retrieving data.

Step 4 — Retrieving Data

In this step, you will write a SQL query within your Python code to retrieve data from your database. Using the same execute() method on the database cursor, you can retrieve a database entry. Define a function get_data() to accept the last name of an employee as an argument, which you will use with the execute() method and a SELECT SQL query to locate the exact row:

database.py
. . .
def get_data(last_name):
    try:
        statement = "SELECT first_name, last_name FROM employees WHERE last_name=%s"
        data = (last_name,)
        cursor.execute(statement, data)
        for (first_name, last_name) in cursor:
            print(f"Successfully retrieved {first_name}, {last_name}")
    except database.Error as e:
        print(f"Error retrieving entry from database: {e}")

Under the try block, you declare statement as a variable holding your SELECT SQL statement. The statement tells MariaDB to retrieve the columns first_name and last_name from the employees table when a specific last name is matched. Again, you use parameters to reduce the chances of SQL injection. Similarly to the last function, you declare data as a tuple with last_name followed by a comma. Proceed to run the execute() method on the cursor object, passing the SQL statement and the data. Using a for loop, you iterate through the returned rows in the cursor and then print out any successful matches.

In the except block, which only executes when there is an exception, declare database.Error as e. This variable will hold information about the type of exception that occurs. You then proceed to print out an error message formatted with e to end the block.

In the final step, you will execute your script by calling the defined functions.

Step 5 — Running Your Script

In this step, you will write the final piece of code to make your script executable and run it from your terminal.
Complete your script by calling add_data() and get_data() with sample data (strings) to verify that your code is working as expected. If you would like to add multiple entries, you can call add_data() with further sample names of your choice. Once you finish working with the database, make sure that you close the connection to avoid wasting resources with connection.close():

database.py
import os
import mysql.connector as database

username = os.environ.get("username")
password = os.environ.get("password")

connection = database.connect(
    user=username,
    password=password,
    host="localhost",
    database="workplace")

cursor = connection.cursor()

def add_data(first_name, last_name):
    try:
        statement = "INSERT INTO employees (first_name,last_name) VALUES (%s, %s)"
        data = (first_name, last_name)
        cursor.execute(statement, data)
        connection.commit()
        print("Successfully added entry to database")
    except database.Error as e:
        print(f"Error adding entry to database: {e}")

def get_data(last_name):
    try:
        statement = "SELECT first_name, last_name FROM employees WHERE last_name=%s"
        data = (last_name,)
        cursor.execute(statement, data)
        for (first_name, last_name) in cursor:
            print(f"Successfully retrieved {first_name}, {last_name}")
    except database.Error as e:
        print(f"Error retrieving entry from database: {e}")

add_data("Kofi", "Doe")
get_data("Doe")

connection.close()

Make sure you have indented your code correctly to avoid errors. In the same directory where you created the database.py file, run your script with:

You will receive the following output:

Output
Successfully added entry to database
Successfully retrieved Kofi, Doe

Finally, return to MariaDB to confirm you have successfully added your entries. Open up the MariaDB prompt from your terminal:

Next, tell MariaDB to switch to and use the workplace database:

After you get the success message Database changed, proceed to query for all entries in the employees table:

Your output will be similar to the following:

Output
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| Kofi       | Doe       |
+------------+-----------+
1 row in set (0.00 sec)

Putting it all together, you've written a script that saves and retrieves information from a MariaDB database. You started by importing the necessary libraries.
You used mysql-connector to connect to the database and os to retrieve authorization credentials from the environment. On the database connection, you retrieved the cursor to carry out queries and structured your code into add_data and get_data functions. With your functions, you inserted data into and retrieved data from the database. If you wish to implement deletion, you can build a similar function with the necessary declarations, statements, and calls.

Conclusion

You have successfully set up a database connection to MariaDB using a Python script on Ubuntu 18.04. From here, you could use similar code in any of your Python projects in which you need to store data in a database. This guide may also be helpful for other relational databases that were developed out of MySQL. For more on how to accomplish your projects with Python, check out other community tutorials on Python.
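The conclusion mentions that a deletion function can be built in the same style as add_data and get_data. Here is a sketch under that assumption; build_delete() is a hypothetical helper (not from the tutorial), separated out so the SQL construction can be checked without a live server:

```python
# Sketch of a deletion function in the same style as add_data/get_data.
# build_delete() is a hypothetical helper; it isolates the SQL construction
# so it can be exercised without a database connection.

def build_delete(last_name):
    """Return a parameterized DELETE statement and its data tuple."""
    statement = "DELETE FROM employees WHERE last_name=%s"
    data = (last_name,)  # a tuple, matching the single %s placeholder
    return statement, data

def delete_data(last_name):
    """Assumes the tutorial's module-level cursor, connection, and
    database (mysql.connector) objects already exist when called."""
    try:
        statement, data = build_delete(last_name)
        cursor.execute(statement, data)
        connection.commit()  # persist the deletion
        print("Successfully deleted entry from database")
    except database.Error as e:
        print(f"Error deleting entry from database: {e}")
```

As with the other functions, the parameterized statement keeps the last name from being interpolated directly into the SQL text.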
https://www.xpresservers.com/tag/store/
POINTERS

Definition:
§ A C pointer is a variable that stores/points to the address of another variable.
§ C pointers are used to allocate memory dynamically, i.e. at run time.
§ The variable pointed to might be of any data type such as int, float, char, double, short etc.
§ Syntax: data_type *var_name;
Example: int *p; char *p;
Where * is used to denote that "p" is a pointer variable and not a normal variable.

Key points to remember about pointers in C:
§ A normal variable stores a value, whereas a pointer variable stores the address of a variable.
§ The content of a C pointer is always a whole number, i.e. an address.
§ A C pointer should be initialized to null, i.e. int *p = NULL.
§ The value of a null pointer is 0.
§ The & symbol is used to get the address of a variable.
§ The * symbol is used to get the value of the variable that the pointer is pointing to.
§ If a pointer is assigned NULL, it means it is pointing to nothing.
§ The size of any pointer is 2 bytes (for a 16-bit compiler).
§ No two pointer variables should have the same name.
§ But a pointer variable and a non-pointer variable can have the same name.

Pointer Initialization:

Assigning a value to a pointer: It is not necessary to assign a value to a pointer. Only zero (0) and NULL can be assigned to a pointer; no other number can be assigned. Consider the following examples:

int *p=0;
int *p=NULL;

The above two assignments are valid.

int *p=1000;

This statement is invalid.

Assigning a variable to a pointer:

int x, *p;
p = &x;

This is nothing but the pointer variable p being assigned the address of the variable x. The address of the variables will be different every time the program is executed.

Reading a value through a pointer:

int x=123, *p;
p = &x;

Here the pointer variable p is assigned the address of variable x.

printf("%d", *p); will display the value of x, 123. This is reading a value through a pointer.
printf("%d", p); will display the address of the variable x.
printf("%d", &p); will display the address of the pointer variable p.
printf("%d", x); will display the value of x, 123.
printf("%d", &x); will display the address of the variable x.

Note: It is always a good practice to assign a pointer to a variable rather than 0 or NULL.

Pointer Assignments:

We can use a pointer on the right-hand side of an assignment to assign its value to another variable. Example:

int main()
{
    int var=50;
    int *p1, *p2;
    p1=&var;
    p2=p1;
}

Chain of pointers / Pointer to Pointer:

A pointer can point to the address of another pointer. Consider the following example:

int x=456, *p1, **p2; /* pointer-to-pointer */
p1 = &x;
p2 = &p1;

When a pointer points to another pointer, it is called a chain pointer. A chain pointer must be declared with ** as in **p2.

Manipulation of Pointers

We can manipulate a pointer with the indirection operator '*', which is known as the dereference operator. With this operator, we can indirectly access the content of the data variable.

Syntax: *ptr_var;

Example:

#include<stdio.h>
void main()
{
    int a=10, *ptr;
    ptr=&a;
    printf("\n The value of a is %d", a);
    *ptr=(*ptr)/2;
    printf("\n The value of a is %d", (*ptr));
}

Output:
The value of a is: 10
The value of a is: 5

Pointer Expressions & Pointer Arithmetic

C allows pointers to perform the following arithmetic operations:
A pointer can be incremented / decremented.
Any integer can be added to or subtracted from a pointer.

A pointer can be incremented / decremented. On a 16-bit machine, the size of all types of pointers is always 2 bytes.

Eg:
int a;
int *p;
p++;

Each time that a pointer p is incremented, it will point to the memory location of the next element of its base type. Each time that a pointer p is decremented, it will point to the memory location of the previous element of its base type.
int a, *p1, *p2, *p3;
p1 = &a;      /* assume &a is 1000, so p1 holds 1000 */
p2 = p1++;    /* post-increment: p2 gets 1000, then p1 becomes 1002 */
p3 = ++p1;    /* pre-increment: p1 becomes 1004, p3 gets 1004 */

(The addresses assume 2-byte integers: a post-increment assigns first and then advances the pointer, while a pre-increment advances the pointer first and then assigns.)

On a 32-bit machine, the size of all types of pointers is always 4 bytes. The pointer variable p refers to the base address of the variable a. We can increment the pointer variable with p++ or ++p. This statement moves the pointer to the next memory address.

Let p be an integer pointer with a current value of 2,000 (that is, it contains the address 2,000). Assuming 32-bit integers, after the expression p++; the contents of p will be 2,004, not 2,001! Each time p is incremented, it will point to the next integer. The same is true of decrements. For example, p--; will cause p to have the value 1,996, assuming that it previously was 2,000. Here is why: each time that a pointer is incremented, it will point to the memory location of the next element of its base type. Each time it is decremented, it will point to the location of the previous element of its base type.

Any integer can be added to or subtracted from a pointer.

/* Sum of two integers using pointers */
#include <stdio.h>
int main()
{
    int first, second, *p, *q, sum;
    printf("Enter two integers to add\n");
    scanf("%d%d", &first, &second);
    p = &first;
    q = &second;
    sum = *p + *q;
    printf("Sum of entered numbers = %d\n", sum);
    return 0;
}

Pointers and Arrays

An array name is a constant pointer that points to the base address of the array (i.e. the first element of the array). Elements of the array are stored in contiguous memory locations. They can be efficiently accessed by using pointers. A pointer variable can be assigned to an array. The address of each element increases by one factor depending upon the data type. The factor depends on the type of pointer variable defined.
If it is an integer pointer, the factor is 2. Consider the following example:

int x[5]={11,22,33,44,55}, *p;
p = x;  /* same as p = &x[0]; */

Remember, earlier the pointer variable was assigned with the address (&) operator. When working with an array, the pointer variable can be assigned as above (p = x;) or with an explicit element (p = &x[0];). Therefore the address operator is required only when assigning an individual array element.

Assume the address of x[0] is 1000; then the addresses of the other elements will be as follows:

x[1] = 1002
x[2] = 1004
x[3] = 1006
x[4] = 1008

The address of each element increases by a factor of 2. Since the size of an integer is 2 bytes, the memory address increases by 2 bytes; if the type were float it would increase by 4 bytes, and for double by 8 bytes. This uniform increase is called the scale factor.

p = &x[0];

Now the value of pointer variable p is 1000, which is the address of array element x[0]. To find the address of the array element x[1], just write the following statement:

p = p + 1;

Now the value of the pointer variable p is 1002, not 1001, because p is a pointer variable and the increment advances it by the scale factor of its base type; since it is an integer, it increases by 2. The p = p + 1; can also be written using the increment operator: ++p;

The values of the array elements can be read through the pointer variable using the scale factor. Consider the above example:

printf("%d", *(p+0)); will display the value of array element x[0], which is 11.
printf("%d", *(p+1)); will display the value of array element x[1], which is 22.
printf("%d", *(p+2)); will display the value of array element x[2], which is 33.
printf("%d", *(p+3)); will display the value of array element x[3], which is 44.
printf("%d", *(p+4)); will display the value of array element x[4], which is 55.
/* Displaying the values and addresses of the elements in the array */
#include<stdio.h>
void main()
{
    int a[6]={10, 20, 30, 40, 50, 60};
    int *p;
    int i;
    p=a;
    for(i=0;i<6;i++)
    {
        printf("%d", *p);  /* value of array element */
        printf("%u", p);   /* address of array element */
        p++;               /* advance to the next element */
    }
    getch();
}

/* Sum of elements in the array */
#include<stdio.h>
#include<conio.h>
void main()
{
    int a[10];
    int i, sum=0;
    int *ptr;
    printf("Enter 10 elements:\n");
    for(i=0;i<10;i++)
        scanf("%d", &a[i]);
    ptr = a;  /* a = &a[0] */
    for(i=0;i<10;i++)
    {
        sum = sum + *ptr;  /* *ptr = content pointed to by ptr */
        ptr++;
    }
    printf("The sum of array elements is %d", sum);
}

/* Sort the elements of an array using pointers */
#include<stdio.h>
int main(){
    int i, j, temp1, temp2;
    int arr[8]={5,3,0,2,12,1,33,2};
    int *ptr;
    for(i=0;i<7;i++){
        for(j=0;j<7-i;j++){
            if(*(arr+j)>*(arr+j+1)){
                ptr=arr+j;
                temp1=*ptr++;
                temp2=*ptr;
                *ptr--=temp1;
                *ptr=temp2;
            }
        }
    }
    for(i=0;i<8;i++){
        printf(" %d", arr[i]);
    }
}

Pointers and Multi-dimensional Arrays

The array name itself points to the base address of the array. Example:

int a[2][3];
int (*p)[3];  /* pointer to an array of 3 ints */
p=a;          /* p points to a[0] */

/* Displaying the values in the 2-D array */
#include<stdio.h>
void main()
{
    int a[2][2]={{10, 20},{30, 40}};
    int (*p)[2];
    int i, j;
    p=a;
    for(i=0;i<2;i++)
    {
        for(j=0;j<2;j++)
        {
            printf("%d", *(*(p+i)+j));  /* value of array element */
        }
    }
    getch();
}

Dynamic Memory Allocation

The process of allocating memory during program execution is called dynamic memory allocation.
https://www.brainkart.com/article/C-Pointer--C-Programming_6950/
So this is how Perl works: the programmer has to differentiate in his design and write the code appropriately. There is no difference in a Perl scalar variable between a string and an integer. If the value of the variable contains only numbers, it is treated as an integer and requires "==" for comparison. If any alphabetic characters are present in the variable, it is treated as a string. In all places, the second condition will work.

From the perlintro man page: Scalar values can be strings, integers or floating point numbers, and Perl will automatically convert between them as required.

See, among others, Exploiting Perl's idea of what is a number, Revealing difference in interpretation of 'number' between Perl and $other_language, is_numeric and replies thereto. Super Search is your friend.

Aside: IF($value1 == $value2) will give you problems. Perl is case-sensitive. Try if($value1 == $value2).

use strict;
use warnings;
my $str1 = "10";
my $str2 = "2";
print "$str1 lt $str2" if $str1 lt $str2;

Prints:

10 lt 2

which may not be what you expect. If you need to be smart about which operator you should use, then you may need to use a regular expression first to determine the nature of the two strings. Regexp::Common::number provides a set of expressions for dealing with numbers in various ways that may be of use to you. eq would work, however.

Is there any way to differentiate between a string and an integer or float for a Perl scalar variable, so that if it is an integer or float I should be doing if($value1 == $value2)? Is this adequate?

use Scalar::Util qw(looks_like_number);

my $equal = $x eq $y || eval {
    use warnings FATAL => qw( numeric );
    $x == $y
};

Doesn't check if either of the values are undef.

To summarize: a) everything in Perl is a string until used in a numeric context. b) Never, ever compare float values for equality in any language! c) If you compare $num1 < $num2 and both $num1 and $num2 can be converted to a number, then it is fine! If that is not true, then that's a problem.
I present here one of my subs that can compare text or numbers. This is not "perfect" and there are problems with, say, the "-" sign, but it gives the general idea.

sub alpha_num_cmp {
    my($a, $b) = @_;
    if (($a =~ m/\D/) || ($b =~ m/\D/)) {  # look for a non-digit
        return ($a cmp $b);                # if so, use string compare
    }
    return ($a <=> $b);                    # otherwise straight numeric comparison
}
http://www.perlmonks.org/?node_id=766000
I've been reviewing both Ivy and Maven features related to dependency management. Maven has a dependency management section. In this section, dependencies with rules (e.g. exclusions) can be setup. This information can then be referenced by child POMs by only including org#module as a dependency. The revision and rules for that dependency are inherited from the parent. Since Maven 2.0.9, the import scope was introduced to allow dependency management information to be referenced from any POM, not just the parent. In Ivy 2.1, I use a development time ivysettings.xml file in my stream and declare revision information there. This allows all projects under development in that stream to have consistent revisions. For example: ivysettings.xml junit.rev=4.8.2 ivy.xml <dependency org="junit" name="junit" rev="${junit.rev}" /> This works nicely. Unfortunately it doesn't match Maven import or POM inheritance because I don't inherit dependency rules (excludes/includes/artifacts). Even the inheritance for revision is nice, you only need to configure org#module in the child. Ivy 2.2 introduces parent Ivy files with extends tag. If I understand correctly, the child files inherits all dependencies. Considering 1 parent file in the stream for several modules under development, inheriting all dependencies is not what I am interested in. And I don't really want to manage "the UI layer parent file" and the "domain layer parent file". I want 1 parent file to define the architecture dependencies for all modules in the stream, but let the children choose which dependencies they use. So I guess I could simply be asking, can Ivy replicate Maven's import feature? However, I don't really think Maven's POM Inheritance or import scope is what I want either. If I understand Maven's parent POM/import feature, the responsibility for conflict resolution is on the POM author. 
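For reference, the Ivy 2.2 extends mechanism discussed above looks roughly like the following sketch (module and organisation names are hypothetical, and the exact attributes should be checked against the Ivy documentation):

```xml
<!-- parent-ivy.xml: the stream's "architecture" file -->
<ivy-module version="2.0">
  <info organisation="example" module="stream-parent"/>
  <dependencies>
    <dependency org="junit" name="junit" rev="4.8.2"/>
  </dependencies>
</ivy-module>

<!-- child ivy.xml: the extends element pulls in ALL parent dependencies -->
<ivy-module version="2.0">
  <info organisation="example" module="child-module">
    <extends organisation="example" module="stream-parent"
             revision="latest.integration"/>
  </info>
</ivy-module>
```

This all-or-nothing inheritance is exactly the behaviour described above that makes extends a poor fit for a single stream-wide parent file.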
I've had this idea for a couple years now regarding multi-module builds, I want to declare all external module dependencies in a single ivy file, we'll call that the external-modules-ivy.xml. I then want Ivy to resolve that module. Because the external-modules-ivy is resolved, special rules (include/exclude/artifact) are applied and conflicts resolved. So far so good ... all normal Ivy. Then from my child ivy module under development, when I declare a dependency to an external module I want the dependency branch to be obtained from a subset of information in the external-modules-ivy report. For a visualization, consider a module section in an Ivy Report.html, and now consider "importing a resolved module section". All modules under development are guaranteed to use the same revision of external modules because Ivy performed conflict resolution before the dependency was introduced. Finally, this could lead to very optimized resolve processing for modules under development. I'm very interested to hear the thoughts from this group. Thanks, Chris Nokes --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/201106.mbox/%3C384061.46023.qm@web120401.mail.ne1.yahoo.com%3E
Manage your Java imports. java-import-wiz works together with java-classpath-registry to find the namespaces to import. While not strictly required (e.g. organizing imports will still work), it's strongly recommended.

Place the cursor on the class and press ctrl-alt-i. If the class is unambiguous it will automatically be added to the import statements. If multiple possibilities exist, java-import-wiz will let you choose from a list.

You can organize imports by pressing ctrl-alt-o. There are two settings available to control the behavior of the import organizer, including whether top-level package groups (such as com. and java.) should be separated with an extra newline.

Works together with autocomplete-java-minus to insert imports after autocompleting. When autocompleting classes they will automatically be added to the import list.
https://github-atom-io-herokuapp-com.global.ssl.fastly.net/packages/java-import-wiz
charAt():- This is a predefined method of the String class present in the java.lang package. It is used to extract a specific character from a string at the particular index supplied by the programmer, and the index must be within the length of the string.

Syntax:- public char charAt(int index)

Algorithm for charAt():
step 1: read string a
step 2: initialize i=0
step 3: repeat through step 5 while (i < a.length())
step 4: print char i is a.charAt(i)
step 5: i=i+1
step 6: exit

Here is the Java example for charAt():

import java.util.*;
public class charAt {
    static void AtChar() {
        System.out.print("Enter a Sentence : ");
        Scanner s = new Scanner(System.in);
        String a = s.nextLine();
        for (int i = 0; i < a.length(); i++) {
            System.out.println("char " + i + " is " + a.charAt(i));
        }
        System.out.println();
    }
    public static void main(String[] args) {
        AtChar();
    }
}
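As a complementary sketch (not part of the original article): charAt() throws StringIndexOutOfBoundsException for an out-of-range index, and toCharArray() offers an alternative way to iterate over the characters.

```java
// Self-contained demo of charAt(): valid indexing, the exception thrown
// for an out-of-range index, and toCharArray() as an alternative loop.
public class CharAtDemo {
    public static void main(String[] args) {
        String a = "Java";

        // charAt with a valid index
        System.out.println("char 0 is " + a.charAt(0));   // char 0 is J

        // an index >= a.length() throws StringIndexOutOfBoundsException
        try {
            a.charAt(a.length());
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("index " + a.length() + " is out of range");
        }

        // toCharArray() visits each character without explicit indexing
        for (char c : a.toCharArray()) {
            System.out.print(c + " ");
        }
        System.out.println();
    }
}
```

The exception check is worth knowing because, unlike the algorithm above, real input may not guarantee the index stays within a.length().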
http://ecomputernotes.com/java/array/charat-in-java-example
Aside for Carlos: 'm' is short for 'module', and beefing up Python's option parsing to handle things like '--module' is a whole 'nother story. 'python -h' gives a decent description of all the options, though. Guido van Rossum wrote: >>Pros (-m over runpy.py, in order of significance as I see it): >> - easy to combine with other Python command line options > > The script approach can do this too Only by spelling out the invocation if we want something non-standard. Using the laptop I'm typing on as an example. . . Python 2.3 is the installed version, so "#!/usr/bin/env python" in a script will invoke Py 2.3 without any extra options If I (or a Python installer) were to create "/usr/local/bin/runpy.py" with that shebang line (i.e. the same as the one you posted), we have the following for profiling a sample script in my current directory with different versions of the interpreter: Installed: runpy.py profile demo.py Prompt After: python -i /usr/local/bin/runpy.py profile demo.py Alt install: python2.4 /usr/local/bin/runpy.py profile demo.py Build dir: ./python /usr/local/bin/runpy.py profile demo.py If we wanted to use the version of the script that came with the relevant version of python, those last two would become: Alt install*: python2.4 /usr/local/lib/python2.4/runpy.py profile demo.py Build dir: ./python Tools/scripts/runpy.py profile demo.py (* This is based on what happens to pydoc under 'make altinstall'. The shebang line is version agnostic, so it tries to use the Py2.3 interpreter with the Py2.4 library modules and it all falls apart. So to run a library module of the altinstall, this is how I have to do it. 
And this is assuming the script would get installed at all, which it may not, as it isn't actually a library module, unlike pydoc) Using -m, those become: Installed: python -m profile demo.py Prompt After: python -i -m profile demo.py Alt install: python24 -m profile demo.py Build dir: ./python -m profile demo.py >> - OS & environment independent > > So is the script -- probably more so, since the script can use > Python's OS independence layer. Paul's message goes into detail on what I meant here. The script itself is highly portable, the mechanism for invoking it really isn't. The C code in the patch is platform independent - _PyImport_FindModule (a trivial wrapper around the existing find_module in import.c) and PyRun_SimpleFileExFlags do all the heavy lifting. In fact, sans error-checking, the guts of PyRun_SimpleModuleFlags looks remarkably similar to the Python code in your script. (I initially thought the Python version might handle zip imports, while the C version didn't. However, a quick experiment shows that *neither* of them can handle zip imports. And the source code confirms it - imp.find_module and imp.load_module don't allow zip imports, and PyRun_Module in the patch doesn't allow them either. Amusingly, the current limitations of the imp module make it easier to support zip imports with the *C* version. Allowing imp.find_module to return a 4th value for the module loader would require adding an optional boolean argument to avoid breaking existing code, whereas the patch's new private C API for _PyImport_FindModule already exposes the loader argument) >> - more concise to invoke > Depends on the length of the name of the script. :-) See the examples above for what I meant with this one. For the vanilla case you're right, but as soon as we do anything slightly different, the story changes. >> - no namespace issues with naming a script > Actually, one-letter options are a much scarcer resource than script names. 
Well, with '-m' in place, we'd be using 17 out of the available 62 (upper & lower alpha, plus digits). The difference is that we're only competing with ourselves and the other Python interpreter authors for characters to use, whereas we're competing with all and sundry for unique executable names. (Windows isn't immune, either, given enough applications with directories on PATH. Although retaining the '.py' helps a lot there) (I have checked that Jython at least doesn't use '-m' for anything. I don't know about other interpreters) >> - C API for those embedding python > And who needs that? There's a reason this one was last on my list :) > Additional Pros for using a script: > - less code to maintain Once we factor in the additional packaging requirements to make the script as easy to use on all target platforms as -m would be, I think this one is at least arguable (script + packaging vs option-parsing and C function). > - can work with older Python versions > - shows users how to do a similar thing themselves with additional > features Certainly, dropping a version of this script into Tools/scripts in CVS couldn't hurt, regardless of whether or not '-m' gets adopted. The same would go for an entry in the Python cookbook. > (e.g. a common addition I expect will be to hardcode a > personal sys.path addition) Except that this feature isn't so much for your *own* scripts, as it is for installed modules that are also usable as scripts (like profile and pdb). In the former case, you *know* where those scripts are (for me ~/script_name usually does the trick on *nix). In the latter case, though, it'd be nice to be able to use these things easily 'out of the box'. For those who want to tweak the search behaviour, all the usual environment variables apply (PYTHONPATH in particular). 
Heck, there's nothing to stop someone from doing something like the following if they really want to: python -m my_run_module_script some_other_module The command line interface is one of the major holdouts in Python where we really need to care where the source file for a module lives. It'd be nice to change that. Cheers, Nick. -- Nick Coghlan Brisbane, Australia
https://mail.python.org/pipermail/python-dev/2004-October/049236.html
09 January 2012 03:14 [Source: ICIS news] By Quintella Koh SINGAPORE (ICIS)--2-ethylhexanol (2-EH) prices in Asia may come under considerable pressure in 2012, as China’s 2-EH import requirement is estimated to decline in the wake of new world-scale capacities that the country is building, industry sources said. Several regional producers and traders said that as Asian 2-EH sellers typically hold a stronger bargaining power, as the petrochemical product is net deficit in “Sellers will however experience a shift of bargaining power moving into 2012 and 2013, as According to Chemease, an ICIS service in A northeast Asian producer said based on its company’s internal data, it estimates that The producer estimates that ICIS-Chemease data shows that in 2011, 2-EH is used primarily in the production of dioctyl phthalate (DOP). DOP is added as a plasticiser in the production of polyvinyl chloride (PVC). PVC is used heavily in the manufacture of construction materials such as door and window frames, pipes and cables. In 2011, Demand from the plasticisers industry will account for around 87% of the demand for 2-EH, while the demand from 2-ethylhexyl acrylate (2-EHA) will account for approximately 10% of the demand for 2-EH. A bulk of new capacity of 2-EH will be added in is Shandong Lanfan Chemical is expected to start up its 140,000 tonne/year 2-EH plant this year. Meanwhile, Shandong Hualu Hengsheng Group, Shandong Luxi Chemical Co and Xingxia Petrochina are scheduled to start up new 2-EH plants in the period 2012 to 2013. These three plants can each produce 140,000 tonne/year of 2-EH. Other provinces that will house new 2-EH capacities include In Heilongjiang, Daqing Petrochemical Company is scheduled to complete its 2-EH expansion this year. The company aims to increase its 2-EH capacity to 125,000 tonne/year from 50,000 tonne/year, ICIS-Chemease data shows.
In Sichuan, China National Petroleum Corporation (CNPC) aims to bring onstream a new 80,000 tonne/year 2-EH plant in 2013.

Asian producers had grappled with weak profit margins for most of last year. In 2011, Asian producers saw their profit margins plummet from an annual peak of $394.70/tonne (€312/tonne) on 18 February to minus $43.95/tonne on 2 December (please see chart below).

For each tonne of 2-EH produced, 800kg of propylene and 200kg of naphtha are required. In addition, an average conversion cost of $250/tonne is added onto the production cost.

Asian producers said 2-EH's performance at the start of this year appears to be a harbinger of tough times to come. On 6 January, Asian producers started the year with profit margins of $64/tonne, a far cry from 2011, when they started the year with profit margins of $388.90/tonne.

"Producers usually make very good profits selling 2-EH as the product is structurally very tight in Asia."

As testament to the tough times producers are anticipating, there were no new projects announced elsewhere in Asia.

"No one is willing to take the risk to invest in new projects or expand on their capacities now. Most producers will probably be in a wait-and-see mode in 2012 and 2013 to ascertain how the situation in China develops."

($1 = €0.79)

Please visit the complete ICIS plants and projects database for more information on 2-EH.
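The cost arithmetic described above (800kg of propylene, 200kg of naphtha and a $250/tonne conversion cost per tonne of 2-EH) can be sketched as a simple margin calculation. The feedstock and product prices used below are illustrative placeholders, not figures from this article:

```python
# Margin sketch for the 2-EH cost structure described above.
# All prices are in USD/tonne; the example inputs are illustrative only.
def eh_margin(eh_price, propylene_price, naphtha_price, conversion_cost=250.0):
    # Per tonne of 2-EH: 0.8 t propylene + 0.2 t naphtha + conversion cost.
    production_cost = (0.8 * propylene_price
                       + 0.2 * naphtha_price
                       + conversion_cost)
    return eh_price - production_cost

print(eh_margin(1500.0, 1200.0, 950.0))  # 1500 - (960 + 190 + 250) = 100.0
```

Fed with real weekly price assessments, the same formula yields the kind of margin series the article cites, from a $394.70/tonne peak down to a negative margin.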
http://www.icis.com/Articles/2012/01/09/9520465/outlook-12-asia-2-eh-prices-seen-bearish-as-chinas-import-drops.html
I am trying to quickly send serial commands to an Arduino via pyserial in GHPython on Mac. I have achieved this, but at a very slow speed. I realize Firefly has this function, but we Mac users cannot run that program. Here is a video of my script turning an LED off and on based on a GH boolean toggle:

In order for this to work I had to add a 2-second wait before sending the serial data. I think this is because the GHPython component opens a new serial connection each time the script is updated. Does anyone know of a way to speed this transfer up?

Here is my GHPython code:

import rhinoscriptsyntax as rs
import serial
import time

arduino = serial.Serial('/dev/tty.usbmodem14101', 9600)
c = str(y)
time.sleep(2)
arduino.write(bytes(c + '\n', 'utf-8'))

Here is my Arduino code:

String inByte;

void setup() {
  pinMode(13, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    inByte = Serial.readStringUntil('\n');
    if (inByte.toInt() == 0) {
      digitalWrite(13, LOW);
    }
    if (inByte.toInt() == 1) {
      digitalWrite(13, HIGH);
    }
  }
}

Thanks,
Tristan
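One common fix for this kind of delay (untested on the poster's setup) is to open the serial port once and cache the connection between GHPython executions; inside Rhino the cache would typically be scriptcontext.sticky. Below is a minimal sketch of the pattern using a plain dict and a stand-in serial class so it runs without hardware; the port name is taken from the question:

```python
# Sketch: cache the serial connection between GHPython runs so the port is
# opened once, not on every solution update. In Rhino the cache would be
# scriptcontext.sticky; here a plain dict and a stand-in serial class are
# used so the pattern runs without hardware.
sticky = {}

class FakeSerial:
    """Stand-in for serial.Serial; counts how often a port is opened."""
    opened = 0

    def __init__(self, port, baudrate):
        FakeSerial.opened += 1
        self.port, self.baudrate = port, baudrate

    def write(self, data):
        return len(data)

def send(value, cache, port='/dev/tty.usbmodem14101', baudrate=9600):
    if 'arduino' not in cache:  # open the port only on the first run
        cache['arduino'] = FakeSerial(port, baudrate)
    return cache['arduino'].write((str(value) + '\n').encode('utf-8'))

send(1, sticky)
send(0, sticky)
print(FakeSerial.opened)  # 1 -- the port was opened once, not twice
```

In the real component you would swap FakeSerial for serial.Serial and the dict for scriptcontext.sticky, which removes the need for the 2-second sleep after the first run.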
https://discourse.mcneel.com/t/speeding-up-pyserial-arduino-communication-on-mac/94076
Sol

Sol is a C++ library binding to Lua. It currently supports Lua 5.2. Sol aims to be easy to use and easy to add to a project. At this time, the library is header-only for easy integration with projects.

A small userdata example:

    #include <sol.hpp>
    #include <cassert>

    struct vars {
        int boop = 0;
    };

    int main() {
        sol::state lua;
        lua.new_userdata<vars>("vars", "boop", &vars::boop);
        lua.script("beep = vars.new()\n"
                   "beep.boop = 1");
        assert(lua.get<vars>("beep").boop == 1);
    }

More examples are given in the examples directory.

Features

- Supports retrieval and setting of multiple types, including std::string.
- Lambda, function, and member function bindings are supported.
- Intermediate type for checking whether a variable exists.
- Simple API that completely abstracts away the C stack API. operator[]-style manipulation of tables is provided.
- Support for tables.

Supported Compilers

Sol makes use of C++11 features. GCC 4.7 and Clang 3.3 or higher should be able to compile it without problems. However, the officially supported compilers are:

- GCC 4.8.0
- GCC 4.9.0
- Clang 3.4

Visual Studio 2013 with the November CTP could possibly compile it, despite not being explicitly supported. The last version to support Visual Studio 2013 was tag v1.1.0. To retrieve that tagged version, just do git checkout v1.1.0.

Caveats

Because of how this library is used compared to the C API, the Lua stack is completely abstracted away. Not only that, but all Lua errors are thrown as exceptions instead. This allows you to handle errors gracefully without being forced to exit. It should be noted that the library itself depends on lua.hpp being found by your compiler. It uses angle brackets, e.g. #include <lua.hpp>.

License

Sol is distributed under an MIT License. You can see LICENSE.txt for more info.
https://devhub.io/repos/Rapptz-sol
In this section, we are going to look at the things which make up a program, using the make utility, which helps to compile programs. The last section discusses standardisation.

You need to understand the steps which are taken on the way from C source code to an executable. These are all done automatically by your compiler, but the compiler actually takes several steps: preprocessing, compiling, assembling and linking. An executable is the result.

Suppose you write a program main.c which runs in the background and does stuff at regular times. This program runs completely in the background and has no terminal to which it can write error messages. So you write a couple of functions which open and close a logfile and write error messages to this logfile. The regular procedure is then to write a so-called header file. In this header file (which ends with .h), all functions are declared. The purpose of log.h is to let the compiler (not the linker) check whether you use the functions in log.c in the right manner. Besides that, it is a central place to put your structs and typedefs in. It is also used by commercial developers to protect their code: they give you the header files and the compiled library. So now we have the files main.c, log.c and log.h.
Let us have a look at their layout (contents are not important at this point):

/* main.c */
#include <stdio.h>
#include <stdlib.h>
#include "log.h"

int main(void)
{
    log_open("logfile.log");
    do {
        /* do stuff */
        if (error == TRUE) {
            log_write("error doing stuff");
        }
    } while (1);
    return 0;
}

/* log.c */
#include <stdio.h>
#include <stdlib.h>
#include "log.h"

void log_open(const char* logfile_name)
{
    /* open the log */
}

void log_write(const char* error_message)
{
    /* append the error message to the logfile */
}

void log_close()
{
    /* close this logfile */
}

/* log.h */
#ifndef _LOG_H_
#define _LOG_H_

void log_open(const char*);
void log_write(const char*);
void log_close(void);

#endif

The precompiler directives (the commands starting with a #) make sure the header file is only included once. The first line says "if the macro _LOG_H_ is not defined, do everything below until we encounter the #endif". The second line defines this macro _LOG_H_.

The job of the make utility is to make compiling and linking easier and smarter. Suppose a large program consisting of several source files were compiled in full every time a change is made. That would be a waste, because unchanged source files do not need to be recompiled. This is where make comes in: when you run it, it looks at the so-called makefile. This file describes what should be compiled. When recompiling, make checks whether the source file of each object file has been changed. If not, it skips the compiling work. In this way, large projects do not need to be recompiled from scratch every time, saving time and resources.

Now we will look at the makefile, which is best described by looking at using make with the Server class (see the Sockets section).
Below is the makefile of the Server class (the real makefile is not numbered; the numbers are just for easy referencing):

1) WARN = -Wall -Wstrict-prototypes
2) FLAGS = -ansi -pedantic -O2 $(WARN) -D_REENTRANT
3) CC = gcc
4) LIBS = -lpthread
5)
6) all: testServer
7)
8) testServer: testServer.o SimpleServer.o Server.o
9)     $(CC) -o testServer testServer.o SimpleServer.o Server.o $(LIBS) $(FLAGS)
10)
11) testServer.o: testServer.cpp
12)     $(CC) -c testServer.cpp $(FLAGS)
13)
14) SimpleServer.o: SimpleServer.cpp
15)     $(CC) -c SimpleServer.cpp $(FLAGS)
16)
17) Server.o: Server.cpp
18)     $(CC) -c Server.cpp $(FLAGS)
19)
20) clean:
21)     echo Cleaning up files . . .
22)     # WATCH OUT WITH THIS LINE!!
23)     rm testServer *.o core *.bak *.aux *.log -f

The first four lines set some variables. This makes adapting the makefile to another platform easy. In the first two lines, flags are defined for compiling. The third line defines the compiler. The fourth line defines which non-standard libraries should be linked in.

The lines after the variables are called rules. Each rule has two parts. The first line lists what must be present before the second line is executed. This list forms the dependencies. Let us jump ahead and look at line 8. It says that in order to execute line 9, testServer.o, SimpleServer.o and Server.o must be present. In other words, testServer is dependent on testServer.o, SimpleServer.o and Server.o. make looks for a rule to make testServer.o and jumps to line 11. In words, it says here: "in order to make testServer.o, the file testServer.cpp must be present". Well, it is, so make executes the code on line 12. After that, the other two dependencies of line 8 are handled, so lines 15 and 18 are executed. Then, when all dependencies listed on line 8 are done, line 9 is executed.

Note line 6. When make is executed without any arguments, the first rule is executed. Here it just lists testServer as a dependency, which tells make to jump to line 8.
Now note line 20: no dependency is listed. When make clean is typed, lines 21 to 23 are executed. Handy for cleaning up core dumps and old object files!

Very important: every line which should be executed (lines 9, 12, 15, 18, 21 and 23 in the example above) must begin with the Tab character!

When portability is an issue, it is important to adhere to the two standards ANSI/ISO C and POSIX. If you want to use a certain library function, check your man page for the standards it conforms to. When the function does not comply with a standard, write a wrapper function to make porting easy.

What is called ANSI C was standardised by the American National Standards Institute in standard ANSI X3.159-1989. The International Organization for Standardization adopted that standard as ISO/IEC 9899. In 1995 and 1996, ISO added three documents containing minor additions and corrections. When the term ANSI C is used in this toolkit, we also take these three later ISO additions into account. The ANSI C standard specifies the syntax and some standard libraries, and subjects the compiler to certain demands (for example, what should be an error and what should merely raise a warning). The most important thing for the programmer is that you may not use identifiers that are used in the standard libraries. To compile according to all mentioned demands, the gcc compiler accepts the flags -ansi and -pedantic.

The IEEE POSIX family of standards is a superset of the ANSI C standard. Besides ANSI C, it specifies demands on the more low-level functions. POSIX is divided into two parts: POSIX.1 (IEEE Std 1003.1-1990) describes the process environment, files and directories, system databases, terminal I/O and archive formats. The later 1996 version adds realtime extensions and the pthreads API. POSIX.2 (IEEE Std 1003.2-1992) is called the POSIX Shell and Utilities standard; it covers the shell functionality and more than one hundred utilities which should be present and conform to certain demands.
POSIX.1 is very important for compatibility; if a system call is not POSIX compliant, most of the time it's best to use a POSIX alternative. (POSIX stands for Portable Operating System Interface for Computer Environments).
https://www.vankuik.nl/Makefiles
Bugs item #760764, was opened at 2003-06-25 22:52
Message generated for change (Comment added) made by joergl
You can respond by visiting:

Category: None
Group: None
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Gert Ingold (gertingold)
Assigned to: Jörg Lehmann (joergl)
Summary: TFMError "width_index should not be zero"

Initial Comment:

Hello,

I am trying to use a commercial font (Garamond Standard from Softmaker) with PyX 0.3.1. Unfortunately, a TFMError "width_index should not be zero" is raised. A closer look at the TFM files shows indeed that CHARWD is zero for certain characters, namely "17x in the T1 encoding (8t) and "17x and "1Fx in the TS1 encoding (8c). Character "17x is defined as /compoundwordmark while "1Fx is /ffl. The font tables in CTAN:fonts/utilities/fontinst-prerelease/doc/talks/et99-font-tables.pdf don't show glyphs for compoundwordmark in the T1 and TS1 encodings and for the ffl ligature in the TS1 encoding (of course, the ffl ligature is present in the T1 encoding).

The same problem arises if I use the package mathptmx, while the use of the palatino package leads to:

After parsing this message, the following was left:
*(/usr/TeX/texmf/tex/latex/psnfss/ot1phv.fd)

If the described effect is caused by PyX checking the CHARWD's of all characters, this should, in my opinion, be considered a bug.

Best regards,
Gert

----------------------------------------------------------------------

>Comment By: Jörg Lehmann (joergl)
Date: 2003-08-26 13:39

Logged In: YES
user_id=390410

This problem should be fixed in PyX 0.4.

Thanks,
Jörg

----------------------------------------------------------------------

Comment By: Gert Ingold (gertingold)
Date: 2003-06-27 22:04

Logged In: YES
user_id=809523

One more comment concerning the workaround proposed by André: Things become of course more complicated if reencoding takes place (e.g. when the map file contains the statement "TeXBase1Encoding ReEncodeFont"). What was meant to be a ß (germandbls), i.e.
character 255 in EC, will turn out as ydieresis in TeXBase1. In order to obtain a ß, one would have to ask for a \SS=Germandbls=character 223. If this is not done, the glyph is not even extracted from the pfb-file, so that changing the eps-file would not be an option anymore.

----------------------------------------------------------------------

Comment By: Gert Ingold (gertingold)
Date: 2003-06-27 21:04

Logged In: YES
user_id=809523

The changes to text.py fixed the original problem. However, there is still a problem (basically already described by André in his message of June 26). Apparently, the map-files are not evaluated, so that replacement fonts are used. This behavior can already be obtained with the standard TeX fonts and \usepackage[T1]{fontenc}. The solution suggested by André is not very practical, so that the question mark in "To be implemented ... ?!" should be replaced by (at least two) exclamation marks.

----------------------------------------------------------------------

Comment By: André Wobst (wobsta)
Date: 2003-06-26 10:43

Logged In: YES
user_id=405853

Fix looks fine to me. I can get marvosym running by that (which didn't work previously due to the same bug). Consider this minimal example:

from pyx import *
text.set(mode="latex")
text.preamble("\usepackage{marvosym}")
c = canvas.canvas()
c.text(0, 0, "\EUR")
c.writetofile("test")

Unfortunately you have to do two things to get this working:

1. You have to copy marvosym.pfb to fmvr8x.pfb (the name used internally).
2. You have to modify the PostScript output test.eps, namely you have to change the selectfont from

/FMVR8X 9.962640 selectfont

to

/Martin_Vogels_Symbole 9.962640 selectfont

The information can be found in marvosym.map.

To be implemented ... ?!

----------------------------------------------------------------------

Comment By: Jörg Lehmann (joergl)
Date: 2003-06-26 10:31

Logged In: YES
user_id=390410

Please check whether the changes made in the CVS head fix this problem.
Jörg

----------------------------------------------------------------------

Comment By: André Wobst (wobsta)
Date: 2003-06-26 08:52

Logged In: YES
user_id=405853

I'm just considering the problem of loading fd-files triggered by TeX when processing some expressions. By default, those messages are errors in PyX 0.3.1. I've just checked in a special handler for font description files, so fd-file loading is not considered to be an error anymore. Besides that, your report about tfm-file handling is still open. PyX might be just too strict in that sense.

----------------------------------------------------------------------

You can respond by visiting:

Bugs item #795271, was opened at 2003-08-26 11:35
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Gert Ingold (gertingold)
Assigned to: Nobody/Anonymous (nobody)
Summary: wrong token in map file

Initial Comment:

With PyX 0.4 I got the following error message:

Traceback (most recent call last):
  File "./plot", line 2, in ?
    from pyx import *
  File "/usr/lib/python2.2/site-packages/pyx/canvas.py", line 41, in ?
    import attrlist, base, bbox, helper, path, unit, prolog, text, trafo, version
  File "/usr/lib/python2.2/site-packages/pyx/text.py", line 504, in ?
    fontmap = readfontmap(["psfonts.map"])
  File "/usr/lib/python2.2/site-packages/pyx/text.py", line 497, in readfontmap
    fontmapping = FontMapping(line)
  File "/usr/lib/python2.2/site-packages/pyx/text.py", line 454, in __init__
    raise RuntimeError("wrong syntax in font catalog file 'psfonts.map'")
RuntimeError: wrong syntax in font catalog file 'psfonts.map'

The offending line in psfonts.map seems to be:

Cheq Cheq <cheq.ps

Maybe PyX could treat this line more gracefully even if the syntax should be incorrect (??), in particular since the cases where this chessboard font is needed are certainly rare.
PyX 0.4 will stop regardless of whether this font is needed or not.

----------------------------------------------------------------------

You can respond by visiting:

Hi, 'just got
https://sourceforge.net/p/pyx/mailman/pyx-devel/?style=flat&viewmonth=200308
Basics of Leo

“Edward... you’ve come up with perhaps the most powerful new concept in code manipulation since VI and Emacs.”—David McNab

This chapter introduces Leo’s basic operations for creating and changing outlines. Commands can be executed using keystrokes, or by name.

Contents

- The Basics of Leo
  - Command names
  - Leo’s main window
  - File operations & sessions
  - Switching focus
  - Operations on nodes
  - Selecting outline nodes
  - Moving the cursor in text panes
  - The minibuffer & completions
  - Finding & replacing text
  - Undoing and redoing changes
  - Getting help
  - Leo directives
  - Configuring Leo
  - Plugins
  - Creating external files with @file and @all
  - Summary

Command names

Every Leo command has a command name. In this document, keystrokes that invoke a command will be followed by the command name in parentheses. For example, Ctrl-S (save-file) saves a Leo file.

Alt-X (full-command) - Executes any other command by typing its full name. For full details see The minibuffer & completions.

Leo’s main window

Here is a slightly reduced screenshot of Leo’s main window:

Leo’s main window consists of an icon area just below the menus, an outline pane at the top left, a log pane at the top right, a body pane at the bottom left, and an optional viewrendered pane at the bottom right. The minibuffer and status line lie at the bottom of the main window.

The log pane contains several tabs. The Log tab shows messages from Leo; the Find tab shows the status of Leo’s Find/Replace commands. Other tabs may also appear in the log pane: the Spell tab controls Leo’s spell-checking, and the Completion tab shows available typing completions.

Leo stores all data in nodes. Nodes have headlines, shown in the outline pane, and body text. The body pane shows the body text of the presently selected node, the node whose headline is selected in the outline pane. Headlines have an icon box indicating a node’s status. For example, the icon box has a black border when the node has been changed.
File operations & sessions

Here are Leo’s basic file commands:

Ctrl-N (new) - Creates a new outline in a new tab.
Ctrl-O (open-outline) - Opens an existing .leo file.
Ctrl-S (save-file) - Saves the outline.
Ctrl-Q (exit-leo) - Exits Leo. Leo will prompt you to save any unsaved outlines.

A session specifies a list of tabs (.leo files) that Leo opens automatically when Leo first starts. When the --session-save and --session-restore command-line options are in effect, Leo will save session data on exit and will reload outlines when Leo restarts. For full details, see Using sessions in Leo’s Users Guide.

Switching focus

Here’s how to switch focus without using the mouse:

Alt-0 (vr-toggle) - Hides or shows the viewrendered pane.
Alt-T (focus-to-tree) - Puts focus in the outline pane, regardless of focus.
Ctrl-T (toggle-active-pane) - Toggles focus between the outline and body panes.
Ctrl-Tab (tab-cycle-next) - Switches between outline tabs. You may open multiple Leo outlines in different tabs within the same main window.
Ctrl-G (keyboard-quit) - Puts focus in the body pane. More effective than hitting Alt-Tab twice.

Operations on nodes

Ctrl-I or Insert (insert-node) - Inserts a new node into the outline.
Ctrl-H (edit-headline) - Begins editing the headline of the selected node.
Return - When focus is in the outline pane, <Return> ends editing (end-edit-headline) or switches focus to the body pane.
Ctrl-Shift-C (copy-node) - Copies the node and all its descendants, placing them on the clipboard.
Ctrl-Shift-X (cut-node) - Cuts the node and all its descendants, placing them on the clipboard.
Ctrl-Shift-V (paste-node) - Pastes a node (and its descendants) from the clipboard after the presently selected node.
Ctrl-M (mark) - Toggles the mark on a node. Marked nodes have a vertical red bar in their icon area.
Ctrl-} (demote) - Makes all following siblings of a node children of the node. Use demote to “gather” nodes so they can all be moved with their parent.
Ctrl-{ (promote) - Makes all the children of a node siblings of the node. Use promote to “scatter” the nodes after moving their parent.

Selecting outline nodes

You may select, expand and contract outline nodes with the mouse as usual, but using the arrow keys is highly recommended. When focus is in the outline pane, plain arrow keys change the selected node:

Right-arrow (expand-and-go-right) - Expands a node or selects its first child.
Left-arrow (contract-or-go-left) - Contracts a node if its children are visible, and selects the node’s parent otherwise.
Up-arrow (goto-prev-visible) - Selects the previous visible outline node.
Down-arrow (goto-next-visible) - Selects the next visible outline node.

Regardless of focus, Alt-arrow keys select outline nodes:

Alt-Home (goto-first-visible-node) - Selects the first outline node and collapses all nodes.
Alt-End (goto-last-visible-node) - Selects the last visible outline node and collapses all nodes except the node and its ancestors.
Alt-arrow keys - Select the outline pane, and then act just like the plain arrow keys when the outline pane has focus.

Moving the cursor in text panes

When focus is in any of Leo’s text panes (body pane, log pane, headlines), Leo works like most text editors:

Plain arrow keys move the cursor up, down, left or right.
Ctrl-LeftArrow and Ctrl-RightArrow move the cursor by words.
Home and End move the cursor to the beginning or end of a line.
Ctrl-Home moves the cursor to the beginning of the body text.
Ctrl-End moves the cursor to the end of the body text.
PageDown and PageUp move the cursor up or down one page.

Note: As usual, adding the Shift key modifier to any of the keys above moves the cursor and extends the selected text.

The minibuffer & completions

Leo’s minibuffer appears at the bottom of Leo’s main window. You use the minibuffer to execute commands by name, and also to accumulate arguments to commands. Alt-X (full-command) puts the cursor in the minibuffer.
You could type the full command name in the minibuffer, followed by the <Return> key to invoke the command, but that would be way too much work. Instead, you can avoid most typing using tab completion. With tab completion, there is no need to remember the exact names of Leo’s commands.

For example, suppose you want to print out the list of Leo’s commands. You might remember only that there are several related commands and that they all start with “print”. Just type:

<Alt-X>pri<Tab>

You will see print- in the minibuffer. This is the longest common prefix of all the command names that start with pri. The Completion tab in the log pane shows all the commands that start with print-. Now just type:

c<Tab>

You will see the print-commands command in the minibuffer. Finally, <Return> executes the print-commands command. The output of the print-commands command appears in the commands tab, and focus returns to the body pane.

Very important: Leo has hundreds of commands, but because of tab completion you do not have to remember, or even know about, any of them. Feel free to ignore commands that you don’t use.

Opening files using filename completion:

file-open-by-name - Prompts for a filename. This command completes the names of files and directories as in command completion. As a result, this command can be very fast. You may want to bind this command to Ctrl-O instead of the default open-outline command.

Command history:

Ctrl-P (repeat-complex-command) - Repeats the last command entered by name in the minibuffer.
UpArrow (in the minibuffer) - Moves backward through command history. The first UpArrow is the same as Ctrl-P.
DownArrow (in the minibuffer) - Moves forward through command history.

Summary:

<Return> executes the command.
<Tab> shows all valid completions.
<BackSpace> shows more alternatives.
Ctrl-G exits the minibuffer and puts focus in the body pane.
UpArrow and DownArrow in the minibuffer cycle through command history.
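The longest-common-prefix behaviour described above can be sketched in a few lines of Python; the command names below are a small illustrative subset, not Leo’s actual command table:

```python
import os

# A handful of command names standing in for Leo's full command table.
COMMANDS = ['paste-node', 'print-bindings', 'print-commands', 'print-settings']

def complete(typed):
    """Return (longest common prefix, matching commands) for a typed prefix."""
    matches = [name for name in COMMANDS if name.startswith(typed)]
    return os.path.commonprefix(matches), matches

prefix, matches = complete('pri')
print(prefix)   # print-
print(matches)  # ['print-bindings', 'print-commands', 'print-settings']
```

Typing a further c narrows the matches to a single command, which is why print-c<Tab> completes all the way to print-commands.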
Finding & replacing text

This section explains how to use Leo’s standard search/replace commands. Note: you can also use the Nav tab (in the Log pane) to search for text.

Ctrl-F (start-search) shows the Find tab and puts the focus in the text box labeled Find:.

Aside: You can select radio buttons and toggle check boxes in the Find tab with Ctrl-Alt keys. The capitalized letters of the radio buttons or check boxes indicate which key to use. For example, Ctrl-Alt-X (toggle-find-regex-option) toggles the regeXp checkbox.

After typing Ctrl-F, type the search string, say def, in the text box. Start the find command by typing <Return>.

But suppose you want to replace def with foo, instead of just finding def. Just type <Tab> before typing <Return>. Focus shifts to the text box labeled Replace:. Finally, type <Return> to start the find-next command. When Leo finds the next instance of def, it will select it. You may now type any command. The following are most useful:

Ctrl-minus (replace-then-find) replaces the selected text.
F3 (find-next) continues searching without making a replacement.
F2 (find-previous) continues the search in reverse.
Ctrl-G (keyboard-quit) ends the search.

Undoing and redoing changes

Leo has unlimited undo–Leo remembers all changes you make to outline structure or the contents of any node since you restarted Leo.

Ctrl-Z (undo) - Undoes the last change. Another Ctrl-Z undoes the previous change, etc.
Ctrl-Shift-Z (redo) - Undoes the effect of the last undo, etc.

The first two entries of the Edit menu show what the next undo or redo operation will be.

Getting help

F1 (help) - Shows a help message in the viewrendered pane. Alt-0 (vr-toggle) hides or shows this pane.
F11 (help-for-command) - Shows the documentation for any Leo command. F11 prompts for the name of a Leo command in the minibuffer. Use tab completion to see the list of all commands that start with a given prefix.
F12 (help-for-python) - Shows the documentation from Python’s help system. Typing completion is not available: type the full name of any Python module, class, function or statement.

These commands clarify which settings are in effect, and where they came from:

print-bindings
print-settings

These commands discuss special topics:

help-for-abbreviations
help-for-autocompletion
help-for-bindings
help-for-creating-external-files
help-for-debugging-commands
help-for-drag-and-drop
help-for-dynamic-abbreviations
help-for-find-commands
help-for-minibuffer
help-for-regular-expressions
help-for-scripting
help-for-sessions

Using Leo, especially for programming, requires some learning initially. Please feel free to ask for help at any time.

Leo directives

Directives control Leo’s operations. Directives start with @ in the leftmost column. Directives may appear either in headlines or body text. When people speak of an @x node, they are implying that its headline starts with @x. If a node contains an @x directive (in the body pane), they will usually say something like “a node containing an @x directive”. Directives apply until overridden by the same (or a related) directive in a descendant node.

Some commonly used general-purpose directives:

@color
@killcolor
@nocolor

These control whether to syntax-color text. Nodes may contain multiple color directives. Nodes containing multiple color directives do not affect the coloring of descendant nodes.

@language python
@language c
@language rest  # restructured text
@language plain # plain text: no syntax coloring.

These control which language to use when syntax coloring text.

@pagewidth 100

Sets the page width used when formatting paragraphs.

@tabwidth -4
@tabwidth 8

Sets the width of tabs. Negative tab widths cause Leo to convert tabs to spaces and are highly recommended for Python programming.

@nowrap
@wrap

These enable or disable line wrapping in the body pane.
Configuring Leo

Leo has a flexible (perhaps too flexible) configuration system. It’s best to use this flexibility in a restricted way, as follows:

- The file leo/config/leoSettings.leo contains Leo’s default global settings. Don’t change this file unless you are one of Leo’s developers.
- The file ~/myLeoSettings.leo contains your personal settings. Leo will not create this file automatically: you should create it yourself. Settings in myLeoSettings.leo override (or add to) the default settings in leoSettings.leo.
- Any other .leo file may also contain local settings. Local settings apply only to that file and override all other settings. It’s best to use local settings sparingly.

As a result, settings may vary from one Leo file to another. This can be confusing. These two commands can help:

print-settings shows each setting and where it came from.
print-bindings shows each key binding and where it came from.

Important: within any file, settings take effect only if they are contained in an @settings tree, that is, are descendants of a node whose headline is @settings. Nodes outside @settings trees do not affect settings in any way. Within @settings trees, you specify boolean settings with @bool nodes, string settings with @string nodes, menus with @menus and @menu nodes, etc. For exact details, please do study leoSettings.leo. You can open either leoSettings.leo or myLeoSettings.leo from the Help menu.

Within leoSettings.leo:

- The node About this file explains about settings.
- The node Candidates for settings in myLeoSettings.leo highlights the settings you are most likely to want to customize.

Plugins

Leo plugins are Python programs that extend what Leo can do. Plugins reside in the leo/plugins folder. Enable plugins by adding their file names to @enabled-plugins nodes in an @settings tree. The @enabled-plugins node in leoSettings.leo enables the recommended plugins.
Programmers have contributed dozens of plugins, including:

- bookmarks.py manages and shows bookmarks.
- contextmenu.py shows a context menu when you right-click a headline.
- mod_scripting.py supports @button and @command nodes.
- quicksearch.py adds a Nav tab for searching.
- todo.py provides to-do list and simple project-management capabilities.
- valuespace.py adds outline-oriented spreadsheet capabilities.
- viewrendered.py creates the rendering pane and renders content in it.

Creating external files with @file and @all

Leo stores outline data in .leo files on your file system. Rather than storing all your data in the .leo file, you may store parts of your outline data in external files, files on your file system. @file nodes create external files. @file nodes have headlines starting with @file followed by a file name:

@file leoNodes.py
@file ../../notes.text

Leo reads external files automatically when you open a Leo outline, and writes all dirty (changed) external files when you save any Leo outline.

The @all directive tells Leo to write the @file tree (the @file node and all its descendants) to the external file in outline order, the order in which the nodes appear in the outline pane when all nodes are expanded. Non-programmers will typically use the @all directive; programmers typically use the @others directive, as discussed in the programming tutorial. The @all directive may appear anywhere in the body text of the root @file node. The @all directive is designed for “catch-all” files, like todo.txt or notes.txt or whatever. Such files are assumed to contain a random collection of nodes, so there is no language in effect and no real comment delimiters.

When writing @file nodes, Leo adds sentinel comments to external files. Sentinels embed Leo’s outline structure into external files. If you don’t want sentinels in your sources, skip ahead to Using @clean nodes, part of Leo’s programming tutorial.

Summary

Every command has a name.
- You may execute any command by name from the minibuffer.
- Many commands are bound to keystrokes. You may bind multiple keystrokes to a single command and change bindings to your taste. Note: all Ctrl-<number> keys and most Ctrl-Shift-<number> keys are unbound by default. You may bind them to whatever commands you like.
- Leo has commands to create, change and reorganize outlines.
- Within the body pane, Leo uses standard key bindings to move the cursor.
- Ctrl-F starts the find command. Use the minibuffer to complete the command.
- Leo’s configuration files specify all settings, including key bindings.
- Leo directives control how Leo works.
- @all creates an external file from all the nodes of an outline.
- Enable plugins using @enabled-plugins nodes in an @settings tree.
http://pythonic.zoomquiet.top/data/20160416120854/index.html
I find that some people spend way too much time doing "meta" programming. I prefer to use someone's framework rather than (a) write my own or (b) extend theirs. I prefer to learn their features (and quirks). Having disclaimed an interest in meta programming, I do have to participate in capacity planning. Capacity planning, generally, means canvassing applications to track down disk storage requirements.

Back In The Day

Back in the day, when we wrote SQL by hand, we were expected to carefully plan all our table and index use down to the kilobyte. I used to have really sophisticated spreadsheets for estimating -- to the byte -- Oracle storage requirements. Since then, the price of storage has fallen so far that I no longer have to spend a lot of time carefully modelling the byte-by-byte storage allocation. The price has fallen so fast that some people still spend way more time on this than it deserves.

Django ORM

The Django ORM obscures the physical database design. This is a good thing. For capacity planning purposes, however, it would be good to know row sizes so that we can multiply by the expected number of rows and cough out a planned size. Here's some meta-data programming to extract Table and Column information for the purposes of size estimation.
import sys
from django.conf import settings
from django.db.models.base import ModelBase

class Table( object ):
    def __init__( self, name, comment="" ):
        self.name= name
        self.comment= comment
        self.columns= {}
    def add( self, column ):
        self.columns[column.name]= column
    def row_size( self ):
        return sum( self.columns[c].size for c in self.columns ) + 1*len(self.columns)

class Column( object ):
    def __init__( self, name, type, size ):
        self.name= name
        self.type= type
        self.size= size

sizes = {
    'integer': 4,
    'bool': 1,
    'datetime': 32,
    'text': 255,
    'smallint unsigned': 2,
    'date': 24,
    'real': 8,
    'integer unsigned': 4,
    'decimal': 40,
}

def get_size( db_type, max_length ):
    if max_length is not None:
        return max_length
    return sizes[db_type]

def get_schema():
    tables = {}
    for app in settings.INSTALLED_APPS:
        print app
        try:
            __import__( app + ".models" )
            mod= sys.modules[app + ".models"]
            if mod.__doc__ is not None:
                print mod.__doc__.splitlines()[:1]
            for name in mod.__dict__:
                obj = mod.__dict__[name]
                if isinstance( obj, ModelBase ):
                    t = Table( obj._meta.db_table, obj.__doc__ )
                    for fld in obj._meta.fields:
                        c = Column( fld.attname, fld.db_type(), get_size(fld.db_type(), fld.max_length) )
                        t.add( c )
                    tables[t.name]= t
        except AttributeError, e:
            print e
    return tables

if __name__ == "__main__":
    tables = get_schema()
    for t in tables:
        print t, tables[t].row_size()

This shows how we can get table and column information without too much pain. This will report an estimated row size for each DB table that's reasonably close. You'll have to add storage for indexes, also. Further, many databases leave free space within each physical block, making the actual database much larger than the raw data. Finally, you'll need extra storage for non-database files, logs and backups.
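To turn those per-row sizes into a planned total, the remaining step is just multiplication. A minimal Python 3 sketch (the table names, row counts and the 1.5x overhead factor below are made-up illustrations, not output of the script above; real index and free-space overhead varies by database):

```python
# Hypothetical per-table row sizes (bytes) and expected row counts.
row_sizes = {"app_customer": 120, "app_order": 260}
expected_rows = {"app_customer": 50_000, "app_order": 400_000}

def estimated_bytes(row_sizes, expected_rows, overhead=1.5):
    """Planned size per table: row size x expected rows x overhead factor.

    The overhead factor is a rough stand-in for indexes and block free space.
    """
    return {name: int(row_sizes[name] * expected_rows[name] * overhead)
            for name in row_sizes}

for name, size in estimated_bytes(row_sizes, expected_rows).items():
    print(name, size)
```

The point of the overhead multiplier is that it keeps the estimate honest without byte-by-byte spreadsheet work, which matches the spirit of the post.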
http://slott-softwarearchitect.blogspot.com/2009/10/django-capacity-planning-reading-meta.html
LeakCanary: Detect all memory leaks!

A memory leak detection library for Android and Java. Written by Pierre-Yves Ricau.

java.lang.OutOfMemoryError
    at android.graphics.Bitmap.nativeCreate(Bitmap.java:-2)
    at android.graphics.Bitmap.createBitmap(Bitmap.java:689)
    at com.squareup.ui.SignView.createSignatureBitmap(SignView.java:121)

Nobody likes OutOfMemoryError crashes

In Square Register, we draw the customer’s signature on a bitmap cache. This bitmap is the size of the device’s screen, and we had a significant number of out of memory (OOM) crashes when creating it. We tried a few approaches, none of which solved the issue:

- Use Bitmap.Config.ALPHA_8 (a signature doesn’t need color).
- Catch OutOfMemoryError, trigger the GC and retry a few times (inspired from GCUtils).
- We didn’t think of allocating bitmaps off the Java heap. Lucky for us, Fresco didn’t exist yet.

We were looking at it the wrong way

The bitmap size was not the problem. When the memory is almost full, an OOM can happen anywhere. It tends to happen more often in places where you create big objects, like bitmaps. The OOM is a symptom of a deeper problem: memory leaks.

What is a memory leak?

Some objects have a limited lifetime. When their job is done, they are expected to be garbage collected. If a chain of references holds an object in memory after the end of its expected lifetime, this creates a memory leak. When these leaks accumulate, the app runs out of memory.

For instance, after Activity.onDestroy() is called, the activity, its view hierarchy and their associated bitmaps should all be garbage collectable. If a thread running in the background holds a reference to the activity, then the corresponding memory cannot be reclaimed. This eventually leads to an OutOfMemoryError crash.

Hunting memory leaks

Hunting memory leaks is a manual process, well described in Raizlabs’ Wrangling Dalvik series.
Here are the key steps:

- Learn about OutOfMemoryError crashes via Bugsnag, Crashlytics, or the Developer Console.
- Attempt to reproduce the problem. You might need to buy, borrow, or steal the specific device that suffered the crash. (Not all devices will exhibit all leaks!) You also need to figure out what navigation sequence triggers the leak, possibly by brute force.
- Dump the heap when the OOM occurs (here’s how).
- Poke around the heap dump with MAT or YourKit and find an object that should have been garbage collected.
- Compute the shortest strong reference path from that object to the GC roots.
- Figure out which reference in the path should not exist, and fix the memory leak.

What if a library could do all this before you even get to an OOM, and let you focus on fixing the memory leak?

Introducing LeakCanary

LeakCanary is an Open Source Java library to detect memory leaks in your debug builds. Let’s look at a cat example:

class Cat {
}

class Box {
  Cat hiddenCat;
}

class Docker {
  static Box container;
}

// ...
Box box = new Box();
Cat schrodingerCat = new Cat();
box.hiddenCat = schrodingerCat;
Docker.container = box;

You create a RefWatcher instance and give it an object to watch:

// We expect schrodingerCat to be gone soon (or not), let's watch it.
refWatcher.watch(schrodingerCat);

When the leak is detected, you automatically get a nice leak trace:

* GC ROOT static Docker.container
* references Box.hiddenCat
* leaks Cat instance

We know you’re busy writing features, so we made it very easy to set up. With just one line of code, LeakCanary will automatically detect activity leaks:

public class ExampleApplication extends Application {
  @Override public void onCreate() {
    super.onCreate();
    LeakCanary.install(this);
  }
}

You get a notification and a nice display out of the box:

Conclusion

After enabling LeakCanary, we discovered and fixed many memory leaks in our app. We even found a few leaks in the Android SDK. The results are amazing.
We now have 94% fewer crashes from OOM errors. If you want to eliminate OOM crashes, install LeakCanary now!
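For reference, wiring LeakCanary into a Gradle build looked roughly like this at the time; the artifact coordinates and version below are illustrative from the era of this post, so check the project README for the current ones:

```groovy
dependencies {
  // Debug builds get the real library...
  debugCompile 'com.squareup.leakcanary:leakcanary-android:1.3'
  // ...release builds get a no-op stub so nothing ships to production.
  releaseCompile 'com.squareup.leakcanary:leakcanary-android-no-op:1.3'
}
```

The no-op artifact is the design choice worth noting: LeakCanary.install() compiles in all build types, but only does work in debug builds.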
https://medium.com/square-corner-blog/leakcanary-detect-all-memory-leaks-875ff8360745?source=collection_home---7------1-----------
Group,

Again, self teaching has its limitations. Below is code that ultimately will answer question 3: design a program that finds all numbers from 1 to 1000 whose prime factors, when added together, sum up to a prime number. I am starting with a limit of 1 to 100 to keep it manageable. I have labeled each unique output with a new name so ultimately I can harvest them. I think a few more iterations will get me all that I need. The problem is, in the code below, the output starts at twenty. I do not understand why it does not output starting at least at 4 / 2 = 2. Can anyone help me?

Thanks in advance,
Dan

Code:
#include <iostream>
#include <string>

using namespace std;

int i;
int j;
int k;
int l;
int m;

int main ()
{
    // determine what the numbers prime factors are
    for ( int i = 2; i < 100; i++)
    {
        for ( int j = 2; j < i; j++ )
        {
            if ( i % j == 0 )
            {
                int k = i / j;
                cout << i << " i" << " div " << j << " j" << " = " << k << " k" << "\n";
                if ( k % j == 0)
                {
                    int l = j;
                    int m = k / l;
                    cout << "\t" << " and " << k << " k" << " div " << l << " lj" << " = " << m << " m" << "\n";
                }
            }
        }
    }
}
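For reference, the end goal the post describes (numbers from 1 to 1000 whose prime factors, with multiplicity, sum to a prime) can be sketched in a few lines. This is written in Python rather than C++ purely to make the intended algorithm concrete; it is not the poster's code:

```python
def prime_factors(n):
    """Prime factors of n with multiplicity, e.g. 12 -> [2, 2, 3]."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever is left is itself prime
        factors.append(n)
    return factors

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Numbers whose prime-factor sum is itself prime (1 is skipped: no factors).
results = [n for n in range(2, 1001) if is_prime(sum(prime_factors(n)))]
print(len(results), results[:10])
```

Note that this sums factors with multiplicity (8 contributes 2+2+2=6); if the exercise intends distinct factors, replace the sum with `sum(set(prime_factors(n)))`.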
http://cboard.cprogramming.com/cplusplus-programming/156935-another-question-about-syntax.html
Azure Logic Apps is a cloud-based service that provides a serverless architecture for your business operations. A developer can develop and apply cloud-based integrations and workflows in no time, as the components come prepared and available. It is a fully managed cloud service, so no hardware or software infrastructure is required. Logic Apps can connect with any big data applications or 3rd party solutions, and developers can even custom-build their own applications.

Technique of Logic Apps

The logic is simple. There are predefined "connectors" and "actions" which can be used like inter-related blocks to create any number of cloud-based workflows. In this article, you are going to learn about one such service, called Azure Service Bus.

First, you need to understand the concept of decoupled systems: two or more systems that have little or no knowledge of each other are interfaced without being directly connected. Okay! Now let's get back to our Azure Service Bus.

Azure Service Bus provides the ability to share data between these decoupled systems. Systems and applications need to be built so that they can share messages without disturbing the established network and firewall protocols; Azure Service Bus makes all of the above communications possible between decoupled systems. It is a multi-user, shared cloud service. An end application creates a namespace and defines the communication mechanism for each namespace.

There are 3 different types of communication mechanisms: Queue, Topic and Relay.

Queue: a sender sends a message to the queue, and a receiver receives that message some time later. This is the mechanism at work in the datacenter behind airline check-in; it is why you are able to check in through your mobile before reaching the airport.

Benefits of Azure Service Bus

It allows messages of 256 KB to 1 MB in size. A queue can store many messages at once and has a storage capacity of up to 5 GB. It connects or decouples any number of on-premises systems. It protects from temporary spikes in traffic.
It depends on TCP, which may require opening outbound ports on the firewall.
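The decoupling that the queue mechanism provides can be illustrated with an in-memory stand-in. This uses Python's standard queue module, not the Azure SDK, and the payload string is invented; it only shows the send-now, receive-later pattern described above:

```python
# In-memory sketch of the queue mechanism: the sender and receiver
# share nothing but the queue itself.
from queue import Queue

message_queue = Queue()  # stands in for an Azure Service Bus queue

def sender(q, payload):
    # The sender returns immediately; the receiver may not even be running yet.
    q.put(payload)

def receiver(q):
    # The receiver picks the message up whenever it is ready.
    return q.get()

sender(message_queue, "check-in: passenger 42")
print(receiver(message_queue))  # -> check-in: passenger 42
```

With a real Service Bus queue the two ends would be separate processes, possibly in separate datacenters, which is exactly why the check-in example works.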
https://mindmajix.com/azure-logic-apps
What’s the best feature you can add to your applications today to appeal to customer upgrades? It’s Windows Vista support. Delphi 2007 is the first and only (until C++Builder 2007 ships in June) Vista ready development environment for building and seamlessly upgrading rich Windows GUI applications for Windows Vista. Delphi 2007’s Vista ready VCL makes it as easy as a recompile, in most cases, to upgrade older VCL framework based applications to instantly support Windows Vista and the Aero UI and desktop effects.

Microsoft has announced that it has shipped 40m Windows Vista licenses in 100 days, and that number is going to keep climbing given that nearly every new PC is shipping with Vista today. With Delphi 2007, and soon C++Builder 2007, you can provide high impact upgrades for your end user applications and customers with minimal effort. And because Delphi is the only shipping Windows dev platform with Vista support today, you have a competitive advantage over your competition who might not be using Delphi.

And what do customers think of Delphi 2007?

"I love the new Delphi 2007. It’s sooooo fast and reliable. It’s amazing how fast it now loads the IDE, even with tons of third party components. This is an impressive upgrade. After I use it for a few days I just can’t go back to Delphi 7. I have to say, you guys rock. Thanks for a brilliant tool." Rui Menino, CTO, EISA

"Delphi 2007 for Win32 is a must-have IDE for any Windows developer!" Markus Spoettl, toolsfactory software inc.

"CodeGear’s stellar commitment to the developer community forms a sharp contrast with other technologies that are literally "invented for obsolescence". Delphi 2007 is the only Win32 IDE with support for building native Win32 Vista applications with advanced features like Vista Glass, and in the good Delphi tradition, without any changes to existing source code. Even .NET applications require a major UI overhaul just to be glass compliant."
Sinan Karaca - InstallAware

"Delphi 2007 is the best native WIN32 development tool on the market. Now it is faster, more powerful and reliable as never before. If you still use Delphi 7, definitely it is time to upgrade." Tomasz Kosinski

"Delphi 2007 is exceeding all my expectations! As a Micro ISV, the tremendous increase in speed and stability means I can run Delphi 2007 all day and be a lot more productive. This is the best release since Delphi 7! Thank you CodeGear for listening!" Eric Fortier, Tech Logic, Inc.

"Yes! Developers matter for CodeGear! They did a tremendous job to make Delphi 2007 for Win32 the best and fastest IDE ever realized since D7! The Delphi Spirit is back" Stéphane Wierzbicki - Responsable Informatique

"A development powerhouse for RAD Win32 and a pinnacle of performance and quality!" Jarrod Davis, SoftBeat - Download.Install.Play.

D2007 - Speed, stability, the doc-o-matic help system, WSDL import, IW9 and elegant features in the IDE mattered for new entrants like us after the ghastly introductory experience of Delphi 2005. It is fun to work with even for novices like us. After examining the cost of VS 2005 Team edition, it is a comfortable feeling to be with D2007. What matters to customers is quality and the ability to anticipate and address their problems in time; the last things are the IDE, technology and other FUD fodder (Vista, 64 bit, …).

We have to be ready, as Windows developers, for whenever Vista really is ready. After spending two weeks working in Vista as my all-day-every-day operating system, I have to say it SUCKS.

1. You can disable those security-prompt dialog boxes, but not the delays associated with these layers in Windows. My Vista desktop freezes for 10-30 seconds, five to fifty times per hour during normal software development tasks.

2. Empty the trash can. Copy some files.
Get used to waiting forever for it to do something simple, because whatever it says it’s doing, it merely says it’s "estimating how long" it will take to do something that took a few seconds in Windows XP.

3. The search feature still sucks. Indexing sucks. It doesn’t work very well at all for me. I can’t find anything, even though it says it indexed my C and D drives. Unlike OS X (my Mac), Search on Windows still sucks. Google Desktop running on XP is good enough for now.

I won’t go back on Vista as a development platform, and will only use it inside a VM for testing, or on boxen that are only used as test boxen. My main PC isn’t booting into Vista until service pack 1 addresses (hopefully) the most egregious flaws in Vista.

Warren

@warren: "My Vista desktop freezes for 10-30 seconds, five to fifty times per hour during normal software development tasks."

Check the event log: iastorv errors? For me, flashing the firmware in my Sony DVD drive fixed this. But the drive worked fine in XP.

Any estimated date (other than "soon") for some fixes to Delphi 2007? It’s over 2 months now and even Vista support, which you’re touting right now, isn’t *that* well done. Problems are still present with the MainFormOnTaskbar setting, not that I care much about this one in particular as I don’t plan on doing Vista or using Vista anytime soon, but there are a few other nuisances like a still malfunctioning help that would be *very* welcome changes…

Since you guys are starting some "forward looking statements" by saying that a new Ruby IDE will be available in 2nd half 2007, maybe we can know an estimated date for the first SP1 or hotfix? As I’ve said before, and won’t be tired of repeating: it’s better to target a single faulty area at once and release a fix for that area than wait 6 months for a monster fix to fix them all…

Later, Madruga

Interestingly, here in the U.K. a number of PC builders are using ‘We still give you the choice of XP’ as a selling point!
I notice that Dell were forced to start offering XP again as well. I have Vista on my **fast** secondary PC for testing purposes, and the only way I can describe the sensation of using it is as claustrophobic. All the time it feels like you are wading through treacle, buried under tons of sluggish animations, constant security dialogs and agonisingly long pauses when you try and do anything. It completely breaks my train of thought when I try and do anything with it.

My perfect OS would be an updated version of Windows 2000 with XP levels of compatibility. I’m certainly in no rush to ‘upgrade’ to Vista for day to day usage, and I bet most businesses won’t be stampeding to upgrade so that they can… actually, what is the point of upgrading to Vista?

I’ve been developing a Vista app on Vista with Delphi 2007, and haven’t experienced any problems. Even the help is heads above D2006 help. Index actually works as expected. This is on a Sony laptop with 1 gig of RAM and 2 80 gig hard drives.

No matter how bad Vista is, we all will be there in several months (just after SP1-2 or alike). As for Delphi, the next important step in programmer’s productivity would be ECO for Win32. Customers want "functions on time" first, a comfortable UI second and skins (glass frames, etc.) third. If it’s a business application, of course.

Hi Michael,

Yes, it’s great that Delphi is the first native Vista development platform, and it’s certainly an improvement compared to previous versions of Delphi. We have been able to upgrade our 200K lines Delphi 3/7 project to Delphi 2007 so it now supports Vista, which is great. But we also have the following experiences:

- bds.exe easily allocates 400MB of RAM after a short period of use. We don’t use the "Model view".
- We do experience that the IDE crashes a couple of times per week.
- The online help is better than in D2006, but not compared to the help included in e.g. Delphi 3/7. E.g.
it’s not possible to look up "AssignFile"; you have to look up "System.AssignFile". I.e. you need to know the namespace of the function you want to find. And the enumerations are not described (look e.g. at "Forms.TFormStyle Enumeration", which just seems to be an auto-generated page).

Working in both VS2005 and D2007, I must say that VS2005 is by far the most robust environment, and CodeGear still has some work to do before D2007 reaches VS2005’s level of robustness. So my experience is that you should not sell Delphi 2007 on "robustness and stability" but rather on the Vista support and the community support (which I find is very good compared to .NET - e.g. a great move to include DUnit, FastMem and FastProject.org code in Delphi, instead of writing your own tools).

A friend of mine has recently deleted the preinstalled Vista from his new notebook and installed XP. So it is not 40 million reasons. At least one less.

A big mistake chaining themselves to the .NET platform; even VS2005, which is a big solid environment, works with a PC with the so-called BuySomeRAM.NET platform. I think that most of us (the 95%) hate Windows Vista by itself; it's a bad operating system. Let's try doing things well… like the non-obsolete, great Delphi 7.
http://blogs.embarcadero.com/michaelswindell/2007/05/16/34553
I need to find out all of the information about a file entered as a command line argument. I'm using stat() to find out the simpler information such as file size and date last accessed. However, I need to find other info, such as the username of the file owner, not just their uid, as well as their groups, not just their gid. I can find the uid and gid fine with stat(). No idea how to find this stuff. Searched online but there was surprisingly little stuff about stat. Also, with the dates accessed, it gives it in seconds since 1970. How would I convert that to a date? Here is what I have so far:

#include <string.h>
#include <stdio.h>
#include <time.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <locale.h>
#include <langinfo.h>
#include <stdint.h>

int main(int argc, char *argv[]){
    FILE *fp;
    struct stat fileAtt;

    if(stat(argv[1], &fileAtt) == 0){
        printf("File Name: %s\n", argv[1]);
        printf("UID : %d\nGID : %d\n", fileAtt.st_uid, fileAtt.st_gid);
        printf("File size is: %lu\n", fileAtt.st_size);
        printf("Last accessed: %u\n", fileAtt.st_atime);
    }
    return 0;
}

Thanks in advance,
JB
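The C calls being asked about here are getpwuid(3) (uid to a struct passwd with pw_name), getgrgid(3) (gid to a struct group with gr_name), and localtime(3)/strftime(3) or ctime(3) for turning the epoch seconds into a date. Python's pwd, grp and time modules wrap exactly those C functions, so a Python sketch of the same lookups (Unix only; the path and key names are just illustrative):

```python
# Same lookups as the C question: stat -> uid/gid -> names, atime -> date.
import os, pwd, grp, time

def describe(path):
    st = os.stat(path)
    return {
        "owner": pwd.getpwuid(st.st_uid).pw_name,    # like getpwuid()->pw_name
        "group": grp.getgrgid(st.st_gid).gr_name,    # like getgrgid()->gr_name
        "size": st.st_size,
        "accessed": time.strftime("%Y-%m-%d %H:%M:%S",
                                  time.localtime(st.st_atime)),
    }

print(describe("/"))
```

Note that getpwuid/getgrgid raise an error (KeyError in Python, NULL in C) if the uid or gid has no entry in the passwd or group database, so a robust program falls back to printing the raw number.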
http://www.dreamincode.net/forums/topic/159805-stat-and-information-about-file-in-c/
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 25 50 100 250 results of 74 paida-3.2.1_2.3.1 was released on 2004/10/16. [3.2.1_2.3.1] (AIDA3.2.1_PAIDA2.3.1) ### Bug fixes ### - XML storage file - The attempt to read weightedRmsX, weightedRmsY and weightedRmsZ were still made. - Creating a ITuple object with at least one negative default value raised an exception. - The string type column was expressed as not 'string' but 'String'. - Reading a column name with no default value like 'momentum=' raised an exception. - Unnecessary tree window was opened when the XML file has ITuple data. - The tree window did not disappear when closed. Hi Alex, I'm glad to receive your e-mail. It always makes PAIDA much better! > . > (4) storage of strings in ntuples: > PAIDA writes strings with too many quotation marks in the xml file, > i.e. > instead of "my text" it writes "'my text'" I've comfirmed the miscodings then they will be fixed in the next bug fix release. > (3) storage of ntuples: names > JAIDA stores names with a la "ev=" "momentum=", i.e. with the equal > sign > even if there is no default value. PAIDA doesnt write "=" if there is > no > default value. > Again, I don't know what the behaviour should be. This is intended behaviour but it's not good not to be able to read from JAIDA's XML file. - JAIDA (or JAS3) can read both "momentum=" and "momentum" expressions. - PAIDA can read only "momentum" expression. This will be fixed in the next bug fix release. > (5) plotting: > well this one is a matter of taste but the box which is drawn around > each > region is either useless (in one-region plots) or too much and > confusing > (in multi-region plots).. at least for my taste. but thats not really > an > issue.. OK, you have a point there. 
I think it's better that the user can select the line and filling color of the box and the default is: - line : transparent - filling : white If there is another idea, please let me know. The changes will be included into the next minor version up release. (in the not-so-distant future) > (6) tree windows: > when closing a tree, its window should disappear?! (but doesnt?!) > moreover, sometimes it seems to get confused and opens an additional > window > for each ntuple inside a tree. maybe it would be better to put all > trees > in a single window anyway..? They are my mistakes. - The window should disappear when closed. - Unnecessary tree window was opened when the XML file has ITuple data. They will be fixed in the next bug fix release. Cheers Koji Hi Koji, thanks for the new release. Now it works better and better (especially the DOCTYPE fix helps a lot) and the problems become smaller and smaller, but I still found a few: . paida-3.2.1_2.3 was released on 2004/09/01. [3.2.1_2.3] (AIDA3.2.1_PAIDA2.3) ### New features ### - Now works correctly on Windows platform. - Simple texts box for info.addText() is supported. - More faster on plotter.writeToFile() on MacOSX platform. - Experimental support for time scaled axis. - New faster fitting engine "simplePAIDA" which calculates only parabolic error. (not both parabolic and asymmetric errors.) ### Bug fixes ### - Old legends were always replaced by new legend. - Overlay plotting didn't work properly. (Histogram2D, Cloud2D, Profile2D, DataPointSet2D, Cloud3D and DataPointSet3D) - Plotting histogram1D or cloud1D with histogram format option was incorrect when the minimum X axis value is larger than 0.0. - "weightedRms" of each bin of histograms is omitted in the exported XML file. - Plotting objects which has zero entries raised an exception. - Improved support for GUI threading. - Reading <!DOCTYPE> line in a XML file raised an exception if you have no internet connection. 
Hi Alex, I've fixed all of the issues you pointed out. I'll release paida3.2.1_2.3 soon. Thank you for your helping! Regards, Koji Hi Peter, I have finally fixed the bug on Windows platform! I tested with Enthought Python 2.3.3. I'll release as paida-3.2.1-2.3 a couple of weeks later. If you need it immediately, please let me know. I'll send it in a separate email. Thanks Koji Hi Alex, I'm sorry for my late reply. I've been absent for a week. (i) Text in infobox I reached a determination. Now I think it will be better for PAIDA to be able to display three boxes: statistics, legends and texts. The texts box will display user-defined texts like fitted parameter values. (ii) DOCTYPE The Python documentation you suggested seems to be very helpful to this issue. I'll try to overwrite the handler. (iii) Fitting speed OK, in the next release, PAIDA will accept a option argument to limit the asymmetric error calculation. The fitting time will be reduced by about 50% at least. (iv) Plots with zero entries That's right. I had also found this bug and modified local copies to, as you mentioned, check the lower and upper values. Cheers, Koji Hi Koji, Text in infobox: The plotting implemtation in ANAPHE is only rudimentary, so I never really tried that. But in the code I see that the text is just appended to the legend items. It seems that there only seems to be one 'Infobox' which can contain text (user-defined or statistics/fitting params (i.e. annotations) of a histogram) or 'legends' (i.e. text with some graphics thing). This is also the impression I get from the AIDA definitions (which, I agree, are not very clear..) DOCTYPE: I looked a little bit around to find ways to suppress this problem: it seems that in the parser class () there is a function StartDoctypeDeclHandler which may need to be overwritten in order to stop it from looking up the dtd file on the web? I don't know but maybe this hint can help you.. 
Fitting speed: I think it would useful to have the possibility to do simple error calculation only (if it speeds things up). Especially when you are in an interactive session you don't want to wait more than a few seconds for a (simple) fit result; at the same time you don't need the most sophisticated errors. 'New' plotting bug: Plotting (1D) histograms with zero entries results in some math exception because it tries to have a Y axis between 0 and 0. I don't know what the optimal behaviour should be, but checking upperY and lowerY and setting them to 1 and 0, respectively can be a solution (other plotting tools write a big 'no entries' in the middle of the plot which I personally don't like too much..) I didn't check the behaviour of 2D and 3D. Cheers, Alex Hi Koji, Some little details about the legend box etc: When plotting several histograms into the same region (lets say 1D and with line color black,red,blue) and then plotting the legend box, all linestyles will be the last linestyle which was plotted, i.e. blue. This is due to the addLegend function in IInfo. Actually, python will only store a reference to the style and not the style itself. So it will always point to the same (last) linestyle. I do not know any elegant way to cure that. Anyway, creating a new empty linestyle and then copying the color and addint this new linestyle to the list solves the problem (but it's ugly and error prone). I guess you know how to do that properly. The same will probably apply to the other types of styles (marker/fill) which can be added to the legend. In IInfo, plotting of the 'text' is not implemented (right?!). That's a pity because it is only a few lines: one loop to find the maximum length and another one to actually print it into the legend box. At least that worked for me. I used the addText to print fit parameters. Or is there a way to put those into the statisticsbox? (btw: fitting is really slow.. 
is this only because it is scripted, or is there still some optimization possible? the fit quality is very nice though..) Then I get a few divisionbyzero exceptions when reading in some of my 2D histos. But here I will check first whether that is not the fault of my histos. Btw: none of my histos produced with ANAPHE has the 'weightedRms' parameter set. I don't know whether this is according to AIDA specs. Anyway, setting it to 0 instead of throwing an exception works for me (but of course this could be related to the problem I mention above..) I'll check that., Thank you for your continuing to use. (1) The issue of <!DOCTYPE> bothered me too. This behavior comes from Python's xml module. Because I didn't know how to avoid this, currently PAIDA doesn't generate the line <!DOCTYPE>. But, as you pointed out, if the importing xml file has the line, the xml module processes it and may raise an exception. Now, I'm going to try fixing it again! (2) I'm sorry but this seems to be a simple bug. I promise to fix in the next release. Koji Hi Sorry for my late response. I tried the fixes in 3.2.1_2.2.1 and now it reads in and plots my 2D datapointsets properly. Thanks a lot for the quick fix. However I found two new issues: (1) if I open an xml file with the dtd definition in the header, i.e. something like <!DOCTYPE aida SYSTEM ""; > it crashes if you don't have a web connection. If you are online, everything is fine. If you remove the line, everything is fine, too. I don't know whether this behaviour is a feature or whether it's the fault of some xml module. But it would be reasonable to ignore this line if no connection can be made. See for the output further down. (2) plotting an overlay of two different DataPointSets in the same region doesn't work (options: scatter and with error bars). Plotting them separately in different regions works fine. The error happens when calling region.plot() for the second time. 
Here again, I have no idea what is going on so I also append the output. Thanks for help Alex output to (1) Traceback (most recent call last): File "sampleDPS1.py", line 7, in ? tree = treeFactory.create("/home/alex/y2.xml","xml",True,False) File "/usr/lib/python2.3/site-packages/paida/paida_core/ITreeFactory.py", line 92, in create return ITree(storeName, storeType, readOnly) File "/usr/lib/python2.3/site-packages/paida/paida_gui/tkinter/ITree.py", line 1901, in __init__ parser.parse(fileObj)15, in prepare_input_source f = urllib2.urlopen(source.getSystemId())01, in http_open return self.do_open(httplib.HTTP, req) File "/usr/lib/python2.3/urllib2.py", line 886, in do_open raise URLError(err) urllib2.URLError: <urlopen error (-3, 'Temporary failure in name resolution')> output to (2) Traceback (most recent call last): File "sampleDPS1.py", line 42, in ? region.plot(dps2) File "/usr/lib/python2.3/site-packages/paida/paida_gui/tkinter/IPlotterRegion.py", line 862, in plot self._plot(data1, plotterStyle, options) File "/usr/lib/python2.3/site-packages/paida/paida_gui/tkinter/IPlotterRegion.py", line 896, in _plot self._plotDataPointSet2D(data, plotterStyle, options) File "/usr/lib/python2.3/site-packages/paida/paida_gui/tkinter/IPlotterRegion.py", line 2029, in _plotDataPointSet2D canvasX = convertX(lowerX, upperX, valueX) UnboundLocalError: local variable 'convertX' referenced before assignment Killed Thank you for your reply. OK, now there must be a bug in PAIDA (on Windows). Please give me some time to fix it. Concerning sampleBasic.py, I'm sorry but it's just my fault. I'll fix it too. Koji On 2004/09/09, at 18:46, Peter Olsen wrote: >@... I'm using the newest version of paida and the examples off of the web page, ndex.html. This is once sub.sub.version behind, but it's the documentation page pointed to by the latest version. (Perhaps that's the problem right there. I didn't think of that.) 
I'll be away from my office computer today, but you can still reach me from-time-to-time at pcolsen@... The actual python I was using was Enthought's modification of Python 2.3.4 for windows. Enthought has packaged Numeric, SciPy, and several other things in the basic distribution so they are available straight out-of-the-box. I've been trying to install and run paida, but I can't get any of the examples to run. Every time I try, python freezes solid. I have to kill the process to escape. I've tried several programs from the documentation, but they all fail. (I had some trouble downloading the first, I believe I've traced the problem (or at least the first problem) to a specific line. I've included below the simplest script I've found. I've shown where I think things are falling apart. I'm down to a short deadline for a data analysis problem, so I'd really appreciate some help. Otherwise I'll be using gnuplot. That's nice, but I think paida will be nicer. ------------------------------------------------------------ ---------------------------------- from paida.paida_core import IAnalysisFactory # Analysis factory. af = IAnalysisFactory.create() # Tree factory. tf = af.createTreeFactory() # Tree with zipped XML storing. tree = tf.create("test.aida", 'xml', False, True, "compress=yes") <<<<<<<<<< I think things hang here. # Histogram factory. hf = af.createHistogramFactory(tree) # 1d histogram. h1d = hf.createHistogram1D('test name', 'test title', 10, 0.0, 10.0) ## You can use Int in lower edge value etc. ## I think this is more Python-like. ## h1d = hf.createHistogram1D('test name', 'test title', 10, 0, 10) # Filling. h1d.fill(2.0) h1d.fill(4.0) ## Off course you can write ## h1d.fill(2) ## h1d.fill(4) # Check. print 'Mean is 3.0? :', h1d.mean() # Save all data. tree.commit() tree.close() Peter Olsen, PE, Ae.E. National Security Analysis Group The MITRE Corporation 877-631-6178 (pager) paida-3.2.1_2.2.1 was released on 2004/09/01. 
[3.2.1_2.2.1] (AIDA3.2.1_PAIDA2.2.1) This is the bug fix release.
### Bug fixes ###
- A tree did not export dataPointSet objects correctly to an XML file.
- A tree did not import dataPointSet objects correctly from an XML file.
Hi Alex, Thank you for your detailed report; it was very helpful for the debugging. As you say, they were simple bugs and I've fixed them.
- checking </dataPointSet> correctly
- adding a closing tag </dataPoint> for every dataPoint
- filling imported dataPointSet data to all dimensions correctly
I will release a bug fix version 3.2.1_2.2.1 soon. Please try it out. Cheers, Koji Hi all, I am running 3.2.1_2.2 on a SUSE 9.1 box and have a number of problems with DataPointSets reflected in the 'history' of things I did:
- I tried to open an xml file created with ANAPHE (from CERN) with a 2D DataPointSet and it fails with an unknown tag exception in paida_gui/tkinter/ITree.py (line 1360 or so..)
- First I thought this is due to incompatible xml files so I changed the DataPointSet example to store the tree in a file and tried to read it in. Again an exception about unknown tag "dataPointSet".
- I tried to find out why and it seems that once inside a DataPointSet tag it does not check for a </dataPointSet> end tag but instead for a "profile2d" end tag.
- After this fix it does not fail to read in the datapointset from the example but it only reads in the first of the eight datapoints (confirmed by the 'size'). Looking into the xml file it seems that the end tag for all datapoints is missing. A cross-check with my file from ANAPHE confirms that there should be one.
- Reading in the datapointset from the ANAPHE file works. Even size and dimension give the correct values. But when trying to plot it, it throws another exception and I gave up (not even remembering where exactly it fails...)
Any help/ideas/comments?
I assume the problems are due to quite simple bugs which could only survive because DataPointSets may not be as heavily tested as histograms and ntuples?? Thanks Alex paida3.2.1_2.2 was released on 2004/08/19. [3.2.1_2.2] (AIDA3.2.1_PAIDA2.2)
### New features ###
- Now PAIDA can plot all types of AIDA objects including 3D histogram, cloud and dataPointSet.
- More precise and consistent position control across platforms.
- Simple tree window.
- Transparent color is supported. Setting color to "" will select this behavior.
### Bug fixes ###
- The output postscript file is more precise now.
- PAIDA now attempts to set the default font to "Courier".
- Profile2D was not created correctly.
- .binEntriesX() etc. in histogram2D, histogram3D and profile2D raised an exception.
- .fill() in histogram3D raised an exception.
- .createCopy() in histogramFactory raised an exception.
- .symlink() in tree raised an exception.
Enjoy! paida3.2.1_2.1.1 was released on 2004/06/15. This is the bug fix release.
- The tick line length was zero in a small plotter region.
Enjoy!
15 January 2013 23:03 [Source: ICIS news] HOUSTON (ICIS)--Spot prices for ethane and propane declined the most of the NGLs in 2012, but normal butane, isobutane and natural gasoline fell as well. Ethane and propane’s precipitous drop was due to elevated production levels and the warm winter of 2011, the EIA reported. “Both production and stocks of ethane and propane were high, depressing prices and causing an increasing amount of ethane to be rejected,” the EIA said. The EIA said ethane prices in 2012 were down 48% on average compared with prices in 2011. The percentage is in line with the average ICIS-assessed spot prices for ethane in 2011 and 2012. The average spot price for ethane in 2012 was 39.93 cents/gal. In 2011, it was 77.19 cents/gal, according to ICIS. The EIA said propane spot prices in 2012 were 32% below 2011 prices, nearly the same percentage as the ICIS-assessed average spot prices for propane in 2011 and 2012. The average spot price for propane in 2012 was 100.31 cents/gal, which is about 31% lower than in 2011, when it was 146.41 cents/gal. Annual average spot prices for normal butane, isobutane and natural gasoline were down relative to 2011 by 5%, 12% and 7%, respectively, the EIA said. According to ICIS Pricing, spot prices for normal butane and isobutane were down 10% and 11%, respectively. ICIS did not assess the price for natural gasoline until 2012. Normal butane, isobutane and natural gasoline have maintained higher price levels than ethane or propane due to their use as gasoline blendstock and their competition with crude oil and crude derivatives, which leads to those NGLs being priced in relation to crude oil.
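The year-on-year percentages quoted above follow directly from the ICIS average prices; as a quick arithmetic check (a sketch using only the figures quoted in the article):

```cpp
#include <cassert>
#include <cmath>

// Percent decline from an earlier annual average price to a later one.
double percent_decline(double earlier, double later) {
    return (earlier - later) / earlier * 100.0;
}

// Figures from the article (cents/gal):
//   Ethane:  77.19 average in 2011, 39.93 in 2012 -> roughly 48% lower.
//   Propane: 146.41 average in 2011, 100.31 in 2012 -> roughly 31% lower.
```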
Now that we have an understanding of the very basics of C, it is time to turn our focus to making our programs not only run correctly, but also run more efficiently and be more understandable.

char *strdup(char *s)
int add_two_ints(int x, int y)
void useless(void)

The first function header takes in a pointer to a string and returns a char pointer. The second header takes in two integers and returns an int. The last header doesn't return anything nor take in parameters.

int add_two_ints(int x, int y)
{
    return x + y;
}

A function hands its result back to the caller with a return statement. The return value must be the same type as the return type specified in the function's interface. Take, for example, getopt. Since this function is not part of ANSI C, you must declare the function prototype, or you will get implicit declaration warnings when compiling with our flags. So you can simply prototype getopt(3) from the man pages:

/* This section of our program is for Function Prototypes */
int getopt(int argc, char * const argv[], const char *optstring);
extern char *optarg;
extern int optind, opterr, optopt;

So if we declared this function prototype in our program, we would be telling the compiler explicitly what getopt returns and its parameter list. What are those extern variables? Recall that extern creates a reference to variables across files, or in other words, it creates file global scope for those variables in that particular C source file. That way we can access these variables that getopt modifies directly. More on getopt in the next section about Input/Output.

int applyeqn(int F(int), int max, int min)
{
    int itmp;

    itmp = F(min);      /* call the passed-in function */
    itmp = itmp - max;
    return itmp;
}

What does this function do if we call it with applyeqn(square, y, z);? The parameter int F(int) is a pointer to the function that is passed in as an argument. Thus inside applyeqn, where there is a call to F, it is actually a call to square! This is very useful if we have one set function, but wish to vary the input according to a particular function.
So if we had a different function called cube, we could change how we call applyeqn by calling applyeqn(cube, y, z);.

void swap(int x, int y)
{
    int tmp = 0;

    tmp = x;
    x = y;
    y = tmp;
}

If you were to simply pass in parameters to this swapping function that swaps two integers, this would fail horribly. Because C passes arguments by value, the function swaps only its own local copies of x and y: you'll just get the same values back.

All preprocessor directives begin with #. This listing is from Weiss pg. 104. The unconditional directives are:

#include - Inserts a particular header from another file
#define - Defines a preprocessor macro
#undef - Undefines a preprocessor macro

#define MAX_ARRAY_LENGTH 20

Tells the CPP to replace instances of MAX_ARRAY_LENGTH with 20. Use #define for constants to increase readability. Notice the absence of the ;.

#include <stdio.h>
#include "mystring.h"

Tells the CPP to get stdio.h from the system libraries and add the text to this file. The next line tells the CPP to get mystring.h from the local directory and add the text to the file. This is a difference you must take note of.

#undef MEANING_OF_LIFE
#define MEANING_OF_LIFE 42

Tells the CPP to undefine MEANING_OF_LIFE and redefine it as 42.

#ifndef IROCK
#define IROCK "You wish!"
#endif

Tells the CPP to define IROCK only if IROCK isn't defined already.

#ifdef DEBUG
/* Your debugging statements here */
#endif

Tells the CPP to include the enclosed statements only if DEBUG is defined. This is useful if you pass the -DDEBUG flag to gcc. This will define DEBUG, so you can turn debugging on and off on the fly!

int square(int x)
{
    return x * x;
}

We can instead rewrite this using a macro:

#define square(x) ((x) * (x))

A few things you should notice. First, in square(x) the left parenthesis must "cuddle" with the macro identifier. The next thing that should catch your eye are the parentheses surrounding the x's. These are necessary: what if we used this macro as square(1 + 1)? With the parentheses it expands to ((1 + 1) * (1 + 1)), which is 4 as expected; without them it would expand to 1 + 1 * 1 + 1, which evaluates to 3.

#define swap(x, y) { int tmp = x; x = y; y = tmp; }

Now we have swapping code that works. Why does this work? It's because the CPP just simply replaces text.
Wherever swap is called, the CPP will replace the macro call with the defined text. We'll go into how we can do this with pointers later.
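As a preview of that pointer technique, here is a sketch of a swap function that actually works: instead of copies of x and y, the function receives their addresses, so it can modify the caller's variables (the name swap_ints is ours, to avoid clashing with the macro above):

```cpp
#include <cassert>

/* Receive the addresses of the caller's variables rather than copies. */
void swap_ints(int *x, int *y)
{
    int tmp = *x;   /* save the value x points at */
    *x = *y;        /* store y's value where x points */
    *y = tmp;       /* store the saved value where y points */
}
```

The caller passes addresses with the & operator, as in swap_ints(&a, &b); the function then swaps the values those pointers refer to.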
The focus for the latest release of Ansible Container is on making builds faster through the availability of pre-baked Conductor images. The release landed this week thanks to the dedication of Joshua ‘jag’ Ginsberg, Ansible’s Chief Architect, who managed to put the finishing touches on the release while at AnsibleFest San Francisco. The Ansible Container project is dedicated to helping Ansible users re-use existing Ansible roles and playbooks to build containers, and deploy applications to OpenShift. The Conductor container is at the center of building, orchestrating, and deploying containers. It’s the engine that makes it all work, and it brings with it a copy of Ansible, a Python runtime, docker packages, and other dependencies. The first step, before any serious work gets done by the command line tool, is standing up a Conductor container. And up until now, that meant building the image from scratch, and waiting through all the package downloading and installing. This happens at the start of a project, and repeats anytime you find yourself needing to rebuild from scratch. With this release, the team has made available a set of pre-baked images based on several distributions that are popular within the community. These images are currently available on Docker Hub under the Ansible namespace. There you’ll find conductor images built on the following distributions: - Centos 7 - Fedora 24, 25, 26 - Debian jessie, stretch, wheezy - Ubuntu precise, trusty, xenial, zesty - Alpine 3.4, 3.5 From a user standpoint there’s no change to an existing project. As long as the base image for your Conductor is on the list above, Ansible Container will download and use the new images, rather than building from scratch. If the distribution you’re looking for isn’t available, or you want to build a custom Conductor image, there’s a guide on the project’s doc site. 
You can also follow the example in the project’s .travis.yml file, to see how a local copy of each image is built and tested. As the team continues working toward a 1.0 release, they plan to further refine the Conductor images. The road map currently includes adding security scans, and automating a nightly build and testing process based on the latest code in development. Other key features that landed with this release include support for deploying Secret objects to OpenShift. Secrets can be seeded using Ansible Vault files. You’ll find documentation in the container.yml reference guide. There’s also support for deploying multi-container pods, which you can learn about in the related guide. The added support for these deployment objects continues to improve Ansible Container’s ability to deploy complex applications. For a complete list of all the modifications and issues resolved in 0.9.2, check out the project change log. We’re excited to hear your feedback, so please reach out to us at the #ansible-container channel on irc.freenode.net, subscribe to the mailing list, or open an issue at the project repo.
https://www.ansible.com/blog/faster-builds-with-ansible-container-0.9.2
CC-MAIN-2022-27
refinedweb
506
54.73
Subject: C++ FAQ (part 06 of 14)
Summary: Please read this before posting to comp.lang.c++
Followup-To: comp.lang.c++
Reply-To: cline@parashift.com (Marshall Cline)
Approved: news-answers-request@mit.edu
Date: Mon, 17 Jun 2002 22:45:54 EDT

SECTION [11]: Destructors

[11.1] What's the deal with destructors?

A destructor gives an object its last rites. Destructors are used to release any resources allocated by the object. E.g., class Lock might lock a semaphore, and the destructor will release that semaphore. The most common example is when the constructor uses new, and the destructor uses delete. Destructors are a "prepare to die" member function. They are often abbreviated "dtor".

==============================================================================

[11.2] What's the order that local objects are destructed?

In reverse order of construction: First constructed, last destructed. In the following example, b's destructor will be executed first, then a's destructor:

 void userCode()
 {
   Fred a;
   Fred b;
   // ...
 }

==============================================================================

[11.3] What's the order that objects in an array are destructed?

In reverse order of construction: First constructed, last destructed. In the following example, the order for destructors will be a[9], a[8], ..., a[1], a[0]:

 void userCode()
 {
   Fred a[10];
   // ...
} ============================================================================== [11.4] Can I overload the destructor for my class? No. You can have only one destructor for a class Fred. It's always called Fred::~Fred(). It never takes any parameters, and it never returns anything. You can't pass parameters to the destructor anyway, since you never explicitly call a destructor[11.5] (well, almost never[11.10]). ============================================================================== [11.5] Should I explicitly call a destructor on a local variable? No! The destructor will get called again at the close } of the block in which the local was created. This is a guarantee of the language; it happens automagically; there's no way to stop it from happening. But you can get really bad results from calling a destructor on the same object a second time! Bang! You're dead! ============================================================================== [11.6] What if I want a local to "die" before the close } of the scope in which it was created? Can I call a destructor on a local if I really want to? No! [For context, please read the previous FAQ[11.5]]. Suppose the (desirable) side effect of destructing a local File object is to close the File. Now suppose you have an object f of a class File and you want File f to be closed before the end of the scope (i.e., the }) of the scope of object f: void someCode() { File f; // ... [This code that should execute when f is still open] ... // <-- We want the side-effect of f's destructor here! // ... [This code that should execute after f is closed] ... } There is a simple solution to this problem[11.7]. But in the mean time, remember: Do not explicitly call the destructor![11.5] ============================================================================== [11.7] OK, OK already; I won't explicitly call the destructor of a local; but how do I handle the above situation? [For context, please read the previous FAQ[11.6]]. 
Simply wrap the extent of the lifetime of the local in an artificial block {...}: void someCode() { { File f; // ... [This code will execute when f is still open] ... } // ^-- f's destructor will automagically be called here! // ... [This code will execute after f is closed] ... } ============================================================================== [11.8] What if I can't wrap the local in an artificial block? Most of the time, you can limit the lifetime of a local by wrapping the local in an artificial block ({...})[11.7]. But if for some reason you can't do that, add a member function that has a similar effect as the destructor. But do not call the destructor itself! For example, in the case of class File, you might add a close() method. Typically the destructor will simply call this close() method. Note that the close() method will need to mark the File object so a subsequent call won't re-close an already-closed File. E.g., it might set the fileHandle_ data member to some nonsensical value such as -1, and it might check at the beginning to see if the fileHandle_ is already equal to -1: class File { public: void close(); ~File(); // ... private: int fileHandle_; // fileHandle_ >= 0 if/only-if it's open }; File::~File() { } void File::close() { if (fileHandle_ >= 0) { // ... [Perform some operating-system call to close the file] ... fileHandle_ = -1; } } Note that the other File methods may also need to check if the fileHandle_ is -1 (i.e., check if the File is closed). Note also that any constructors that don't actually open a file should set fileHandle_ to -1. ============================================================================== [11.9] But can I explicitly call a destructor if I've allocated my object with new? Probably not. Unless you used placement new[11.10], you should simply delete the object rather than explicitly calling the destructor. 
For example, suppose you allocated the object via a typical new expression:

 Fred* p = new Fred();

Then the destructor Fred::~Fred() will automagically get called when you delete it via:

 delete p;     // Automagically calls p->~Fred()

You should not explicitly call the destructor, since doing so won't release the memory that was allocated for the Fred object itself. Remember: delete p does two things[16.8]: it calls the destructor and it deallocates the memory.

==============================================================================

[11.10] What is "placement new" and why would I use it?

There are many uses of placement new. The simplest use is to place an object at a particular location in memory. This is done by supplying the place as a pointer parameter to the new part of a new expression:

 #include <new>        // Must #include this to use "placement new"
 #include "Fred.h"     // Declaration of class Fred

 void someCode()
 {
   char memory[sizeof(Fred)];     // Line #1
   void* place = memory;          // Line #2
   Fred* f = new(place) Fred();   // Line #3 (see "DANGER" below)
   // The pointers f and place will be equal
   // ...
 }

Line #1 creates an array of sizeof(Fred) bytes of memory, which is big enough to hold a Fred object. Line #2 creates a pointer place that points to the first byte of this memory (experienced C programmers will note that this step was unnecessary; it's there only to make the code more obvious). Line #3 essentially just calls the constructor Fred::Fred(). The this pointer in the Fred constructor will be equal to place. The returned pointer f will therefore be equal to place. ADVICE: Don't use this "placement new" syntax unless you have to. Use it only when you really care that an object is placed at a particular location in memory. For example, when your hardware has a memory-mapped I/O timer device, and you want to place a Clock object at that memory location. DANGER: You are taking sole responsibility that the pointer you pass to the "placement new" operator points to a region of memory that is big enough and is properly aligned for the object type that you're creating. Neither the compiler nor the run-time system make any attempt to check whether you did this right.
If your Fred class needs to be aligned on a 4 byte boundary but you supplied a location that isn't properly aligned, you can have a serious disaster on your hands (if you don't know what "alignment" means, please don't use the placement new syntax). You have been warned. You are also solely responsible for destructing the placed object. This is done by explicitly calling the destructor: void someCode() { char memory[sizeof(Fred)]; void* p = memory; Fred* f = new(p) Fred(); // ... f->~Fred(); // Explicitly call the destructor for the placed object } This is about the only time you ever explicitly call a destructor. Note: there is a much cleaner but more sophisticated[11.14] way of handling the destruction / deletion situation. ============================================================================== [11.11] When I write a destructor, do I need to explicitly call the destructors for my member objects? No. You never need to explicitly call a destructor (except with placement new[11.10]). A class's destructor (whether or not you explicitly define one) automagically invokes the destructors for member objects. They are destroyed in the reverse order they appear within the declaration for the class. class Member { public: ~Member(); // ... }; class Fred { public: ~Fred(); // ... private: Member x_; Member y_; Member z_; }; Fred::~Fred() { // Compiler automagically calls z_.~Member() // Compiler automagically calls y_.~Member() // Compiler automagically calls x_.~Member() } ============================================================================== [11.12] When I write a derived class's destructor, do I need to explicitly call the destructor for my base class? No. You never need to explicitly call a destructor (except with placement new[11.10]).. class Member { public: ~Member(); // ... }; class Base { public: virtual ~Base(); // A virtual destructor[20.5] // ... }; class Derived : public Base { public: ~Derived(); // ... 
private: Member x_; }; Derived::~Derived() { // Compiler automagically calls x_.~Member() // Compiler automagically calls Base::~Base() } Note: Order dependencies with virtual inheritance are trickier. If you are relying on order dependencies in a virtual inheritance hierarchy, you'll need a lot more information than is in this FAQ. ============================================================================== [11.13] Should my destructor throw an exception when it detects a problem? Beware!!! See this FAQ[17.3] for details. ============================================================================== [11.14] Is there a way to force new to allocate memory from a specific memory area? [UPDATED!] [Recently two typos ("myPool" vs. "pool") in the code were fixed thanks to Randy Sherman (in 5/02).] Yes. The good news is that these "memory pools" are useful in a number of situations. The bad news is that I'll have to drag you through the mire of how it works before we discuss all the uses. But if you don't know about memory pools, it might be worthwhile to slog through this FAQ -- you might learn something useful! First of all, recall that a memory allocator is simply supposed to return uninitialized bits of memory; it is not supposed to produce "objects." In particular, the memory allocator is not supposed to set the virtual-pointer or any other part of the object, as that is the job of the constructor which runs after the memory allocator. Starting with a simple memory allocator function, allocate(), you would use placement new[11.10] to construct an object in that memory. In other words, the following is morally equivalent to "new Foo()": void* raw = allocate(sizeof(Foo)); // line 1 Foo* p = new(raw) Foo(); // line 2 Okay, assuming you've used placement new[11.10] and have survived the above two lines of code, the next step is to turn your memory allocator into an object. This kind of object is called a "memory pool" or a "memory arena." 
This lets your users have more than one "pool" or "arena" from which memory will be allocated. Each of these memory pool objects will allocate a big chunk of memory using some specific system call (e.g., shared memory, persistent memory, stack memory, etc.; see below), and will dole it out in little chunks as needed. Your memory-pool class might look something like this:

 class Pool {
 public:
   void* alloc(size_t nbytes);
   void dealloc(void* p);
 private:
   // ...data members used in your pool object...
 };

 void* Pool::alloc(size_t nbytes)
 {
   // ...your algorithm goes here...
 }

 void Pool::dealloc(void* p)
 {
   // ...your algorithm goes here...
 }

The reason it's good to turn Pool into a class is because it lets users create N different pools of memory rather than having one massive pool shared by all users. That allows users to do lots of funky things. For example, if they have a chunk of the system that allocates memory like crazy then goes away, they could allocate all their memory from a Pool, then not even bother doing any deletes on the little pieces: just deallocate the entire pool at once. Or they could set up a "shared memory" area (where the operating system specifically provides memory that is shared between multiple processes) and have the pool dole out chunks of shared memory rather than process-local memory. Another angle: many systems support a non-standard function often called. Okay, the simplest way to destruct/deallocate an object that was allocated via placement new[11.10] is to explicitly call the destructor then explicitly deallocate the memory:

 void sample(Pool& pool)
 {
   Foo* p = new(pool) Foo();
   ...
   p->~Foo();         // explicitly call dtor
   pool.dealloc(p);   // explicitly release the memory
 }

This has several problems, all of which are fixable:

 1. The memory will leak if Foo::Foo() throws an exception.
 2. The destruction/deallocation syntax is different from what most programmers are used to, so they'll probably screw it up.
 3. Users must somehow remember which pool the object came from.

The key to fixing the first problem is that, with a plain new expression, the compiler deallocates the memory if the ctor throws an exception.
But in the case of the "new with parameter" syntax (commonly called "placement new"), the compiler won't know what to do if the exception occurs so by default it does nothing:

 // This is functionally what happens with Foo* p = new(pool) Foo():
 Foo* p;
 void* raw = operator new(sizeof(Foo), pool);
   // the above function simply returns "pool.alloc(sizeof(Foo))"

Problems #2 ("ugly therefore error prone") and #3 ("users must manually associate pool-pointers with the object that allocated them, which is error prone") are solved simultaneously with an additional 10-20 lines of code in one place. In other words, we add 10-20 lines of code in one place (your Pool header file) and simplify an arbitrarily large number of other places (every piece of code that uses your Pool class). The idea is to implicitly associate a Pool* with every allocation, so that the normal deletion syntax can find the right pool[16.2].

==============================================================================

SECTION [12]: Assignment operators

[12.1] What is "self assignment"?

Self assignment is when someone assigns an object to itself. For example,

 #include "Fred.hpp"    // Declares class Fred

 void userCode(Fred& x)
 {
   x = x;   // Self-assignment
 }

Obviously no one ever explicitly does a self assignment like the above, but since more than one pointer or reference can point to the same object (aliasing), it is possible to have self assignment without knowing it:

 #include "Fred.hpp"    // Declares class Fred

 void userCode(Fred& x, Fred& y)
 {
   x = y;   // Could be self-assignment if &x == &y
 }

 int main()
 {
   Fred z;
   userCode(z, z);
 }

==============================================================================

[12.2] Why should I worry about "self assignment"?

If you don't worry about self assignment[12.1], you'll expose your users to some very subtle bugs that have very subtle and often disastrous symptoms.
For example, the following class will cause a complete disaster in the case of self-assignment:

 class Wilma { };

 class Fred {
 public:
   Fred() : p_(new Wilma()) { }
   Fred(const Fred& f) : p_(new Wilma(*f.p_)) { }
   ~Fred() { delete p_; }
   Fred& operator= (const Fred& f)
   {
     // Bad code: Doesn't handle self-assignment!
     delete p_;                // Line #1
     p_ = new Wilma(*f.p_);    // Line #2
     return *this;
   }
 private:
   Wilma* p_;
 };

If someone assigns a Fred object to itself, line #1 deletes both this->p_ and f.p_ since *this and f are the same object. But line #2 uses *f.p_, which is no longer a valid object. This will likely cause a major disaster. The bottom line is that you the author of class Fred are responsible to make sure self-assignment on a Fred object is innocuous[12.3]. Do not assume that users won't ever do that to your objects. It is your fault if your object crashes when it gets a self-assignment. Aside: the above Fred::operator= (const Fred&) has a second problem: If an exception is thrown[17] while evaluating new Wilma(*f.p_) (e.g., an out-of-memory exception[16.5] or an exception in Wilma's copy constructor[17.2]), this->p_ will be a dangling pointer -- it will point to memory that is no longer valid. This can be solved by allocating the new objects before deleting the old objects.

==============================================================================

[12.3] OK, OK, already; I'll handle self-assignment. How do I do it?

You should worry about self assignment every time you create a class[12.2]. An explicit test such as the following is one way to handle it:

 Fred& Fred::operator= (const Fred& f)
 {
   if (this == &f) return *this;   // Gracefully handle self assignment[12.1]

   // Put the normal assignment duties here...

   return *this;
 }

This explicit test isn't always necessary. For example, if you were to fix the assignment operator in the previous FAQ[12.2] to handle exceptions thrown by new[16.5] and/or exceptions thrown by the copy constructor[17.2] of class Wilma, you might produce the following code.
Note that this code has the (pleasant) side effect of automatically handling self assignment as well:

 Fred& Fred::operator= (const Fred& f)
 {
   // This code gracefully (albeit implicitly) handles self assignment[12.1]
   Wilma* tmp = new Wilma(*f.p_);   // It would be OK if an exception[17] got thrown here
   delete p_;
   p_ = tmp;
   return *this;
 }

In cases like the previous example (where self assignment is harmless but inefficient), some programmers want to improve the efficiency of self assignment by adding an otherwise unnecessary test, such as "if (this == &f) return *this;". It is generally the wrong tradeoff to make self assignment more efficient by making the non-self assignment case less efficient. For example, adding the above if test to the Fred assignment operator would make the non-self assignment case slightly less efficient (an extra (and unnecessary) conditional branch). If self assignment actually occurred once in a thousand times, the if would waste cycles 99.9% of the time.

==============================================================================

SECTION [13]: Operator overloading

[13.1] What's the deal with operator overloading?

It allows you to provide an intuitive interface to users of your class, plus makes it possible for templates[33.5] to work equally well with classes and built-in/intrinsic types. Operator overloading allows C/C++ operators to have user-defined meanings on user-defined types (classes). Overloaded operators are syntactic sugar for function calls:

 class Fred {
 public:
   // ...
 };

 #if 0
 // Without operator overloading:
 Fred add(Fred, Fred);
 Fred mul(Fred, Fred);

 Fred f(Fred a, Fred b, Fred c)
 {
   return add(add(mul(a,b), mul(b,c)), mul(c,a));   // Yuk...
   }
 #else
   // With operator overloading:
   Fred operator+ (Fred, Fred);
   Fred operator* (Fred, Fred);

   Fred f(Fred a, Fred b, Fred c)
   {
     return a*b + b*c + c*a;
   }
 #endif

==============================================================================

[13.2] What are the benefits of operator overloading?

By overloading standard operators on a class, you can exploit the intuition of the users of that class. This lets users program in the language of the problem domain rather than in the language of the machine.

The ultimate goal is to reduce both the learning curve and the defect rate.

==============================================================================

[13.3] What are some examples of operator overloading?

Here are a few of the many examples of operator overloading:
 * myString + yourString might concatenate two std::string objects
 * myDate++ might increment a Date object
 * a * b might multiply two Number objects
 * a[i] might access an element of an Array object
 * x = *p might dereference a "smart pointer" that actually "points" to a disk record -- it could actually seek to the location on disk where p "points" and return the appropriate record into x

==============================================================================

[13.4] But operator overloading makes my class look ugly; isn't it supposed to make my code clearer?

Operator overloading makes life easier for the users of a class[13.2], not for the developer of the class! Consider the following example.

 class Array {
 public:
   int& operator[] (unsigned i);   // Some people don't like this syntax
   // ...
 };

 inline int& Array::operator[] (unsigned i)   // Some people don't like this syntax
 {
   // ...
 }

Some people don't like the keyword operator or the somewhat funny syntax that goes with it in the body of the class itself. But the operator overloading syntax isn't supposed to make life easier for the developer of a class. It's supposed to make life easier for the users of the class.
==============================================================================

[13.5] What operators can/cannot be overloaded?

Most can be overloaded. The only C operators that can't be are . and ?: (and sizeof, which is technically an operator). C++ adds a few of its own operators, most of which can be overloaded except :: and .*.

Here's an example of the subscript operator (it returns a reference).

==============================================================================

[13.6] Can I overload operator== so it lets me compare two char[] using a string comparison?

No: at least one operand of any overloaded operator must be of some user-defined type[25.10] (and in any case, arrays are evil[33.1]).

==============================================================================

[13.7]

==============================================================================

[13.8] How do I create a subscript operator for a Matrix class?

Use operator() rather than operator[] when multiple parameters are needed. For example:

 class Matrix {
 public:
   Matrix(unsigned rows, unsigned cols);
   double& operator() (unsigned row, unsigned col);
   double  operator() (unsigned row, unsigned col) const;
   // ...
  ~Matrix();                              // Destructor
   Matrix(const Matrix& m);               // Copy constructor
   Matrix& operator= (const Matrix& m);   // Assignment operator
   // ...
 private:
   unsigned rows_, cols_;
   double* data_;
 };

 inline Matrix::Matrix(unsigned rows, unsigned cols)
   : rows_ (rows)
   , cols_ (cols)
   , data_ (new double[rows * cols])
 {
   if (rows == 0 || cols == 0)
     throw BadIndex("Matrix constructor has 0 size");
 }

 // ...

==============================================================================

[13.9] Why shouldn't my Matrix class's interface look like an array-of-array?

Here's what this FAQ is really all about: Some people build a Matrix class that has an operator[] that returns a reference to an Array object, and that Array object has an operator[] that returns an element of the Matrix (e.g., a reference to a double).
Thus they access elements of the matrix using syntax like m[i][j] rather than syntax like m(i,j)[13.8].

The array-of-array solution obviously works, but it is less flexible than the operator() approach[13.8].

As an example of when a physical layout makes a significant difference, a recent project happened to access the matrix elements in columns (that is, the algorithm accesses all the elements in one column, then the elements in another, etc.), and if the physical layout is row-major, the accesses can "stride the cache". For example, if the rows happen to be almost as big as the processor's cache size, the machine can end up with a "cache miss" for almost every element access.

In this particular project, we got a 20% improvement in performance by changing the mapping from the logical layout (row,column) to the physical layout (column,row). Of course there are many examples of this sort of thing from numerical methods, and sparse matrices are a whole other dimension on this issue. Since it is, in general, easier to implement a sparse matrix or swap row/column ordering using the operator() approach, the operator() approach loses nothing and may gain something -- it has no down-side and a potential up-side. Use the operator() approach[13.8].

==============================================================================

[13.10] Should I design my classes from the outside (interfaces first) or from the inside (data first)?

From the outside!

A good interface provides a simplified view that is expressed in the vocabulary of a user[7.3]. In the case of OO software, the interface is normally the set of public methods of either a single class or a tight group of classes[14.2].

First think about what the object logically represents, not how you intend to physically build it. For example, suppose you have a Stack class that will be built by containing a LinkedList.

The key insight is the realization that a LinkedList is not a chain of Nodes.
That may be how it is built, but that is not what it is. What it is is a sequence of elements. Therefore the LinkedList abstraction should provide a "LinkedListIterator" class as well, and that "LinkedListIterator" might have an operator++(). The code follows.

The important thing to notice is that LinkedList does not have any methods that let users access Nodes. Nodes are an implementation technique that is completely buried. This makes the LinkedList class safer (no chance a user will mess up the invariants and linkages between the various nodes), easier to use (users don't need to expend extra effort keeping the node-count equal to the actual number of nodes, or any other infrastructure stuff), and more flexible (by changing a single typedef, users could change their code from using LinkedList to some other list-like class and the bulk of their code would compile cleanly and hopefully with improved performance characteristics).

 #include <cassert>    // Poor man's exception handling

 class LinkedListIterator;
 class LinkedList;

 class Node {
   // No public members; this is a "private class"
   friend LinkedListIterator;   // A friend class[14]
   friend LinkedList;
   Node* next_;
   int elem_;
 };

 class LinkedListIterator {
 public:
   bool operator== (LinkedListIterator i) const;
   bool operator!= (LinkedListIterator i) const;
   void operator++ ();   // Go to the next element
   int& operator*  ();   // Access the current element
 private:
   LinkedListIterator(Node* p);
   Node* p_;
   friend LinkedList;
 };

Conclusion: The linked list had two different kinds of data. The values of the elements stored in the linked list are the responsibility of the user of the linked list (and only the user; the linked list itself makes no attempt to prohibit users from changing the third element to 5), and the linked list's infrastructure data (next pointers, etc.), whose values are the responsibility of the linked list (and only the linked list; e.g., the linked list does not let users change (or even look at!) the various next pointers).
Thus the only get()/set() methods were to get and set the elements of the linked list, but not the infrastructure of the linked list. Since the linked list hides the infrastructure pointers/etc., it is able to make very strong promises regarding that infrastructure.

Note: the purpose of this example is not to show you how to write a linked-list class. In fact you should not "roll your own" linked-list class since you should use one of the "container classes" provided with your compiler. Ideally you'll use one of the standard container classes[34.1] such as the std::list<T> template.

==============================================================================

SECTION [14]: Friends

[14.1] What is a friend?

Something to allow your class to grant access to another class or function. Friends can be either functions or other classes. A class grants access privileges to its friends. Normally a developer has political and technical control over both the friend and member functions of a class (else you may need to get permission from the owner of the other pieces when you want to update your own class).

==============================================================================

[14.2] Do friends violate encapsulation?

No! If they're used properly, they actually enhance encapsulation.

==============================================================================

[14.3] What are some advantages/disadvantages of using friend functions?

They provide a degree of freedom in the interface design options.

Member functions and friend functions are equally privileged (100% vested). The major difference is that a friend function is called like f(x), while a member function is called like x.f(). Thus the ability to choose between member functions (x.f()) and friend functions (f(x)) allows a designer to select the syntax that is deemed most readable, which lowers maintenance costs.
The major disadvantage of friend functions is that they require an extra line of code when you want dynamic binding. To get the effect of a virtual friend, the friend function should call a hidden (usually protected) virtual[20] member function. This is called the Virtual Friend Function Idiom[15.9]. For example:

 class Base {
 public:
   friend void f(Base& b);
   // ...
 protected:
   virtual void do_f();
   // ...
 };

 inline void f(Base& b)
 {
   b.do_f();
 }

 class Derived : public Base {
 public:
   // ...
 protected:
   virtual void do_f();   // "Override" the behavior of f(Base& b)
   // ...
 };

 void userCode(Base& b)
 {
   f(b);
 }

The statement f(b) in userCode(Base&) will invoke b.do_f(), which is virtual.

==============================================================================

[14.4] What does it mean that "friendship isn't inherited, transitive, or reciprocal"?

Just because I grant you friendship access to me doesn't automatically grant your kids access to me, doesn't automatically grant your friends access to me, and doesn't automatically grant me access to you.

 * ...

 * You don't necessarily trust me simply because I declare you my friend. The privileges of friendship aren't reciprocal. If class Fred declares that class Wilma is a friend, Wilma objects have special access to Fred objects but Fred objects do not automatically have special access to Wilma objects.

==============================================================================

[14.5] Should my class declare a member function or a friend function?

Use a member when you can, and a friend when you have to.

Sometimes friends are syntactically better (e.g., in class Fred, friend functions allow the Fred parameter to be second, while members require it to be first). Another good use of friend functions are the binary infix arithmetic operators. E.g., aComplex + aComplex should be defined as a friend rather than a member if you want to allow aFloat + aComplex as well (member functions don't allow promotion of the left hand argument, since that would change the class of the object that is the recipient of the member function invocation).
In other cases, choose a member function over a friend function.

==============================================================================
http://www.faqs.org/faqs/C++-faq/part6/
Synopsis_17 - Concurrency [DRAFT] Elizabeth Mattijsen <liz@dijkmat.nl> Audrey Tang <autrijus@autrijus.org> Maintainer: Elizabeth Mattijsen <liz@dijkmat.nl> Date: 13 Jun 2005 Last Modified: 13 Nov 2005 Number: 0 Version: 1 This is a rough sketch of how concurrency works in Perl 6. (actually these are just random notes, put here under the release-early release-often principle, slowly being integrated in a more textual format. Patches welcome!) Transactionable Code blocks which are marked "is atomic". These sections are guaranteed to either be completed totally (when the Code block is exited), or have their state reverted to the state at the start of the Code block (with the retry statement). (EM: maybe point out if / how old style locks can be "simulated", for those needing a migration path?) my ($x, $y); sub c is atomic { $x -= 3; $y += 3; if $x < 10 { retry } }; $e = &c.retry_with( &d ); # $e(); if $i { is atomic; ... } else { ...; } A Code block can be marked as "is atomic". This means that code executed inside that scope is guaranteed not to be interrupted in any way. The start of a block marked "is atomic" also becomes a "checkpoint" to which execution can return (in exactly the same state) if a problem occurs (a.k.a. a retry is done) inside the scope of the Code block. The retry function basically restores the state of the thread at the last checkpoint and will wait there until an external event allows it to potentially run that atomic section of code again without having to retry again. If there are no external events possible that could restart execution, an exception will be raised. The last checkpoint is either the last atomic / non-atomic boundary, or the most immediate caller constructed with retry_with. The retry_with method on an atomic Code object causes a checkpoint to be made for retry, creating an alternate execution path to be followed when a retry is done. 
Because Perl 6 must be able to revert its state to the state it had at the checkpoint, it is not allowed to perform any non-revertible actions. These would include reading / writing from file handles that do not support seek (such as sockets). Attempting to do so will cause a fatal error to occur.

If you're not interested in revertibility, but are interested in uninterruptibility, you could use the "is critical" trait.

  sub tricky is critical {
      # code accessing external info, not to be interrupted
  }

  if ($update) {
      is critical;
      # code accessing external info, not to be interrupted
  }

A Code block marked "is critical" cannot be interrupted in any way. But since it is able to access non-revertible data structures (such as non-seekable file handles), it cannot do a retry.

The execution of a co-routine (or "coro" for short) could be considered as a short "side-step" from the normal path of execution, much like the normal calling of a subroutine. The main difference with a normal subroutine is that the co-routine supports a special type of return, called "yield". (EM: not sure whether the "threads->yield" causes so much mental interference that we should use something else for "yield" in the coro context. And whether we should have a separate "coro" keyword at all: after all, the "yield" could be in a normal subroutine called from a coro, so it's not like the compiler would be allowed to flag "yield" in a sub as an error.)

#######################################################################
Below here still the more or less unorganized stuff

  CORE::GLOBAL::exit;  # kills all the threads

  # We intentionally do not list cross-machine parallelism Conc:: classes here.
  # Consult your local 6PAN mirror with a time machine.
  use Conc::Processes; # fork() or createProcess based implementation
  use Conc::Threads;   # maybe it just exports &async to override the default one, yay
  use Conc::Multiplex; # this is default

  my $thr = async {
      ...do something...
      END { }
  };

  Conc::Thread.this
  Conc::Proc.this

  Conc object # name is still up for grabs!
  - numify to TIDs (as in pugs)
  - stringify to something sensible (eg. "<Conc:tid=5>");
  - enumerable with Conc.list
  - Conc.yield (if this is to live but deprecated, maybe call it sleep(0)?)
  - sleep() always respects other threads, thank you very much
  - standard methods:
    - .join    # wait for invocant to finish (always item cxt)
    - .die     # throw exception in the invocant thread
    - .alarm   # set up alarms
    - .alarms  # query existing alarms
    - .suspend # pause a thread; fail if already paused
    - .resume  # revive a thread; fail if already running
    - .detach  # survives parent thread demise (promoted to process)
               # process-local changes no longer affects parent
               # tentatively, the control methods still applies to it
               # including wait (which will always return undef)
               # also needs to discard any atomicity context
  - attributes:
    - .started  # time
    - .finished # time
    - .waiting  # suspended (not diff from block on wakeup signal)
                # waiting on a handle, a condition, a lock, et cetera
                # otherwise returns false for running threads
                # if it's finished then it's undef(?)
    - .current_continuation # the CC currently running in that thread
  - "is throttled" trait (method throttled::trait_auxiliary: implemented using atomic+retry)

  class Foo {
      method a is throttled(:limit(3) :key<blah>) { ... }
      method b is throttled(:limit(2) :key<blah>) { ... }
  }
  my Foo $f .= new;
  async { $f.a }
  async { $f.b }

  - Thread::Status
  - IO objects and containers gets concurrency love!
    - $obj.wake_on_readable
    - $obj.wake_on_writable
    - $obj.wake_on_either_readable_or_writable_or_passed_time(3); # fixme fixme
    - $obj.wake_on:{.readable} # busy wait, probably

  my @a is Array::Chan = 1..Inf;
  async { @a.push(1) };
  async { @a.blocking_shift({ ... }) };
  async { @a.unshift({ ...
}) }; Communication abstractions - shared, transactional variables by default # program will wait for _all_ threads # unjoined threads will be joined at the beginning of the END block batch # of the parent thread that spawned them ### INTERFACE BARRIER ### module Blah; { is atomic; # retry/orelse/whatever other rollback stuff # limitation: no external IO (without lethal warnings anyway) # can't do anything irreversible fo ### my $sym; threads.new({ use Blah; BEGIN { require(Blah).import } my $boo; BEGIN { eval slurp<Blah.pm>; $boo := $Blah::boo }; ... }); Asynchronous exceptions are just like user-initiated exceptions with die, so you can also catch it with regular CATCH blocks as specified in S04. To declare your main program catches INT signals, put a CATCH block anywhere in the toplevel to handle exceptions like this: CATCH { when Error::Signal::INT { ... } }. Under supressed. ## braindump of coro meeting by Liz and Autri, more to follow - Coros are _like_ processes coro dbl { yield $_ * 2; yield $_; return }; my @x = 1..10; my %y = map &dbl, @x; # 2 => 2, 6 => 4, 10 => 6, ... coro perm (@x) { @x.splice(rand(@x),1).yield while @x; } my &p1 := &perm.start(1..10); my &p2 := &perm.start(1..20); p1(); p1(); p2(); p2(); coro foo { yield 42 }; (1..10).pick; coro foo ($x) { yield $x; yield $x+2; cleanup(); while (2) { while (1) { &?SUB.kill; # seppuku } } } # implicit falloff return + return() means startover without yielding # return() means yielding and restart + no implicit falloff (I LIKE THIS) &foo.finished; # true on return() and false on midway yield() foo(4); # and that's all she wrote coro foo ($x) { yield $x; # this point with $x bound to 10 yield $x+1; return 5; ... # this is never reached, I think we all agree } # If you don't want your variables to get rebound, use "is copy": coro foo ($x is copy) {...} # which is sugar for coro foo ($x) { { my $x := $OUTER::x; ...; # Further calls of &foo rebound $OUTER::x, not $x. } } sub foo { return undef if rand; ... 
} use overload { '&{}' => sub { ... } } class Coro is Conc::Multiplex does Code { method postcircumfix:<( )> { # start the thread, block stuff (we are in the caller's context) } } class Hash is extended { method postcircumfix:<( )> (&self: *@_) { &self = ./start(@_); } method start { # remember self # upon return() or normal falloff, restore self } } %ENV(123); &foo_continued := &foo.start(10); &foo.start(20); foo(10); # returns 10 foo(); # be "insufficient param" error or just return 11? foo(20); # returns 21 # continuation coros multi foo () { ...no rebinding... } multi foo ($x) { ...rebinding... } &foo.kill; my $first_ret = zoro( type => <even> ); &zoro.variant(:type<even>).kill; &zoro.variant(type => 'even').kill; zoro( type => <odd> ); zoro( even => 1 ); zoro( odd => 1 ); multi coro zoro ($type where 'even') {} multi coro zoro ($type where 'odd') {} multi coro zoro ($even is named) {} multi coro zoro ($odd is named) {} # iblech's thoughts: # Coroutine parameters should never be rebound. Instead, yield(...)s return # value is an Arglist object containing the new arguments: coro bar ($a, $b) { ...; my $new_set_of_args = yield(...); my $sum_of_old_a_and_new_a = $a + $new_set_of_args<$a>; ...; } bar(42, 23); # $a is 42, $b is 23 bar(17, 19); # $a still 42, $b still 19, # $new_set_of_args is \(a => 17, b => 19) Live in userland for the time being.
http://search.cpan.org/~lichtkind/Perl6-Doc/lib/Perl6/Doc/Design/S17.pod
This Python programming challenge is adapted from a challenge on HackerRank called Ransom Note, which is part of a collection involving hash tables. If you are unfamiliar with HackerRank, you can read about it here: Introduction to HackerRank for Python Programmers. The problem descriptions on HackerRank are sometimes a bit obscure, and one of the skills you need to develop to solve the challenges is the ability to work out exactly what is being asked. Sometimes it’s easier to go straight for the input/output required to get a feel for what is required, and then read the description to see how it leads to the problem specification. The basic idea with the Ransom Note challenge is that you have two lists of values, and you need to determine if one list can be formed from elements in the other list. For example thinking of sentences as lists of words, give me one grand today night contains give one grand today, so the answer is Yes whereas for two times three is not four containing two times two is four, the answer is No, as although all the words are present in the second list, there are not enough instances of the word two. Make sense? Have a go for yourself now. At this stage, don’t worry about the efficiency of your solution – instead just go for a brute force approach to get a feel for the problem. Here’s a stub and some tests to get you started. The goal is to complete the checkMagazine() function to get the tests to pass. With assert tests, you will know they have passed if you run your code and get no AssertionError – i.e. nothing happens. This is good. Note that in the problem on HackerRank, the answer is printed as Yes or No rather than returned as a Boolean. 
 def checkMagazine(magazine, note):
     pass

 magazine = "give me one grand today night".split()
 note = "give one grand today".split()
 assert checkMagazine(magazine, note) is True

 magazine = "two times three is not four".split()
 note = "two times two is four".split()
 assert checkMagazine(magazine, note) is False

Brute Force Solution to Ransom Note Python Challenge

Here's my original attempt. Can you see what my thinking was? I tried to remove each item in note from magazine, but if this raised an exception, I set the return value to False.

 def checkMagazine(magazine, note):
     for word in note:
         try:
             del magazine[magazine.index(word)]
         except Exception as e:
             return False
     return True

 magazine = "give me one grand today night".split()
 note = "give one grand today".split()
 assert checkMagazine(magazine, note) is True

 magazine = "two times three is not four".split()
 note = "two times two is four".split()
 assert checkMagazine(magazine, note) is False

Python Counters

The solution above, and likely many other brute force solutions, passes most of the tests on HackerRank, but there are a few where it times out. We need to do better. There is a big hint in the fact that this challenge occurs in a collection about hash tables. In Python this means we are probably going to use a dictionary. However, since this dictionary will contain counts of the various words in our lists, it makes sense to use the specialised dictionary type available in Python called Counter. You can see a Python Counter in action in the following example:

 from collections import Counter

 note = "give one grand today".split()
 note_counter = Counter(note)
 print(note_counter)

If for any reason using a specialised tool such as collections.Counter is forbidden (e.g.
you are studying a syllabus which doesn't encourage "that kind of thing"), you can create a counter dictionary manually by doing something like this:

 magazine = "give me one grand today night".split()

 freq = {}
 for word in magazine:
     if word in freq:
         freq[word] += 1
     else:
         freq[word] = 1

 print(freq)

Solution to Ransom Note Challenge in Python

One final piece you might find helpful in writing an efficient solution to the Ransom Note challenge is the intersection operator, as in Counter(a) & Counter(b). This returns the minimum of corresponding counts.

With all that at your disposal, have a good attempt at the problem for yourself now, either using the stub and tests from above, or on the HackerRank site. Good luck. My solution is below for reference when you are ready.

 from collections import Counter

 def checkMagazine(magazine, note):
     mag_counter = Counter(magazine)
     note_counter = Counter(note)
     return mag_counter & note_counter == note_counter

 magazine = "give me one grand today night".split()
 note = "give one grand today".split()
 assert checkMagazine(magazine, note) is True

 magazine = "two times three is not four".split()
 note = "two times two is four".split()
 assert checkMagazine(magazine, note) is False

In this post we have looked at the Ransom Note challenge from HackerRank, and how to solve it with Python. I hope you found it interesting and helpful. Happy computing!
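As a postscript: if the & intersection operator on Counter objects is new to you, here is a tiny standalone snippet (the word lists are made up purely for illustration) showing the "minimum of corresponding counts" rule in action:

```python
from collections import Counter

# Hypothetical word lists to illustrate the & operator on Counters.
magazine = Counter("two times two is two".split())
note = Counter("two times two".split())

# & keeps each word at the MINIMUM of its two counts.
overlap = magazine & note
print(overlap)            # Counter({'two': 2, 'times': 1})

# The note can be written exactly when the overlap equals the note's counts.
print(overlap == note)    # True
```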
https://compucademy.net/ransom-note-hackerrank-challenge-in-python/
Auc

class paddle.fluid.metrics.Auc(name, curve='ROC', num_thresholds=4095)

The auc metric is for binary classification. Please notice that the auc metric is implemented with Python, which may be a little bit slow. If you are concerned about speed, please use fluid.layers.auc instead.

NOTE: only the ROC curve type is implemented via Python now.

Examples

 import paddle.fluid as fluid
 import numpy as np

 # init the auc metric
 auc_metric = fluid.metrics.Auc("ROC")

 # suppose that batch_size is 128
 batch_num = 100
 batch_size = 128

 for batch_id in range(batch_num):
     class0_preds = np.random.random(size = (batch_size, 1))
     class1_preds = 1 - class0_preds
     preds = np.concatenate((class0_preds, class1_preds), axis=1)
     labels = np.random.randint(2, size = (batch_size, 1))
     auc_metric.update(preds = preds, labels = labels)

     # shall be some score close to 0.5 as the preds are randomly assigned
     print("auc for iteration %d is %.2f" % (batch_id, auc_metric.eval()))

update(preds, labels)

Update the auc curve with the given predictions and labels.

- Parameters:
  - preds (numpy.array) - a numpy array in the shape of (batch_size, 2); preds[i][j] denotes the probability of classifying the instance i into the class j.
  - labels (numpy.array) - a numpy array in the shape of (batch_size, 1); labels[i] is either 0 or 1, representing the label of the instance i.

eval()

Return the area (a float score) under the auc curve.

- Returns: the area under the auc curve
- Return type: float

get_config()

Get the metric and current states. The states are the members that do not have a "_" prefix.

- Returns: a python dict, which contains the inner states of the metric instance
- Return type: a python dict

reset()

The reset function empties the evaluation memory for previous mini-batches.

- Returns: None
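To make the role of num_thresholds concrete, the following is a rough pure-Python sketch of the binned ROC-AUC idea: prediction scores are histogrammed into threshold buckets, and the ROC curve is traced by sweeping the thresholds. This is an illustration of the concept only, not Paddle's actual implementation, and the helper name binned_auc is made up here:

```python
# A rough pure-Python sketch of binned ROC-AUC -- illustration only,
# NOT Paddle's implementation. binned_auc is a hypothetical helper name.

def binned_auc(preds, labels, num_thresholds=4095):
    # Histogram positive/negative scores into threshold buckets.
    pos = [0] * (num_thresholds + 1)
    neg = [0] * (num_thresholds + 1)
    for p, y in zip(preds, labels):
        bucket = int(p * num_thresholds)
        if y == 1:
            pos[bucket] += 1
        else:
            neg[bucket] += 1

    total_pos = sum(pos) or 1
    total_neg = sum(neg) or 1

    # Sweep thresholds from high to low, tracing the ROC curve,
    # and accumulate the area with the trapezoid rule.
    tp = fp = 0
    prev_tpr = prev_fpr = 0.0
    area = 0.0
    for bucket in range(num_thresholds, -1, -1):
        tp += pos[bucket]
        fp += neg[bucket]
        tpr, fpr = tp / total_pos, fp / total_neg
        area += (fpr - prev_fpr) * (tpr + prev_tpr) / 2
        prev_tpr, prev_fpr = tpr, fpr
    return area

# A perfectly separable toy example should score 1.0.
print(binned_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

A larger num_thresholds means a finer discretisation of the score range, and so a closer approximation to the exact AUC.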
https://www.paddlepaddle.org.cn/documentation/docs/en/api/metrics/Auc.html
 Traceback (most recent call last):
   File "/usr/local/bin/bitbake", line 23, in ?
     import bb

maybe this is really obvious, but I have no idea why I am getting this error or how to fix it. I'm on debian sid. the system standard python is 2.3. I changed the bitbake, bb** etc 1st lines from "python" to "python2.4" as suggested by the wiki, after I had been getting "cannot open conf/bitbake.conf" errors (I have had to rebuild my OE). apparently the thing wants "bb" which refers to..what? bitbake? a bbfiles dir? a module for interpreting bbfiles? help? pointers? link to the undiscovered-by-me wikitopia from whose bourne no user returns uninformed?
http://www.oesf.org/forum/lofiversion/index.php/t12006.html
Which semantic similarity measures in WordNet are available to use on Python with NLTK?

On the web page "WordNet Interface" (), which introduces the commands to use semantic similarity measures in WordNet on Python with NLTK, only the following 6 semantic similarity measures are listed: Path Similarity, Wu-Palmer Similarity, Leacock-Chodorow Similarity, Resnik Similarity, Jiang Similarity, and Lin Similarity. How about other semantic similarity measures: Li's measure, feature-based, and hybrid measures? Are they not available to use on Python?

According to the academic thesis "A Review of Semantic Similarity Measures in WordNet" (), semantic similarity measures in WordNet are categorized into four classes: path length based, information content based, feature-based, and hybrid measures. In more detail, path length based measures include the Shortest-path based Measure, Wu & Palmer's Measure, Leacock & Chodorow's Measure, and Li's Measure. Information content based measures include Resnik's Measure, Lin's Measure, and Jiang's Measure.

- Python Spacy NLP - TypeError: Argument 'string' has incorrect type (expected unicode, got str)

I was getting the following error while I was trying to read a txt file in spacy.

 TypeError: Argument 'string' has incorrect type (expected unicode, got str)

Here is the code below:

 from __future__ import unicode_literals
 import spacy

 nlp = spacy.load('en')
 doc_file = nlp(open("example.txt").read())

- How to find the most decisive sentences or words in a document via Doc2Vec?

I've trained a Doc2Vec model in order to do a simple binary classification task, but I would also love to see which words or sentences weigh more in terms of contributing to the meaning of a given text. So far I had no luck finding anything relevant or helpful. Any ideas how I could implement this feature? Should I switch from Doc2Vec to more conventional methods like tf-idf?
- How to label text data for training (Text classification) I have unlabelled data in text form (clinical notes). I want to classify whether these notes contains Personal information or not. For text Classification I need a training set. I want to know how can i go about labelling the data and how much data I will need to train the model effectively. The Personal information is mainly of 17 types. Thanks. - Python - Using GridSearchCV with NLTK I'm a little unsure as to how I can apply SKLearn's GridSearchCV to a random forest I'm using with NLTK. How to use GridSearchCV normally is discussed here, however my data is formatted differently to the standard x and y split. Here is my code: import nltk import numpy as np from nltk.classify.scikitlearn import SklearnClassifier from nltk.corpus.reader import CategorizedPlaintextCorpusReader from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.svm import SVC reader_train = CategorizedPlaintextCorpusReader('C:/Users/User/Documents/Sentiment/machine_learning/amazon/amazon/', r'.*\.txt', cat_pattern=r'(\w+)/*', encoding='latin1') documents_train = [ (list(reader_train.words(fileid)), category) for category in reader_train.categories() for fileid in reader_train.fileids(category) ] all_words = [] for w in reader_train.words(): all_words.append(w.lower()) all_words = nltk.FreqDist(all_words) word_features = list(all_words.keys())[:3500] def find_features(documents): words = set(documents) features = {} for w in word_features: features[w] = (w in words) return features featuresets_train = [(find_features(rev), category) for (rev, category) in documents_train] np.random.shuffle(featuresets_train) training_set = featuresets_train[:1600] testing_set = featuresets_train[:400] RandFor = SklearnClassifier(RandomForestClassifier()) RandFor.train(training_set) print("RandFor accuracy:", (nltk.classify.accuracy(RandFor, testing_set)) *100) This code, instead of producing a 
conventional x and y split, produces a list of tuples, where each tuple is in the following format:
({'i': True, 'am': False, 'conflicted': False ... 'about': False}, neg)
Is there a way to apply GridSearchCV to data in this format?
- Scipy Crashes on Apache Server
I have a Python 2.7 script using NLTK that runs fine at a command prompt. When I ran it in a local Apache server on the same machine, it crashed and the error log showed "NotImplementedError: cannot determine number of cpus" while loading scipy. More detailed messages are shown below. Has anyone else had the same problem?
.....
[Thu Aug 09 18:07:50 2018] [error] [client ::1] File "interpnd.pyx", line 1, in init scipy.interpolate.interpnd\r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] File "C:\Anaconda2\lib\site-packages\scipy\spatial\__init__.py", line 95, in \r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] from .ckdtree import *\r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] File "ckdtree.pyx", line 31, in init scipy.spatial.ckdtree\r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] File "C:\Anaconda2\lib\multiprocessing\__init__.py", line 136, in cpu_count\r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] raise NotImplementedError('cannot determine number of cpus')\r, referer:
[Thu Aug 09 18:07:50 2018] [error] [client ::1] NotImplementedError: cannot determine number of cpus\r, referer:
- How to See Python Error Messages for Scripts Running on Apache Server?
I have a server-side Python script that imports a big package called nltk. It ran at a command prompt, but would not run in an Apache server. I tried the logging module, put it as the first import, and created a log file immediately after. But nothing is written to the file before the script crashes. Is there a way to see the "ImportError: no module ..." when the script runs on Apache?
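For the Apache logging question above, one common workaround — a sketch only, not Apache-specific configuration; `safe_import` and the log path are made-up names — is to trap import-time failures yourself and write the traceback to a file before the process dies:

```python
import traceback

def safe_import(module_name, log_path):
    """Import a module; on failure, append the full traceback to log_path and re-raise."""
    try:
        return __import__(module_name)
    except Exception:
        with open(log_path, "a") as f:
            f.write(traceback.format_exc())
        raise
```

Calling something like `safe_import('nltk', '/tmp/import_errors.log')` as the very first statement of the script would record the ImportError that Apache otherwise swallows.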
- Reorder strings using a similarity score algorithm
Re-order a set of strings {buzz, fuzz, jazz, fizz, ...} so that the sum of similarity scores between each pair of adjacent strings is the lowest.
buzz -> fuzz (1)
fuzz -> jazz (2)
jazz -> fizz (2)
The sum of the scores is 5. If reordered based on the lowest sum (4), the final output is {buzz, fuzz, fizz, jazz, ...}:
buzz -> fuzz (1)
fuzz -> fizz (1)
fizz -> jazz (2)
My approach is to find the edit distance for every pair of strings and construct a weighted graph where each edge represents the edit distance value, then use DFS to find the lowest path. Is this an efficient solution? Can it be done any better?
- Efficient way to replace substrings from a list
Hi, I have a large document saved as a sentence and a list of proper names that might be in the document. I would like to replace instances of the list with the tag [PERSON], e.g.:
sentence = "John and Marie went to school today....."
list = ["Maria", "John"....]
result = [PERSON] and [PERSON] went to school today
As you can see, there might be variations of the name that I still want to catch, like Maria and Marie, as they are spelled differently but close. I know I can use a loop, but since the list and the sentence are large there might be a more efficient way to do this. Thanks.
- Semantic similarity among documents to do clustering in Python
I have around 1000 documents (text, like paragraphs). I want to find similarities among the documents in order to cluster them. Finally, I want to do hierarchical clustering. I want to implement this in Python. How should I proceed?
- Is there any API to get all words regarding a particular topic available in English grammar
I am working on NLP with Python and my next step is to gather huge amounts of data regarding specific topics available in English grammar. For example: all words that can define a "Department", say "Accounts". So can anyone tell me how I can gather such data (if possible, through any API)?
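The reordering question above depends on pairwise edit distance. A minimal dynamic-programming Levenshtein sketch (the standard algorithm, not the asker's code) reproduces the scores quoted in the question:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, using a rolling row."""
    # prev[j] holds the distance between a[:i-1] and b[:j] as we sweep row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free on a match)
        prev = cur
    return prev[-1]

print(edit_distance("buzz", "fuzz"))  # 1
print(edit_distance("fuzz", "jazz"))  # 2
```

With all pairwise distances in hand, the proposed graph can be built; note that finding the minimum-weight ordering of all strings is the traveling-salesman-path problem, so plain DFS will not scale beyond small sets.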
- Looping through Lemmas in NLTK Wordnet
I have a script for getting Italian synonyms from Wordnet like this:

from nltk.corpus import wordnet as wn
it_lemmas = wn.lemmas("problema", lang="ita")
hypernyms = it_lemmas[0].synset().hypernyms()
print(hypernyms[0].lemmas(lang="ita"))

When I do the looping I get a message that list indices must be integers or slices, not Lemma. How should I do the looping to get not only one value ([0]) but all the values in this list (the amount can be different) and print them all?
- How to get only the part of speech from pos_tag in Python
Hi everybody, I want to get only the POS tag, like "JJ" etc., of a word. How do I get this from the list returned by pos_tag? I am able to print this result:

list1 = nltk.pos_tag(words)
print(list1)
>> [('good', 'JJ')]

Now my question is how to separate the word and the POS tag from the above result list. I want to store the word in a myword variable and 'JJ' in a mypos variable. Please store 'good' and 'JJ' into two different variables and print them separately.
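For the last question: `nltk.pos_tag` returns a list of (word, tag) tuples, so plain tuple unpacking separates the two. The sketch below uses a hard-coded result rather than a live tagger, to avoid depending on downloaded NLTK models:

```python
# Stand-in for list1 = nltk.pos_tag(['good']), which returns [('good', 'JJ')].
list1 = [('good', 'JJ')]

myword, mypos = list1[0]   # tuple unpacking: word first, tag second
print(myword)  # good
print(mypos)   # JJ

# For a whole tagged sentence, loop and unpack each pair:
for word, pos in list1:
    print(word, pos)
```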
http://quabr.com/51277827/which-semantic-similarity-measures-in-wordnet-are-available-to-use-on-python-wit
IRC log of swbp on 2005-02-23 Timestamps are in UTC. 15:57:48 [RRSAgent] RRSAgent has joined #swbp 15:57:48 [RRSAgent] logging to 15:57:59 [Ralph] Meeting: SWBPD RDF-in-XHTML TF 15:58:23 [Ralph] Agenda: 15:58:29 [Ralph] Chair: Ben Adida 15:59:00 [Zakim] Zakim has joined #swbp 15:59:55 [Ralph] Regrets: Mark Birbeck, Jeremy Carroll 16:01:36 [Ralph] phooey 16:01:46 [Ralph] conf code overlap 16:01:59 [Ralph] I see 16:02:29 [Zakim] SW_BPD(html)11:00AM has now started 16:02:32 [Ralph] fixed 16:02:34 [DanC] DanC has joined #swbp 16:02:36 [Zakim] +Ralph 16:02:40 [Zakim] +Dom 16:02:51 [Zakim] +DanC 16:03:33 [DanC] DanC has changed the topic to: RDF/XHTML 23 Feb 16:04:17 [DanC] agenda? 16:04:20 [DanC] agenda + status of RDF/A 16:04:23 [Zakim] +Ben_Adida 16:04:26 [DanC] agenda + GRDDL 16:04:36 [DanC] agenda -2 16:04:37 [DanC] agenda -1 16:04:47 [DanC] agenda + status of RDF/A [Ben_Adida] 16:04:54 [DanC] agenda + GRDDL [Ben_Adida] 16:04:58 [DanC] agenda + TAG update 16:05:07 [DanC] agenda + TP Prep 16:05:17 [benadida] benadida has joined #swbp 16:05:24 [Ralph] zakim, who's here? 16:05:24 [Zakim] On the phone I see Ralph, Dom, DanC, Ben_Adida 16:05:25 [Zakim] On IRC I see benadida, DanC, Zakim, RRSAgent, dom, Ralph 16:05:35 [DanC] Regrets: JeremyC, MarkB 16:05:51 [DanC] Zakim, agenda? 16:05:51 [Zakim] I see 4 items remaining on the agenda: 16:05:53 [Zakim] 1. status of RDF/A [from Ben_Adida via DanC] 16:05:55 [Zakim] 2. GRDDL [from Ben_Adida via DanC] 16:05:56 [Zakim] 3. TAG update [from DanC] 16:05:57 [Zakim] 4. TP Prep [from DanC] 16:06:25 [DanC] Zakim, take up item 1 16:06:25 [Zakim] agendum 1. "status of RDF/A" taken up [from Ben_Adida via DanC] 16:06:45 [Ralph] Ben: the last official word about RDF/A was at the W3C AC meeting the start of December 16:07:11 [Ralph] ... 
according to Steven Pemberton, the XHTML 2.0 Last Call WD is dependent upon getting RDF/A into the WD 16:08:28 [Ralph] DanC: WGs are supposed to publish something every 3 months and the HTML WG is past that time 16:09:24 [DanC] (hmm... roadmap update? nope. $Date: 2004/09/15 10:54:53 $ ) 16:12:04 [Ralph] Ben: should the RDFHTML TF take up GRDDL? 16:12:13 [Ralph] DanC: yes! I am shopping this around to various communities 16:12:30 [Ralph] ... specific customers include RDDL 16:13:21 [Ralph] -> Resource Directory Description Language 16:13:34 [Ralph] DanC: RDDL uses XML and XLink 16:13:48 [Ralph] ... introduces terms 'nature' and 'purpose' 16:14:23 [Ralph] ... DTD is a related resource; RDDL would say 'has nature DTD' 16:14:53 [Ralph] ... natures are like rdf Classes, purposes are like rdf Properties 16:15:57 [Ralph] ... Henry Thompson, editor of XML Schema spec, responsible for the W3C XML Schema validation service, has added RDDL support to the validation service 16:16:20 [Ralph] ... so the W3C XML Schema validator will follow pointers from namespace documents using RDDL 16:16:38 [Ralph] ... this makes RDDL a useful case for GRDDL 16:16:55 [Ralph] ... there exist transformations from RDDL to RDF 16:17:15 [Ralph] ... Henry has swapped this in in the context of TAG discussions 16:17:45 [Ralph] Dom: GRDDL is currently published as a Coordination Group Note 16:18:13 [DanC] (er... RDDL was one of N) 16:18:13 [Ralph] ... one goal for taking GRDDL into the RDFHTML TF is to give it more standing 16:19:02 [Ralph] -> Gleaning Resource Descriptions from Dialects of Languages (GRDDL) 16:19:23 [Ralph] DanC: unclear if "CG Note" status is enough standing for the RDDL community 16:20:57 [DanC] -> Creative Commons GRDDL story / demonstration Eric Miller, C. M. Sperberg-McQueen 15 October 2004, rev. 
2 February 2005 (in progress) 16:22:24 [DanC] -> Integrating Data from Multiple XML Schemas with GRDDL and RDF (slides, by DanC, in progress) 16:22:50 [Ralph] DanC: (continuing on applications)... trackback 16:23:16 [Ralph] ... trackback has an RDF idiom 16:23:22 [Ralph] ... uses RDF in XML comments 16:23:38 [Ralph] ... would be nice to take the comment markup out and use XSLT 16:24:29 [Ralph] ... for these users, making the containing document be XML is apparently too high a barrier 16:24:56 [TomSaywer] boy... trackback in wordpress with GRDDL... fun fun fun! 16:25:44 [Ralph] ... also SHOE community 16:26:15 [Ralph] ... really, SHOE/DAML/etc. 16:26:27 [Ralph] ... some DAML users are still out there 16:27:14 [Ralph] ... Creative Commons 16:27:27 [DanC] (and FOAF? DOAP? DOML?) 16:27:35 [Ralph] Ben: most Creative Commons uses are Dublin Core with some additional properties 16:27:53 [Ralph] ... but most people will not change the profile attribute in the document head 16:28:35 [Ralph] Ralph: Eric Miller and Michael Sperberg-McQueen are working on a proposal that would permit the GRDDL profile to be named in the XML Schema 16:28:42 [Ralph] ... that still doesn't cover the trackback case 16:28:52 [Ralph] Dom: another option is an HTTP header 16:28:57 [DanC] (hmm... are there wordpress plug-ins for creative commons? Joe Lambda's blog could model all this cool stuff) 16:29:27 [Ralph] ... also could recommend to implementors of GRDDL processors to implement some default behaviors for given doctypes 16:29:45 [DanC] (guerrilla standardization ;-) 16:30:19 [Ralph] Dom: e.g. recommend to GRDDL implementors to apply certain transforms automatically to XML pages 16:30:36 [Ralph] Ben: like finding rel='license' 16:30:42 [Ralph] DanC: I won't be party to that 16:30:54 [Ralph] Ralph: yeah, ugh 16:33:44 [Ralph] Ben: what about proposing to the WG to publish GRDDL as a WG Note? 16:34:29 [Ralph] DanC: who would benefit from this? 
Not clear that the RDDL community feels strongly that the current status is insufficient. 16:35:03 [Ralph] Dom: there is some work that I as editor would like to do to the spec 16:35:49 [Ralph] ... and if it is republished, I think it should be as a WG document (e.g. WG Working Draft), not as a CG document 16:36:19 [Ralph] Ben: Recommendation status would cement this approach as more than "just a patch" 16:39:17 [DanC] agenda + GRDDL test suite (at least FYI) 16:39:24 [Ralph] Ralph: perhaps a formal WG Working Draft is a necessary step to put, e.g. the Dublin Core community, on notice to give formal feedback 16:39:39 [DanC] agenda + XTech milestone 16:39:49 [Ralph] ACTION: Ralph query Tom Baker about DCMI interest in GRDDL as a solution 16:40:27 [Ralph] Ralph: I support bringing GRDDL into the WG 16:40:43 [Ralph] -> [ALL] IMPORTANT: ftf preparations and agenda outline 16:41:56 [DanC] Zakim, take up item 4 16:41:56 [Zakim] agendum 4. "TP Prep" taken up [from DanC] 16:42:12 [DanC] Ben: hmm... I have a conflict with that time on Thu... any flexibility? I'll look into it 16:42:13 [Ralph] Ben: I have a conflict with the 1400-1530 Thursday f2f slot 16:44:38 [Ralph] DanC: the WG could discuss GRDDL outside of the HTML WG joint discussion but it would be nice to have them present 16:45:01 [dom] Ralph: what input do you expect from the HTML WG on GRDDL? 16:45:08 [Ralph] ACTION: Ben ask Mark and Steven for documents that should be reviewed prior to our f2f 16:45:13 [dom] DanC: e.g. whether there is enough space in this town 16:45:29 [DanC] agenda? 16:45:31 [dom] Dom: note that RDF/A is technically a subset of GRDDL 16:45:41 [DanC] Zakim, close item 2 16:45:41 [Zakim] agendum 2 closed 16:45:42 [Zakim] I see 4 items remaining on the agenda; the next one is 16:45:44 [Zakim] 3. TAG update [from DanC] 16:45:47 [DanC] Zakim, close item 4 16:45:47 [Zakim] agendum 4 closed 16:45:47 [Zakim] I see 3 items remaining on the agenda; the next one is 16:45:48 [Zakim] 3. 
TAG update [from DanC] 16:45:54 [DanC] Zakim, close item 3 16:45:54 [Zakim] agendum 3 closed 16:45:55 [Zakim] I see 2 items remaining on the agenda; the next one is 16:45:57 [Zakim] 5. GRDDL test suite (at least FYI) [from DanC] 16:46:25 [Ralph] zakim, take up agendum 5 16:46:25 [Zakim] agendum 5. "GRDDL test suite (at least FYI)" taken up [from DanC] 16:46:40 [Ralph] Dan: Dom has done some work on a test suite 16:46:57 [Ralph] ... this would be particularly important for REC-track work 16:47:10 [DanC] -> Start of a GRDDL Test Suite Dominique Hazaël-Massieux (Wednesday, 2 February) 16:47:47 [DanC] -> XTech 16:48:05 [Ralph] DanC: Dom's proposal to talk about GRDDL at XTech was accepted 16:48:37 [DanC] 25-27 May 16:49:57 [DanC] agenda + GRDDL REC Track? [Ralph] 16:50:01 [DanC] Zakim, take up agendum 7 16:50:01 [Zakim] agendum 7. "GRDDL REC Track?" taken up [from Ralph via DanC] 16:50:25 [DanC] Ralph: enough time in the BP chartered duration? 16:51:19 [Ralph] Dan: from an architectural point of view, if the profile points to something then we can find license for the extracted RDF 16:51:28 [Ralph] ... this does not oblige anyone on the receiving end 16:51:45 [Ralph] ... e.g. "if I gather data from the Web, should I know about GRDDL?" 16:51:58 [Ralph] ... MSpace from Southampton is an example 16:52:08 [dom] s/MSpace/mSpace/ 16:52:11 [Ralph] ... should MSpace slurp up GRDDL documents? 16:52:22 [Ralph] s/MSpace/mSpace/ 16:52:45 [Ralph] ... if GRDDL were a W3C Recommendation it would be a clear statement to mSpace-like developers 16:53:14 [Ralph] ... so this Rec? question is really "is GRDDL best practice for publishing data in the Web"? 16:55:04 [Ralph] Ralph: what unresolved issues may still exist in GRDDL? 16:55:09 [Ralph] DanC: reuse of fragment identifiers 16:55:37 [DanC] <baseball#patek> :avg .325. 16:56:13 [Ralph] ... some readings of the HTML spec say #patek is a piece of HTML markup 16:58:15 [DanC] (hmm... 
CR might be just the signal we need to get DC, CC, RDDL, etc. to vote with their feet) 16:59:25 [Ralph] PROPOSE to take GRDDL to SWBP as Rec-track 16:59:36 [Ralph] DanC: are we quorate to make this decision here? 16:59:56 [Ralph] ACTION: Ben put the GRDDL to Rec? question to the TF mailing list 17:01:20 [DanC] (do give a clear deadline. 7 days is traditional, but given the meeting next week, 3 working days seems fair.) 17:02:07 [DanC] agenda? 17:02:17 [DanC] Zakim, close item 5 17:02:17 [Zakim] agendum 5 closed 17:02:19 [Zakim] I see 2 items remaining on the agenda; the next one is 17:02:19 [DanC] Zakim, close item 6 17:02:20 [Zakim] 6. XTech milestone [from DanC] 17:02:20 [DanC] Zakim, close item 7 17:02:21 [Zakim] agendum 6 closed 17:02:24 [Zakim] I see 1 item remaining on the agenda: 17:02:27 [Zakim] 7. GRDDL REC Track? [from Ralph via DanC] 17:02:28 [Zakim] agendum 7 closed 17:02:29 [Zakim] I see nothing remaining on the agenda 17:03:43 [dom] regrets from me for the TF during WG F2F 17:03:47 [dom] (I have a conflicting meeting) 17:04:38 [DanC] (ouch!) 17:05:06 [Zakim] -Ben_Adida 17:09:28 [danbri] danbri has joined #swbp 17:12:12 [Zakim] -Dom 17:13:05 [Zakim] -Ralph 17:13:06 [Zakim] -DanC 17:13:06 [Zakim] SW_BPD(html)11:00AM has ended 17:13:07 [Zakim] Attendees were Ralph, Dom, DanC, Ben_Adida 17:13:15 [DanC] 2005-02-26 lv ORD 17:29 ar BOS 20:46 Saturday AMERICAN AIRLINES #874 17:13:20 [DanC] -- 17:13:31 [Ralph] rrsagent, bye 17:13:31 [RRSAgent] I see 3 open action items: 17:13:31 [RRSAgent] ACTION: Ralph query Tom Baker about DCMI interest in GRDDL as a solution [1] 17:13:31 [RRSAgent] recorded in 17:13:31 [RRSAgent] ACTION: Ben ask Mark and Steven for documents that should be reviewed prior to our f2f [2] 17:13:31 [RRSAgent] recorded in 17:13:31 [RRSAgent] ACTION: Ben put the GRDDL to Rec? question to the TF mailing list [3] 17:13:31 [RRSAgent] recorded in
http://www.w3.org/2005/02/23-swbp-irc
Posts tagged loop

Continue and break, with and without labels, for the SCJP

Looping. Looping in Java, a brief look at the various loops and how they can be applied

Java; the for-each loop, perfect for fondling collections and arrays

The for loop is great, but is it really that nice when you want to iterate over an array or collection? You'd have to do something like the following:

import java.util.ArrayList;
import java.util.List;

public class MonkeySniffer {
    public static void main(String[] args) {
        List<String> myList = new ArrayList<String>();
        myList.add("Hello");
        myList.add("James");
        myList.add("Elsey");
        for (int i = 0; i < myList.size(); i++) {
            System.out.println(myList.get(i));
        }
    }
}

Works fine, doesn't it? Looks a bit messy, doesn't it? There's no need for the index, since we only ever use it to get the current value out of the list. Let's have a look at the for-each loop (often called the enhanced for loop) and how it can help.

The do-while loop, always executes at least once…

Using while loops, for when you don't know how long the piece of string is…

Just another bitesize SCJP post here, looking at the while loop in Java. There may be times where you need to continually iterate over a code block for an unknown number of times. What you can do is to loop through a code block while a condition is true; something else might change that condition, and when this happens, you can exit out of the loop. In a pseudo-code way, this is how the while loop operates:

while (boolean_expression) {
    // do stuff here
}

The boolean_expression, as the name suggests, must evaluate to a boolean value. All of the following examples are valid.

Playing with the for loop in Java…

The for loop is an extremely flexible and powerful way of iterating over a code block for a set number of times. Generally speaking, this type of loop is great for situations when you need to repeat code for a definite number of times; for example, you know that your shopping basket contains 10 items, so you need to repeat the loop body 10 times to iterate through each item and print it onto the screen. In order to use this loop, you must first create the for loop, and then specify the code block that is used for iteration. Let's take a look at a brief bit of pseudo code.
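Both of the excerpts above break off before their code samples. As a reconstruction under assumptions — this is not the original posts' code, and the class and method names are made up — here is the classic for loop shape the last paragraph promises, next to the enhanced for loop from the earlier post:

```java
import java.util.Arrays;
import java.util.List;

public class LoopSketch {
    // Classic for loop: for (initialization; condition; update) { body }
    static String joinClassic(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < items.size(); i++) {
            sb.append(items.get(i)).append(' ');
        }
        return sb.toString().trim();
    }

    // Enhanced for loop: the same iteration with no index bookkeeping at all.
    static String joinForEach(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (String item : items) {
            sb.append(item).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        List<String> basket = Arrays.asList("milk", "eggs", "bread");
        System.out.println(joinClassic(basket));
        System.out.println(joinForEach(basket));
    }
}
```

Both methods visit every element in order; the enhanced form simply hides the index, which is why it suits collections and arrays so well.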
http://www.jameselsey.co.uk/blogs/techblog/tag/loop/
endutxent, getutxent, getutxid, getutxline, pututxline, setutxent - user accounting database functions [XSI]

#include <utmpx.h>

None.

XBD <utmpx.h>

First released in Issue 4, Version 2. Moved from X/OPEN UNIX extension to BASE. Normative text previously in the APPLICATION USAGE section is moved to the DESCRIPTION. A note indicating that these functions need not be reentrant is added to the DESCRIPTION. In the DESCRIPTION, the note about reentrancy is expanded to cover thread-safety. Austin Group Interpretation 1003.1-2001 #156 is applied. POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0090 [213,428] and XSH/TC1-2008/0091 [213] are applied.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/getutxline.html
numbered paragraphs in Word
By Jury, in AutoIt General Help and Support

Recommended Posts

Similar Content

- By anoig
Hi all,

First, I want to give a huge shout-out to the community. I'm completely self-taught, and have never had to actually ask a question before because the forum is that good at answering questions and explaining things. However, I'm kind of stumped here, and I've been stuck on this problem for almost a full day.

I'm working on a script to populate drafts of deeds at work. I have the main GUI and a function (ctrl($n) and read()) for adding fields to find and data to replace it with to an array for later use with _word_docfindreplace. All of that works. However, due to the way I have the forms set up, I need to create additional fields and info based on the data that's there. Specifically, if there's only one buyer, I need to add the field [Buyer1&2] and the data in $aArray_Base for [Buyer 1]. I also need to add a field [Buyer 2] and have a blank data set in the next column over in the array, and I need to do the same for the Seller. To this end, the function parties() sets boolean variables $2buyers and $2sellers accordingly. Then, I have buyers() and sellers() to populate the data.

The problem that I'm running into is that each function works... when ONLY the buyer 1 name field is filled, and when ONLY the seller 1 field is filled. So if I fill Buyer 1 Name and save it, the data is populated correctly. But when I fill Buyer 1 and Seller 1 name, only the buyer 1 data populates correctly. Worse, when I fill several fields, neither populate correctly. I have no idea why this happens. I've added messageboxes to debug throughout the entire process and can't pinpoint what's causing the issue.

The entire script is below. The function(s) in question are buyers() and sellers(). Only Sellers() has messageboxes throughout. Can someone help walk me through what might be causing this and help me find a solution?

Thanks a ton in advance, and sorry for the wall of text.
-Anoig
Thanks a ton in advance, and sorry for the wall of text. -Anoig - By water My computer has been upgraded from Office 2010 to Office 2016. Are there any features of Office 2013 or Office 2016 which you now want to see in the Excel, Word or Outlook UDF? - By FrancescoDiMuro Good morning everyone I was talking about this UDF in this thread, but for not confusing "arguments", I opened a new thread... So, the issue I'm having, is the format of the table, that you can see in the image above... Can you help me out please? Thanks Text of the previous post: Local $oWord = _Word_Create(False) If @error Then MsgBox($MB_ICONERROR, "Errore!", "Errore durante la creazione dell'oggetto Word." & @CRLF & "Errore: " & @error) Else ; The field alreay exists, but would be nice if I can create it... I thought at _FileCreate() :) Local $oDoc = _Word_DocOpen($oWord, $sCartellaModelli & "\Esportazione_Modello.doc") If @error Then MsgBox($MB_ICONERROR, "Errore!", "Errore durante l'aggiunta di un nuovo documento Word." & "Errore: " & @error) Else Local $oRange = _Word_DocRangeSet($oWord, 0) If @error Then MsgBox($MB_ICONERROR, "Errore!", "Errore durante il settaggio del range nel documento Word." & @CRLF & "Errore: " & @error) Else _Word_DocTableWrite($oRange, $aListView, Default) If @error Then MsgBox($MB_ICONERROR, "Errore!", "Errore durante la creazione della tabella." & @CRLF & "Errore: " & @error & "Informazioni: " & @extended) Else _Word_DocSaveAs($oDoc, $sCartellaEsportazioni & "\Esportazione_di_prova.doc") EndIf EndIf EndIf EndIf And this is the actual result: - By Shane0000 a Shift + Enter line break for Word 2007. is chr(11) #include <word.au3> $sText = 'Test' & @LF & 'Test' & @CR & 'Test' & @CRLF & 'test' & chr(10) & 'test' & chr(13) & 'test' $oWord = _Word_Create() $oDoc = _Word_DocAdd($oWord) $oRange = $oDoc.Range $oRange.InsertAfter ($sText) $iEnd = $oRange.End _Word_DocSaveAs($oDoc,'c:\test.doc') hehe dancing all around chr(11) -
https://www.autoitscript.com/forum/topic/188794-numbered-paragraphs-in-word/
See also: IRC log Scribes: Mon AM dorchard; Mon PM Henry; Tue PM Norm; Wed AM Noah skw: We should be able to close a few things, like passwords in the clear. ... there are a few other things that have been sitting on Henry's shoulders, perhaps we need another volunteer for those. nm: I think that self describing web is close. jar: link header/redirections? tbl: concerned about the HTML WG and perhaps we are getting all lost in the weeds. ... perhaps the issue of tagsoupintegration-54 issue is our biggest issue. ... should we focus much more on that ht: there's a very real possibility that a document will hit the director's desk and he will say no and that would be a catastrophe. ... we need to think about that and fixing it now. tbl: I don't think that the director saying no is a possibility. <timbl_> That isn't the way the concortium works -- but the fear for me is real as i expressed in the AC talk dorchard: I agree with tagSoup is our biggest issue and I'm comfortable with this being our highest priority jar: thinking about efficiency. Too early or too late are too inefficient. We should try to be more efficient about when/where we get involved. ... seems like we are hands off during development and then when we review things it's too late. skw: it's not reasonable to do every review. nm: it's hard to know when to review things. <DanC_lap> (hmm... a big architecture diagram that shows where the various deliverables fit are something that Tim has tried to do now and again... I'm pretty interested in it lately.) tbl: sometimes things just fall on the floor, sometimes people get excited. skw: perhaps look at first public working draft. dan: as Jon Bosak says trying to get everything co-ordinated is a n^2 problem. ... staff tries to at least read the abstract of every fpwd during an all hands ... perhaps we should look at diagram nm: let's try to get to tagSoup early in our agenda ... 
relationship between self describing web finding goal and a talking point for tagsoup integration.. ... maybe this is a talking point in the debate ndw: xml cg looks at TR page for new things. <DanC_lap> (yeah... LC is the wrong time to start looking at something. FPWD is designed to flush out peer reviewers) ndw: often first wd is too early. tbl: but maybe fpwd is just right <noah> DO: We don't need process to solve this. <noah> DO: If we're close enough to the community, we'll see the important things. <Norm> For TAG-level concerns, FPWD might be fine. jar: there are other things between ad-hoc and totally thorough. ... here are some of the issues we tend to get involved in: naming, binding, etc. ... use those criteria and a systematic approach will emerge ht: Henri says that you can't do what you want to do because the doms don't do the right thing. ... Henri says they don't think that namespaces are important <noah> scribenick: noah AM: Did they tell you why ":" does not work for technical reasons? <dorchard> scribenick: dorchard <scribe> scribenick: noah HT: Not quite the right question. They don't say it doesn't work, they say it violates a "consistency principle" in the DOM. I'm not sure I'm comfortable just accepting that all the principles they've adopted are not subject for debate. <dorchard> scribenick: dorchard ht: there's work to be done. I have more work to do. nm: how are we going to spend the next time. skw: we haven't given aria a clear answer yet. ... how close can we get to that? dorchard: I also raised the issue #41 to HTML WG on distributed extensibility. skw: tbl's talk at AC meeting? dorchard: that needs to get out in a lot more of an approachable form skw: could tim show those to the tag here? ... and what are we going to tell aria? nm: we still have forest/trees and let's keep our eye on the big issue: html vs xml goals/aims. 
<noah> Helping the HTML and XML communities to find the right balance between convenience and distributed extensibility, and between integating (XML and HTML) vs. remaining separate, are the huge issues that will change the future of the Web. The ARIA response is important and we should get that right too, but it's narrower and ultimately less important. ht: aria is trying to get an answer from our questions.. skw: we need to see if we've asked the right people to do things <DanC_lap> (ah... found the msg from hsivonen "A small number of parties can take names from a single pool on a first-come-first-served basis." I'm inclined to bring it up during or after tim's presentation) ht: What I propose by the aria: approach to take then aria- approach and only change the - to a : and say if you must declare the aria prefix. It's not proposing a full ns approach. ... it is a compromise. ... from their perspective, almost every language has a fixed namespace prefix. ... that's an important technical detail. ... and the only approach that might fly. nm: doesn't preclude doing full ns later. ht: I think we are done with aria for this meeting. tbl: I gave a presentation at the beijing AC meeting. ... I'm vetting this for public exposure. <DanC_lap> ("divergence" seems like an odd turn of phrase; it suggests HTML and XML were originally converged.) tbl: each version of html is moving further away from xml, with browser dependencies creeping in. ... the thesis to what extent are we going to sacrifice compatibility with the past for the future. ... html wg has not rejected namespaces, but also that there are some actively against namespaces in html ... xml issues.. ... the ones at the front are where I'd be happy to change xml. <DanC_lap> (I argued against the necessity of quotes in XML before XML became a REC... but after the WG had made its decision; hence my argument was out of order. also, apparently I was in the minority thinking of using XML for HTML back then. 
) (history has vindicated you!) tbl: why namespaces anyway <DanC_lap> (well, sorta. it's arguable that the process that got us XML was at least as important as the technical result.) tbl: rdf has a supremely extensible model. ... have namespaces ever been useful for non-rdf? <DanC_lap> (is there serendipitous reuse of XML vocabularies?) dorchard: I don't quite understand this question because xml using namespaces is being deployed *ALL OVER* danc: what about when you smash 5 languages together? tbl: One of the reasons why the HTML WG doesn't think about scale is because HTML is the #1 language. ... because there is a scale of deployment, #1 is HTML then #2 then #3 etc. ... whereas namespaces treats everything equally. <DanC_lap> (another relevant quote: Hickson: "For this, though, we actively want to make sure that people can't willy nilly extend the language without coordination with anyone interested in the development of the language" -- ) tbl: and these don't match. We shouldn't make html do the same thing as the nth language. <DanC_lap> slide 7 s/fro/for/ tbl: We have to engineer this to work with large and small communities. nm: this slide also needs to add xml <DanC_lap> (re "not the only language", do the other languages have to be allowed in the syntax of HTML?) tbl: the conservative validator means that fixing bunchs of mistakes don't help you until they are all done. ... the liberal browser doesn't force you to fix anything. ... we need to have a motivating slope of reward vs bug fixes. <DanC_lap> spell-o Extensability slide 17 tbl: fixing web pages ... would like TAG to recommend the view source/save as show cleaned up source. dorchard: I've seen a lot of sites that do SEO/Google adwords/analytics/sitemaps analysis and validation.. tbl: XML meet HTML halfway, XML 2.0 ht: note that the xml community is not asking for these changes tbl: the xml community will be asking for this when html blows past them... 
nm: note that the target is that lots and lots of people need to understand it. tbl: social modularization, html wg is a big group and should be more modular dorchard: W3C is going the opposite way. I sent in an AC comment that disagreed with the waf/apis group becoming one. <timbl_> discussion of tbl's slides nm: there are lots of mixing and matching that is going on. many languages that are designed for that. discussion on non-rdf use of namespaces nm: wsdl is an example ... and xml:lang ... this stuff is happening and generating value. ht: and SOAP nm: Tim poked on the validator issue. ht: xslt nm: the communities out there may just want some of the things like relying on strict validation. tbl: and they use URIs in an automated way? ht: the w3c site is being hammered by follow your nose to static documents <Norm> In the context of XSLT (and XProc, I think), arbitrary schemes definitely *do* get mixed together. dorchard: Michael Kay and others pushback on Noah and my support for Schema ##defined is that schemas will be mixed in, causing action at a distance errors. ht: also encryption and dsig ... first element has half a dozen ns declarations at the top ... within a corporation each division/unit has it's own ns and then the master schema brings them together. ... some momentum around a different model of modularization which is nvdl. <DanC_lap> (I get the impression that XML as a whole isn't a space where namespaces mix and match well, but there are other areas like RDF: XSLT, SOAP, ...) ht: you don't have to specify how they connect. ... you write an nvdl document to validate a subtree. tbl: this seems very tortured. ht: it's meeting a need tbl: this means there is a strong need for modularization ... what if relaxng what used uris? ht: people that like relaxng tend to like nvdl. tbl: there will be lots of overlap in tags... ht: do we change xml to make some version of html more universal or do we tell a story about html and xml interaction. ... 
this might ignore the value of the huge community using xml.
ndw: multiple roots, getting rid of dtds, embedding are important
ht: ndw's point is what xml cares about, not much in the list of what html needs
... also UBL using namespaces for versioning.
nm: also Atom
tbl: do the atom readers and writers put in namespaces?
ndw: there have been some reports that readers don't actually do namespace validation..
nm: i think it does require namespace decls.
ht: there are 2 design patterns of namespace mixing.
dc: I'd like to talk about extensibility in hypertext
... Henri said that there are 4 parties (browser vendors) that can change the way browsers work; the platform is finished.
<DanC_lap> (ah... found the msg from hsivonen "A small number of parties can take names from a single pool on a first-come-first-served basis." I'm inclined to bring it up during or after tim's presentation)
now looking at Henri's response to my issue, #182
tbl: part of this is the html wg acting like a monopoly.
dc: don't forget about the authors who just want "html" and want it to work everywhere.
nm: there are consumers other than browsers. why not have search engines index svg, etc.
... those are important consumers
ht: firefox ships namespace extensibility. you can install extensions that take over when certain ns hit.
... this is how xforms is implemented in firefox, using C and/or javascript now.
ndw: would be nicer if the world wasn't bifurcated. The number of people who care about something that doesn't show up on the screen is vanishingly small.
dc: people do learn about different tags for the same representation, such as headings instead of bold, in SEO class.
tbl: sometimes html ought to be the extension.
... why couldn't svg use html anchors, divs, etc.
skw: when we finish the agenda item we need actions..
jar: we may need to look at namespaces, not xml
nm: there's a tradeoff between locked down like xml versus not locked down.
... hard part is to find a sweet spot.
dorchard: I asked Henri and Anne if a new version of XML that was as HTML-friendly as possible would be acceptable to the HTML WG, and they demurred.
<ht> Scribe: Henry S. Thompson
<ht> ScribeNick: ht
SW: Resuming after lunch
Close ACTION-141
<trackbot-ng> ACTION-141 Henry to circulate the document with a cover note that expresses that the TAG now has a working hypothesis that the colon is technically feasible and invites continuing discussion. closed
SW: Resuming topic tagSoupIntegration-54, and in particular ARIA issues
NW: Thinking about the big picture, the technical solutions may well be there, but the hard question is motivating the major players to adopt one of them
... And I don't know how to do that
NM: It's important to remember that's the important thing
SW: So beyond returning to this when we get feedback from Al Gilman on next steps, anything else?
DO: What about TBL's XML namespace proposal from the AC meeting -- should we write this up as a proposal?
... Would that help
NW: I don't think so -- the community isn't ready for that level of detail
... The community isn't interested in a solution
SW: What community?
NW: There's an HTML community which has one <-related syntax, and the XML community which has another, and TBL's desire that we'd be better off if we could unify these, both in terms of language development and in terms of authoring
NM: The communities basically know their own requirements
... The problem is that XHTML is out in a corner, relative to the bulk of the XML usage out there, which is very conservative (as NW pointed out, they rejected XML 1.1, which made very small changes)
SW: One monolithic vocabulary versus managed modularity?
NM: [scribe missed]
SW: Is it just us (the TAG) who are the problem, by saying "modularity is good", when the HTML community just aren't interested?
NM: I like the modular architecture, but how do we convince the HTML community?
TBL: We've gone through this with RDFa, and ARIA, and soon SVG, and to some extent microformats
... and in many such cases the cost of non-modular integration has been high, in terms of having to do real work to tweak the vocabularies and delete one or two attributes and so on
<DanC_lap> (hmmm... what is it we've done with RDFa? RDFa followed the land-grab pattern.)
TBL: Document Facebook Markup Language
<Zakim> DanC_lap, you wanted to get back to the discussion of distributed extensibility in hypertext
??: FBML use prefixes?
HST: I think so
<Zakim> ht, you wanted to a) mention browser numbers and b) ask about HTML WG process
<DanC_lap> html wg decisions:
HST: four browser development teams, but ...
... HTML 5 WG decision process?
DC: We have made only a handful of decisions, and they have not been design decisions
DO: Who are we targeting with HTML -- browsers/search engines
... To get something into the spec., you have to get a browser to implement it, I've been told
DO: How is that consistent with the claim that designs should come to a WG for review in order to change the language
HST: The browser numbers are an interesting thing -- I've heard people say they are fundamentally misleading, because Firefox and Webkit on the iPhone (and just maybe Opera) is where vocal members of the HTML community are focussed, as where things are going, if not what dominates today
DO: I'm concerned that the editor has too much influence on what goes into the HTML 5 spec, and I'm not sure how the WG could make a decision against the editor's wishes
HST: DC, how should we go about lobbying the HTML WG to go in a certain direction?
DC: One way would be to convince the browser manufacturers; then the WG would probably follow their lead
NM: HST, are you clear on what we should try to convince the HTML WG to do? I'm not sure I have that conviction that distributed extensibility is a good thing, but I can see their side as well
...
I think TBL is on the right path; we have to identify the pain points
HST: No, not completely, but I think we are en route to having a story; some combination of TBL's proposal, implicit (media-type) namespace bindings, and Sam Ruby's proposal is the way to go
SW: I think about this a bit as closed language vs. open language, where it's a heavyweight story to change the language: you have to reconvene the WG
JR: I don't think they would see it this way
<DanC_lap> Sam Ruby: HTML5 and Distributed Extensibility
DC: Well, Ian Hickson said "we don't want people to mess with the language without talking with us"
<DanC_lap> action-132?
<trackbot-ng> ACTION-132 -- Tim Berners-Lee to draft to a stronger piece outlining when the ARIA approache would not be practical -- due 2008-05-01 -- OPEN
<trackbot-ng>
JR: I've heard something different, that HTML is naturally extensible because the browsers ignore what they don't understand
<DanC_lap> TBL: I'm inclined to withdraw action-132 in favor of HT's work and my presentation
Close ACTION-132
<trackbot-ng> ACTION-132 Draft to a stronger piece outlining when the ARIA approache would not be practical closed
trackbot, status
trackbot: status
<Norm> trackbot-ng, status
<DanC_lap> (tbl should work, per 0
<DanC_lap> )
<scribe> ACTION: Tim to add public prose around his slides at the AC meeting to make the case for extensiblity and flexible XML, due 29 May [recorded in]
<trackbot-ng> Created ACTION-145 - Add public prose around his slides at the AC meeting to make the case for extensiblity and flexible XML, due 29 May [on Tim Berners-Lee - due 2008-05-26].
SW: We haven't discussed the Improved Namespace Support issue explicitly
... Should we keep this alive?
DO, HT: Yes
<DanC_lap> action-107?
<trackbot-ng> ACTION-107 -- Dan Connolly to review compatibility-strategies section 3 (soon) and 5 for May/Bristol -- due 2008-05-15 -- OPEN
<trackbot-ng>
DO: I got reviews from Ashok
Close ACTION-108
<trackbot-ng> ACTION-108 Review compatibility-strategies section 2, 4 a week after DO signals review closed
DO: The recent reviews were of the 2008-03-13 draft, and I did a new draft on 2008-05-13
Close ACTION-140
<trackbot-ng> ACTION-140 Revise strategies part of XML Versioning finding by 13 May 2008 closed
Close ACTION-107
<trackbot-ng> ACTION-107 Review compatibility-strategies section 3 (soon) and 5 for May/Bristol closed
Close ACTION-110
<trackbot-ng> ACTION-110 Review compatibility-strategies section 3, 4, 5 closed
<scribe> ACTION: Norman to review 2008-05-13 versioning draft [recorded in]
<trackbot-ng> Created ACTION-146 - Review 2008-05-13 versioning draft [on Norman Walsh - due 2008-05-26].
<scribe> ACTION: Noah to review 2008-05-13 versioning draft [recorded in]
<trackbot-ng> Created ACTION-147 - Review 2008-05-13 versioning draft [on Noah Mendelsohn - due 2008-05-26].
DO: Some key points have come up which we need to make decisions about
JR: I read the terminology document on the plane, and had some suggestions
... The strategies doc't is the finding, and the terminology is there to support it, right?
... I think I can see some improvements to the terminology, in the area of formalizing it
... Should I send them to you?
DO: Yes please
NM: Note we did thrash through some of the terminology line-by-line, so we need to be careful not to undo hard-won consensus
DO: My goal is to publish the strategies doc't on its own
AM: W/o the terminology doc't?
DO: Yes
NM: Some of us might then want to review the use of terminology in the strategies doc't that would have been hyperlinked
DO: They are still hyperlinked
NM: But we can't hyperlink to a doc't we haven't published. . .
...
Maybe we only need a small number of terms to be clear about
DO: We could pull those into the strategies doc't
JR: Yes. We can't publish with reference to definitions which are wrong
... A separate terminology document shouldn't be needed. Merging them in makes sense
SW: We were planning to publish the strategies doc't as a WG Note
DO: How do we normally publish findings?
SW: As such, or (once so far) as a WG Note
... The Process says nothing about Findings. . .
NM: We do have a number of pretty-much abandoned not-actually-Findings, we should maybe clarify somewhere that they are not progressing
(various): Note sounds good, once we agree we like it
AM: One of the phrases that gets used is "text of a language" -- not defined in the strategies doc't, should be explained there. . .
JR: How much work do you want to do to address fresh faces coming to this document?
... Some of these usages are unprecedented. . .
TBL: We did try to ground this pretty carefully, but never really finished that job
JR: I made some mappings from the terminology doc't to terminology I understand
... so, e.g. 'information' to 'meaning'; 'instance' to 'text of the language'; 'does not break'/'successfully processed' which could be formalized
... I think 'strictly' and 'fully' are backwards in the definition of compatibility
... Partial orders would be useful here, as per the use in denotational semantics
DO: How?
JR: By talking about a partial order of the amount of information conveyed in various cases
NM: V1 of a language allows for some extensions, w/o specifying much beyond toleration
... V2 assigns some meaning to a new construct
... Partial order is between a V1-only-interpretation and a V2-interpretation of the same message
JR: [Yes]
TBL, JR, NM: Discussion of 'information', Shannon/Weaver sense, etc.
HST: I argued against using 'meaning' because in the formalizations I'm used to, meaning is not the endpoint, it's a means to an end (interpretation, which we rejected because its ordinary language use is too off-base)
DO: We could give glosses of the kind you've suggested?
JR: The diagram seems a bit tangled -- I ended up with a decision tree or flow diagram for scenarios
... which I found more helpful
... There are some theorems here, right?
... We have a speaker, and a message, and a receiver, and the message they get, and if you keep the changes constrained, the receiver will (partially) understand
<DanC_lap> (I got cwm to prove that theorem... or one case of it... I think...)
JR: That's a theorem you should (be able) to prove
... But that theorem isn't in the document
... Maybe it should be, if it's useful
TBL: One of the goals we had which that might help is proving that the hearer can't get something out that wasn't put in
... Our original goal was to be able to ground our understanding of e.g. the "ignore what you don't understand" strategy
... But our experience with the complexity of real versioning systems was that we didn't get much from the maths
... Attributes are and are not in namespaces
HST: Yes, and there was the markup/content distinction, which we didn't capture, and formal languages don't have
NM: You're certainly doing the right thing to raise your concerns, and yes, some of what you're pointing to is areas we've struggled with before.
... The partial order thing is however new, I think, and might give us some leverage we need
... Not sure however how work on the terminology doc't fits with our need to actually ship the strategies doc't
AM: This work seems to be good about markup languages, and we should stick with those -- there's less use wrt other languages -- there's no equivalent of "must ignore" for programming languages
NM: Yes and no.
Many of the versioning changes we need and want to cover include just changes in content, and that's lost if we talk about just markup
AM: But it talks about 'language', and that's misleading, because it doesn't apply
TBL, NM, JR: Actually it does, or it should, although some specific conditions may be needed
AM: OK, but areas where this can have a strong impact is what we should focus on
<DanC_lap> (I think it would probably be helpful to emphasize markup languages a bit more)
AM: There's less of a versioning problem with programming languages, because you can always get a new compiler
TBL: But if the new compiler doesn't compile my old programs, I have a real problem
DO: We don't go into detail on incompatible changes, which is what you're describing
TBL: No, Ashok's example did depend on backwards-compatibility, but asserted that forwards compatibility didn't matter
NW: Software is the same -- people don't go from Perl 5 to Perl 6 because all their code won't survive the change
DO: I really want to focus on getting the strategy doc't out
SW: My preference would be to pull forward the minimum necessary from the terminology doc't to make the strategy doc't self-contained
NM: With minimum revision?
SW: Yes
NM: JR, your changes would require real work, yes?
JR: I'm not sure -- the partial order stuff w/o changing terminology would be pretty simple
NM: So what if we look at the strategies doc't, see where we want changes anyway, and give JR the opportunity at each juncture to offer suggestions
DO: I'm frustrated -- doubtful that another 'short' iteration will do it
... I'm happy to pull definition clips from terminology to strategy
<dorchard> I have cleverly pulled definition clips from terminology to strategy
<dorchard> It's in
<timbl> A Language consists of a set of text and the mapping between texts and information.]
SW: Time to write down our plan of record
...
I believe DO's preference is to push the strategies document through to stability and, we hope, consensus
DO: To do this, I will make sure there are no external terminology definitions referred to in the strategies doc't
... I've tried to implement that, and I got close over the break
<DanC_lap> +1 to the plan of hoisting
DO: But I'd like to discuss the plan of record before we go into details
SW: Are we agreed on a self-contained document which carries its own terminology?
TBL: Will we have a commitment to push changes back to the terminology doc't?
SW: I didn't mean that -- open question whether we take the term. doc't further or not
DO: If we make minor changes to the terminology pulled through to the strategies doc I can certainly push them back
... if we work extensively on the terminology doc't, it might be hard to push those up to the strategies doc't
RESOLUTION: To aim first for a stand-alone strategies document containing its own terminology definitions
<DanC_lap> +1
s/,any constraints on the information//
Request editorial change in Definition of language: change "set of text" to "set of texts"
Request editorial change in Definition of language: delete "constraints on information"
Request editorial change in Definition of language: "the mapping" -> "a mapping"
<DanC_lap> (er... did we go async/non-real-time?)
DO: Where's the grammar?
NM: independent, you could have several different grammars for the same language
HST: We will need it for accept/define set
NM: But we may not need that distinction
<DanC_lap> (+1 defer discussion of accept/defined set)
<DanC_lap> (jar, can you write down another definition of compatible?)
NM: [Works through BANANA example (Example 5) from]
<DanC_lap> example 5
NM: I don't think the accept/define distinction gives us what we need here, because the HTML spec. tells us how to build a DOM for BANANA, so it's in the define set
...
So I want to distinguish between the level of detail at which, or extent to which, versions provide interpretations for texts
... V1 may even warn you of forthcoming changes
TBL: You've introduced a hard example, but only an incomplete proposal -- you need to make your proposal more concrete before we can assess it
NM: But the current approach doesn't seem to help in this case at all
TBL: The DOM nodes are a separate kind of language, not really the meaning/information content at all
NM: This isn't an edge case, it's pretty common, by now, and we need to be able to talk about it, but, because the DOM isn't a text at all, we don't have any way to
TBL: The DOM is not relevant to the versioning story, it's just the abstract syntax -- the text that goes over the wire is what's important, and the user experience
NM: I want to talk about the language when it involves scripting in the DOM
HST: We don't know how to do that, today
DO: If you treat the DOM as the information, then yes the accept and defined set are the same
JR: There are multiple languages and multiple interpreters, and compositions of interpreters
SW: There's a proposal attributed to JR -- would it address the compatibility terminology definitions?
JR: accept set is just the set named in the 1st definition
AM: I think the defined set is a property of the language, but the accept set isn't
JR: So let me start from the beginning: start with a family of languages, for which we have a set of texts (call it T) and a set of informations, call it I
... A language in a family defines a mapping from a subset of T (call it AS) to a subset of I
... 'bottom' is always in I
... DS(a language L) == {t in AS(L) | Defined(t)}
TBL: What does Defined(t) mean?
JR: I thought Defined(t) just meant l(t) > 'bottom'
... I thought HST said it meant l(t) is maximal
... L' >= L iff for all t l'(t) >= l(t)
...
Note that this appeals to a partial order which I must have
NM: What does 'maximal' mean
JR: l(t) maximal means there does not exist t' such that l(t') >= l(t)
[side conversation about whether we can/should assume compositionality]
HST: I think your maximal is off-base
TBL: Add DS a subset of AS and S a stripping function which is used to define l by a) defining l over DS and b) defining l over AS as l(S(t))
SW: JR, can you try to write this up, perhaps with help?
... But how does this help DO?
DO: Well, we need a rigorous definition of compatibility so people know what it means to define extensible languages, which we currently do using accept and define sets
NM: How about: "an extensible language is one in which multiple texts carry the same information"
<DanC_lap> "An extensible language is one where not all the syntax is used up" <- a not-very-rigorous version of the defined/accept story
NM: there are dumb ways that can be true (alternative amounts of whitespace), but good versions are clear
<DanC_lap> perhaps with examples: "an extensible language is one where not all the syntax is used up; typically in a markup language, tags are named and not all the names are used up"
NM: So the value of JR's story would be, if it pays off, that we can use it in explaining to users why building languages this way gives extensibility
<timbl> An extensible language is one in which not all the syntax is used up, like you only use " quotes and ? for variables and you keep ' single quotes and dollar signs for later
HST: I think JR's story, with a bit of where TBL was going, is able to formalize NM/DC's suggestion: a language is extensible iff it has headroom == for each information in I there are multiple distinct members of T which map to it
NM: I still don't see how this gets us to the kind of changes that happen when you add CSS/Javascript/XSLT
...
I'm not sure saying adding a CSS stylesheet to a page requires us to say it's now using a different language
[Diffuse strategic discussion of what to do next]
SW: One possibility is to try to take JR's sketch and work it up to replace the definitions of (backward/forwards) compatible
... Another possibility is to correct the current (that is, DO's edited pulled-through) terminology definitions so they work together
... Either way, once we get those definitions, DO has editorial work 'lower down', and then we're done?
DO: The remainder of the document still needs to be reviewed and agreed on
SW: JR, you willing to take this on?
<timbl> I would point out that we have been using 'document' to mean 'Information resource' and that does not really match this use.
JR: I'm not sure I know what Defined Text Set means, so I can't take the task on in those terms
NM: I didn't mean you had to use DTS, if you don't need it to get the other terms defined
DO: I understand DTS in terms of, for example, the name = first middle last + wildcard, but v1 defines the meaning of only first middle last
SW: HST, can you help?
HST: I would like to try
<scribe> ACTION: Jonathan to see if he can develop a formal basis for the definition of extensibility, possibly including definitions of forwards/backwards compatibility [recorded in]
<trackbot-ng> Created ACTION-148 - See if he can develop a formal basis for the definition of extensibility, possibly including definitions of forwards/backwards compatibility [on Jonathan Rees - due 2008-05-26].
<scribe> ACTION: Henry S to help Jonathan with ACTION-148 [recorded in]
<trackbot-ng> Created ACTION-149 - S to help Jonathan with ACTION-148 [on Henry S. Thompson - due 2008-05-26].
<Stuart> http://www.w3.org/2001/tag/2008/05/19-minutes
In this tutorial we're going to collect analytics on a Redux-powered user form. You will learn:

- How to measure user drop-off in forms using Google Analytics.
- How to create a destination funnel report in Google Analytics.
- How to map Redux actions to Google Analytics events and page views.

This tutorial assumes prior exposure to Git, JavaScript (ES2015), and Redux.

The App

We'll be collecting analytics on the payment form of a simple eCommerce app. To download the app, open a terminal and run the following command:

```
git clone git@github.com:rangle/analytics-sample-apps.git
```

Then navigate into the cloned directory and check out v1.0.0:

```
cd analytics-sample-apps
git checkout tags/v1.0.0
```

Now, navigate to the shopping-cart directory and install the project's dependencies:

```
cd shopping-cart
npm install
```

Once npm has finished installing the app's dependencies, start the app with the following command:

```
npm start
```

The app should open in your browser at the following address:. Take a minute or two to play around and explore the app:

- `/` shows a list of items to buy
- `/cart` shows items added to the cart
- `/payment` shows a form for collecting payment details
- `/order-complete` shows a message indicating a successful order

Our goal is to collect analytics on the payment form. Navigate to the `/payment` view and open your browser's JavaScript console. Refresh the page, type a character into each input field, then click the disabled Buy Now button. Your form and console should look something like this:

When you land on the page, Redux fires a ROUTE_CHANGED action, then an action for each form field change, and finally an action when a user attempts to buy something but fails to proceed because of invalid inputs.

Update the form with valid inputs this time. None of the form fields should have a red outline, and the Buy Now button should be enabled. Click the Buy Now button.
Notice how a Redux action fired whenever an input field changed, and notice how there is one last ROUTE_CHANGED action when you successfully completed the form. Here's all you need to remember at this point:

- Our goal is to collect analytics on the payment form
- Redux actions fire when the route changes and when the user types something into the form fields

Setting Up Google Analytics

Now that we've had a look at the form, let's see how we can set up a report in Google Analytics to show the percentage of users that saw the form, filled it in, and successfully completed an order.

Create a new web property, and make a note of your property's tracking Id. Then follow the instructions here to create a new goal:

- In Goal Setup, select Custom.
- In Goal Description, enter `Payment Form Filled` for the goal name.
- In Goal Description, select `Destination` for the goal type.
- Lastly, fill in Goal Details to match the following image, then click Save.

Let's review. Our goal is to reach a destination, which is the `/order-complete` page. We set up a funnel report that shows the six steps we expect the user to take before reaching the `/order-complete` page. We first expect the user to land on the `/payment` page. Then we expect the user to fill in each input field. We also expect some users might attempt to submit the payment form with invalid inputs.

Notice anything strange? Funnel reports in Google Analytics expect each step a user takes towards a goal to be a whole new page. This is a remnant from the old days when single page apps weren't really a thing. Back then, whenever anything major changed in a website, there was a page load. Now, in modern apps like the one we're working on, we're using JavaScript to dynamically display different views, update the address bar, and manage the browser history. Our user might move across different pages and fill in forms, but from the perspective of Google Analytics nothing is happening. We need a way to fake a page load when these things happen.
Or more specifically, we need a way to map our app's Redux actions to Google Analytics page views. Based on the funnel report we set up, here's a map of Redux actions to the page loads we need to fake:

Thankfully, there's an npm package to help us with this exact problem.

Redux Beacon

Redux Beacon is a framework-agnostic library for mapping Redux actions to analytics events. In this next part, we'll look at how we can leverage its API to solve our form analytics problem.

Let's start off by installing the library and saving it to our project's package.json file. Open a terminal, cd into the shopping-cart directory, and run the following command:

```
npm install redux-beacon@0.2.x --save
```

Once that's done, follow the instructions here to add your Google Analytics tracking snippet to the shopping-cart/public/index.html file. This should be the same Google Analytics property we set up in the previous section.

If the app is still running, then saving the file should trigger a site rebuild. Otherwise, call `npm start` to get the site up and running again. At this point, you should see one active user in your Google Analytics Real-Time dashboard.

Next, create a new file called analytics.js in shopping-cart/src and add the following code to it:

```javascript
// shopping-cart/src/analytics.js
import { createMiddleware } from 'redux-beacon';
import { GoogleAnalytics } from 'redux-beacon/targets/google-analytics';
import { logger } from 'redux-beacon/extensions/logger';

const pageview = {
  eventFields: (action, prevState) => ({
    hitType: 'pageview',
    page: prevState.route.length > 0 ? action.payload : undefined,
  }),
  eventSchema: {
    page: value => value !== undefined,
  },
};

const eventsMap = {
  ROUTE_CHANGED: pageview,
};

export const middleware = createMiddleware(eventsMap, GoogleAnalytics, { logger });
```

Let's go through this block by block.
```javascript
import { createMiddleware } from 'redux-beacon';
import { GoogleAnalytics } from 'redux-beacon/targets/google-analytics';
import { logger } from 'redux-beacon/extensions/logger';
```

Here, we're importing various functions provided by Redux Beacon. The second and third imports are relative imports. Redux Beacon was built this way to help minimize bundle sizes.

```javascript
const pageview = {
  eventFields: (action, prevState) => ({
    hitType: 'pageview',
    page: prevState.route.length > 0 ? action.payload : undefined,
  }),
  eventSchema: {
    page: value => value !== undefined,
  },
};
```

This block is called an event definition. It's a plain old JavaScript object (POJO) with a special `eventFields` method. The object returned by `eventFields` is the analytics event that Redux Beacon will push to a target. There's another special property called `eventSchema` that contains a contract for the properties returned by `eventFields`. Here our `eventSchema` expects the event object to have a `page` key whose value is not `undefined`. If the `eventFields` method returns an event whose `page` is `undefined`, then Redux Beacon won't push anything to Google Analytics.

Take a closer look at the `eventFields` method. The `page` will only be `undefined` if the previous route's length is zero. And if you look at src/reducer.js, the initial state for the route is an empty string, whose length is zero. So with this event definition we are telling Redux Beacon to send a page hit to Google Analytics whenever the route changes, except on the initial page load.

Why wouldn't we want to send a page hit to Google Analytics on the initial page load? Because we're already doing that in the tracking snippet. The last line, `ga('send', 'pageview')`, hits Google Analytics with a page view that matches the initial route. If our event definition didn't include the `eventSchema`, then we would be sending two page hits to Google Analytics instead of one when the app first loads.
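To make the filtering behaviour concrete, here's a small standalone sketch. Note that the `buildEvent` helper below is a hand-rolled stand-in for the schema check Redux Beacon performs internally; it's written purely for illustration and is not part of the library's API.

```javascript
// The event definition from above.
const pageview = {
  eventFields: (action, prevState) => ({
    hitType: 'pageview',
    page: prevState.route.length > 0 ? action.payload : undefined,
  }),
  eventSchema: {
    page: value => value !== undefined,
  },
};

// Illustrative stand-in for Redux Beacon's internal check:
// every schema predicate must pass, or the event is dropped.
function buildEvent(def, action, prevState) {
  const event = def.eventFields(action, prevState);
  const valid = Object.keys(def.eventSchema)
    .every(key => def.eventSchema[key](event[key]));
  return valid ? event : null;
}

const action = { type: 'ROUTE_CHANGED', payload: '/cart' };

// Initial load: the previous route is the empty string, so the event is dropped.
const onInitialLoad = buildEvent(pageview, action, { route: '' });
// onInitialLoad → null

// Later navigation: the previous route is non-empty, so a pageview goes out.
const afterNavigation = buildEvent(pageview, action, { route: '/' });
// afterNavigation → { hitType: 'pageview', page: '/cart' }
```

This is exactly why the tracking snippet's own `ga('send', 'pageview')` call is the only hit recorded on the first load.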
There's one other course we can take to prevent the initial page load from being recorded twice. You can just delete or comment out the `ga('send', 'pageview')` line from the tracking snippet, in which case the event definition would no longer need an `eventSchema`, and the `eventFields` method would lose some fat:

```javascript
const pageview = {
  eventFields: action => ({
    hitType: 'pageview',
    page: action.payload,
  }),
};
```

```javascript
const eventsMap = {
  ROUTE_CHANGED: pageview,
};
```

This block is called an event definitions map. This is where we link Redux actions to event definitions. In this case, when the ROUTE_CHANGED action fires, Redux Beacon will call the pageview's `eventFields` method, pass it the action object, then push the resulting event object to Google Analytics. Let's have a look at an example:

- The app dispatches the `{ type: 'ROUTE_CHANGED', payload: '/cart' }` action
- Redux Beacon calls `pageview.eventFields` with the action
- The `eventFields` method returns `{ hitType: 'pageview', page: '/cart' }`
- Redux Beacon hits Google Analytics with a page view (`/cart`)

```javascript
export const middleware = createMiddleware(eventsMap, GoogleAnalytics, { logger });
```

This last line creates the Redux Beacon middleware that we're going to apply to the app's Redux store. We're first passing in the event definitions map, followed by the target for the resulting events, and finishing off with a logging extension so we can see the analytics events that Redux Beacon generates.

Now that we have Redux Beacon set up, all we have to do is apply the middleware when creating the Redux store. Add the following line to the src/App.js file:

```javascript
// src/App.js (somewhere near the top of the file)
import { middleware as analyticsMiddleware } from './analytics';
```

Then, scroll down to where createStore is called and apply the middleware.
```javascript
// src/App.js (should be around line 35)
const store = createStore(
  reducer,
  applyMiddleware(createLogger(), analyticsMiddleware)
);
```

Save the file, then mosey over to your browser and refresh the shopping cart app. Navigate from the root page to the cart page, then to the payment page. Have a look at the console. Like before, you should see Redux actions for each route change, but now you should also see the associated analytics events above each Redux action.

Go back to the Real-Time view in Google Analytics; this time select the Behaviour tab and click on Page views (last 30 min). You should see `/`, `/cart`, and `/payment` listed along with the number of times each route was visited.

With one simple event definition we managed to map every ROUTE_CHANGED action to a Google Analytics page hit. This includes the two page hits we need for our funnel report. To track the remaining page views needed for the funnel report, we need to create an event definition for each form field Redux action, and lastly an event definition for when users try to submit the form with invalid inputs.

In src/analytics.js, add the following function below the pageview event definition:

```javascript
function createPageview(route) {
  return {
    eventFields: () => ({
      hitType: 'pageview',
      page: route,
      title: 'Payment Form Field',
    }),
  };
}
```

This factory function returns an event definition for a Google Analytics page hit. We have added a `title` property to the event to help distinguish these page views from the rest. We'll see the usefulness of this later on.

Next, update the eventsMap to include the form field Redux actions:

```javascript
const eventsMap = {
  ROUTE_CHANGED: pageview,
  NAME_ENTERED: createPageview('/name-entered'),
  EMAIL_ENTERED: createPageview('/email-entered'),
  PHONE_NUMBER_ENTERED: createPageview('/phone-number-entered'),
  CREDIT_CARD_NUMBER_ENTERED: createPageview('/cc-number-entered'),
};
```

Here we're using the createPageview factory to create an event definition for each input field event.
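For a quick sanity check of what the factory produces, you can call it directly in isolation. This is a minimal sketch using the `createPageview` version shown above (the one without an `eventSchema`):

```javascript
// The factory from above, reproduced so this snippet runs on its own.
function createPageview(route) {
  return {
    eventFields: () => ({
      hitType: 'pageview',
      page: route,
      title: 'Payment Form Field',
    }),
  };
}

// Each form-field action is mapped to a fixed, synthetic page.
const nameEntered = createPageview('/name-entered');
const event = nameEntered.eventFields();
// event → { hitType: 'pageview', page: '/name-entered', title: 'Payment Form Field' }
```

Note that every NAME_ENTERED action produces this same event, regardless of what the user typed; that behaviour is what the `eventSchema` refinement discussed next is there to rein in.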
Go back to your browser, navigate to the app's /payment page, and fill in the form. You should see analytics events being logged to the console whenever the form fields change. This is what we wanted, right? Yes and no. We wanted to hit Google Analytics with a page view when a user enters their name, email, phone number, or credit card number. But with our current setup, we are hitting Google Analytics with a page view each time the user adds a single character to a form field. So if the user enters John in the name field, Google Analytics records four hits to /name-entered instead of one.

The eventSchema can help us with this problem. Update the createPageview factory in src/analytics.js as follows.

function createPageview(fieldName, route) {
  return {
    eventFields: (action, prevState) => ({
      hitType: 'pageview',
      page: prevState[fieldName].length === 0 ? route : '',
      title: 'Payment Form Field',
    }),
    eventSchema: {
      page: value => value === route,
    },
  };
}

We updated the event definition returned by createPageview with a new property: eventSchema. Here our eventSchema expects the event object to have a page key whose value is equal to the route. If the value is not equal to the route, then Redux Beacon won't push anything to Google Analytics. Now look at the changes made to the eventFields method: a new conditional ensures the page property will only match the route when the form field's value is first being filled in. That is, if the previous state of the form field is empty (its length equals zero), then the page property is set to the route; otherwise the page property is set to an empty string. Update the eventsMap to use the revised event definition factory.
const eventsMap = {
  ROUTE_CHANGED: pageview,
  NAME_ENTERED: createPageview('name', '/name-entered'),
  EMAIL_ENTERED: createPageview('email', '/email-entered'),
  PHONE_NUMBER_ENTERED: createPageview('phoneNumber', '/phone-number-entered'),
  CREDIT_CARD_NUMBER_ENTERED: createPageview('ccNumber', '/cc-number-entered'),
};

Save the file, head on over to your browser, refresh the app, and navigate to the payment form. Did it work? Before, Redux Beacon would hit Google Analytics with a page view each time a form field's value changed. Now, Redux Beacon only sends one page hit per form field.

Almost done! There's one more event we need to capture before we can call it a day. We are tracking each route change, and we are tracking the number of times a user fills in their name, email, phone number, and credit card number. Now we need to track the number of times a user tries to submit the payment form with invalid inputs. Update the eventsMap with an event definition for the BUY_NOW_ATTEMPTED action.

const eventsMap = {
  ROUTE_CHANGED: pageview,
  NAME_ENTERED: createPageview('name', '/name-entered'),
  EMAIL_ENTERED: createPageview('email', '/email-entered'),
  PHONE_NUMBER_ENTERED: createPageview('phoneNumber', '/phone-number-entered'),
  CREDIT_CARD_NUMBER_ENTERED: createPageview('ccNumber', '/cc-number-entered'),
  BUY_NOW_ATTEMPTED: {
    eventFields: () => ({ hitType: 'pageview', page: '/buy-now-attempt' }),
  },
};

That's it! We mapped all the Redux actions required for the form's funnel report! Now, Google Analytics should receive all the data required to fill the Conversions > Goals > Funnel Visualization report. It's worth mentioning that the funnel visualization report is not real-time, so it might take a few hours before you start seeing any data.

A Word of Caution

There's one thing to be aware of when tracking form completion in Google Analytics.
The virtual page views we created for each form field have become part of the total page view counts and user flows. This isn't usually what we want. One way around this problem is to create a new view for your Google Analytics property with the form-field virtual page views filtered out. To do so, follow Google's instructions for creating a new view for your property. Once you have a new view, follow Google's instructions for creating a new custom filter at the view level. When creating the custom filter, set the filter type to Exclude, set the filter field to Page Title, set the filter pattern to Payment Form Field, then click Save. The reports in this view will contain every page view hit except those related to the payment form.

Before You Go

Here's what we've achieved:

- We learned how to create a destination funnel report in Google Analytics.
- We learned that Google Analytics expects most site changes to be triggered by page loads.
- We learned how to use Redux Beacon to map Redux actions to analytics events.
- We learned how to validate analytics events before sending them to Google Analytics.
- We learned that there are some caveats to using Google Analytics for tracking form completion.
https://rangle.io/blog/tracking-form-completion-in-google-analytics-with-redux/
A sensible ORM for PostgreSQL or SQLite

Project Description

An opinionated, lightweight ORM for PostgreSQL or SQLite. Prom has been used in both single-threaded and multi-threaded environments, including environments using green threads.

1-Minute Getting Started with SQLite

First, install prom:

$ pip install prom

Set an environment variable:

$ export PROM_DSN=prom.interface.sqlite.SQLite://:memory:

Start python:

$ python

Create a prom Orm:

>>> import prom
>>>
>>> class Foo(prom.Orm):
...     table_name = "foo_table_name"
...     bar = prom.Field(int)
...
>>>

Now go wild and create some Foo objects:

>>> for x in range(10):
...     f = Foo.create(bar=x)
...
>>>

Now query them:

>>> f = Foo.query.first()
>>> f.bar
0
>>> f.pk
1
>>>
>>> for f in Foo.query.in_bar([2, 3, 4]):
...     f.pk
...
3
4
5
>>>

Update them:

>>> for f in Foo.query:
...     f.bar += 100
...     f.save()
...
>>>

And get rid of them:

>>> for f in Foo.query:
...     f.delete()
...
>>>

Congratulations, you have now created, retrieved, updated, and deleted from your database.
Example – Create a User class

Here is how you would define a new Orm class:

# app.models (app/models.py)
from prom import Orm, Field, Index

class User(Orm):
    table_name = "user_table_name"
    username = Field(str, True, unique=True)  # string field (required) with a unique index
    password = Field(str, True)  # string field (required)
    email = Field(str)  # string field (not required)
    index_email = Index('email')  # set a normal index on the email field

You can specify the connection using a prom dsn url:

<full.python.path.InterfaceClass>://<username>:<password>@<host>:<port>/<database>?<options=val&query=string>#<name>

So to use the builtin Postgres interface on the testdb database on host localhost, with username testuser and password testpw:

prom.interface.postgres.PostgreSQL://testuser:testpw@localhost/testdb

To use our new User class:

# testprom.py
import prom
from app.models import User

prom.configure("prom.interface.postgres.PostgreSQL://testuser:testpw@localhost/testdb")

# create a user
u = User(username='foo', password='awesome_and_secure_pw_hash', email='foo@bar.com')
u.save()

# query for our new user
u = User.query.is_username('foo').get_one()
print u.username # foo

# get the user again via the primary key:
u2 = User.query.get_pk(u.pk)
print u2.username # foo

# let's add a bunch more users:
for x in range(10):
    username = "foo{}".format(x)
    ut = User(username=username, password="...", email="{}@bar.com".format(username))
    ut.save()

# now let's iterate through all our new users:
for u in User.query.get():
    print u.username

Environment Configuration

Prom can be automatically configured on import by setting the environment variable PROM_DSN:

export PROM_DSN=prom.interface.postgres.PostgreSQL://testuser:testpw@localhost/testdb

If you have multiple connections, you can set multiple environment variables:

export PROM_DSN_1=prom.interface.postgres.PostgreSQL://testuser:testpw@localhost/testdb1#conn_1
export PROM_DSN_2=prom.interface.postgres.PostgreSQL://testuser:testpw@localhost/testdb2#conn_2

You can also extend
the default prom.Query class and let your prom.Orm child know about it:

# app.models (app/models.py)
class DemoQuery(prom.Query):
    def get_by_foo(self, *foos):
        """get all demos with matching foos, ordered by last updated first"""
        return self.in_foo(*foos).desc_updated().get()

class DemoOrm(prom.Orm):
    query_class = DemoQuery

DemoOrm.query.get_by_foo(1, 2, 3) # this now works

Notice the query_class class property on the DemoOrm class. Now every instance of DemoOrm (or a child that derives from it) will forever use DemoQuery.

Using the Query class

You should check the actual code for the query class in prom.query.Query for all the methods you can use to create your queries. Prom allows you to set up a query using pseudo method names in the form:

command_fieldname(field_value)

So, if you wanted to select on the foo field, you could do:

query.is_foo(5)

Or, if you have the name of the field as a string:

command_field(fieldname, field_value)

So we could also select on foo this way:

query.is_field('foo', 5)

You can also pass a limit and page to get:

query.get(10, 1) # get 10 results for page 1 (offset 0)
query.get(10, 2) # get 10 results for page 2 (offset 10)

They can be chained together:

# SELECT * from table_name WHERE foo=10 AND bar='value 2' ORDER BY che DESC LIMIT 5
query.is_foo(10).is_bar("value 2").desc_che().get(5)

You can also write your own queries by hand:

query.raw("SELECT * FROM table_name WHERE foo = %s", [foo_val])

The prom.Query class has a couple of helpful query methods to make grabbing rows easy:

get – get(limit=None, page=None) – run the select query.
get_one – get_one() – run the select query with a LIMIT 1.
value – value() – similar to get_one() but only returns the selected field(s)
values – values(limit=None, page=None) – return the selected fields as a tuple, not an Orm instance

This is really handy when you want to get all the ids as a list:

# get all the bar ids we want
bar_ids = Bar.query.select_pk().values()

# now pull out the Foo instances that correspond to the Bar ids
foos = Foo.query.is_bar_id(bar_ids).get()

pk – pk() – return the selected primary key
pks – pks(limit=None, page=None) – return the selected primary keys
has – has() – return True if there is at least one row in the db matching the query
get_pk – get_pk(pk) – run the select query with a WHERE _id = pk
get_pks – get_pks([pk1, pk2, ...]) – run the select query with WHERE _id IN (...)
raw – raw(query_str, *query_args, **query_options) – run a raw query
all – all() – return an iterator that can move through every row in the db matching the query
count – count() – return an integer of how many rows match the query

Note: doing custom queries using raw is the only way to do join queries.

Specialty Queries

If you have a date or datetime field, you can pass kwargs to fine-tune date queries:

import datetime

class Foo(prom.Orm):
    table_name = "foo_table"
    dt = prom.Field(datetime.datetime)
    index_dt = prom.Index('dt')

# get all the foos that fall on the 7th of any month
r = Foo.query.is_dt(day=7).all()
# SELECT * FROM foo_table WHERE EXTRACT(DAY FROM dt) = 7

# get all the foos in 2013
r = Foo.query.is_dt(year=2013).all()

Hopefully you get the idea from the above code.

The Iterator class

The get and all query methods return a prom.query.Iterator instance. This instance has a useful attribute, has_more, that will be true if there are more rows in the db that match the query.
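One common way to implement a has_more flag like the one described above is to over-fetch by a single row: ask the db for limit + 1 rows and keep only the extra row's existence as the flag. The sketch below illustrates that idea with plain lists; prom's real Iterator is more involved, and LookaheadIterator and fetch are made-up names for illustration only.

```python
# Sketch of the "fetch limit + 1 rows" trick behind a has_more flag.
# The names here (LookaheadIterator, fetch) are illustrative, not prom's API.

class LookaheadIterator(object):
    def __init__(self, rows, limit):
        # if we got back more than `limit` rows, there are more matches in the db
        self.has_more = len(rows) > limit
        self._rows = rows[:limit]

    def __iter__(self):
        return iter(self._rows)

def fetch(all_rows, limit):
    # stand-in for a SELECT ... LIMIT limit + 1 against a real table
    return LookaheadIterator(all_rows[:limit + 1], limit)

it = fetch(list(range(10)), limit=3)
print(list(it), it.has_more)    # [0, 1, 2] True

it2 = fetch(list(range(2)), limit=3)
print(list(it2), it2.has_more)  # [0, 1] False
```

The caller pays for at most one extra row per query, which is usually much cheaper than issuing a separate COUNT(*) to decide whether a "next page" exists.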
Similar to the Query class, you can customize the Iterator class by setting the iterator_class class variable:

class DemoIterator(prom.Iterator):
    pass

class DemoOrm(prom.Orm):
    iterator_class = DemoIterator

Multiple db interfaces or connections

It's easy to have one set of prom.Orm children use one connection and another set use a different connection; the fragment part of a prom dsn url sets the name:

import prom

prom.configure("Interface://testuser:testpw@localhost/testdb#connection_1")
prom.configure("Interface://testuser:testpw@localhost/testdb#connection_2")

class Orm1(prom.Orm):
    connection_name = "connection_1"

class Orm2(prom.Orm):
    connection_name = "connection_2"

Now, any class that extends Orm1 will use connection_1 and any orm that extends Orm2 will use connection_2.

Schema class

The Field class

You can create fields in your schema using the Field class. The field has a signature like this:

Field(field_type, field_required, **field_options)

The field_type is the python type (eg, str or int or datetime) you want the field to be. The field_required is a boolean; it is true if the field needs to have a value, false if it doesn't need to be in the db. The field_options are any other settings for the field. You can also pass another Orm class as the field_type:

from prom import Orm, Field

class Orm1(Orm):
    table_name = "table_1"
    foo = Field(int)

class Orm2(Orm):
    table_name = "table_2"
    orm1_id = Field(Orm1, True)    # strong reference
    orm1_id_2 = Field(Orm1, False) # weak reference

Passing in an Orm class as the type of the field will create a foreign key reference to that Orm. If the field is required, then it will be a strong reference that deletes the row from Orm2 if the row from Orm1 is deleted; if the field is not required, then it is a weak reference, which will set the column to NULL in the db if the row from Orm1 is deleted.
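In SQL terms, the strong/weak reference behaviour described above corresponds to ON DELETE CASCADE for a required foreign key and ON DELETE SET NULL for an optional one. The sketch below demonstrates the distinction with the sqlite3 stdlib directly; prom generates the equivalent DDL from the Field definitions, so the hand-written SQL here is only an illustrative stand-in.

```python
# Strong vs weak foreign key references, hand-rolled in sqlite3 for
# illustration. Prom derives this from Field(Orm1, True/False) definitions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs per-connection
conn.execute("CREATE TABLE orm1 (_id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orm2 (
    _id INTEGER PRIMARY KEY,
    orm1_id INTEGER NOT NULL REFERENCES orm1(_id) ON DELETE CASCADE,
    orm1_id_2 INTEGER REFERENCES orm1(_id) ON DELETE SET NULL
)""")
conn.execute("INSERT INTO orm1 (_id) VALUES (1), (2)")
conn.execute("INSERT INTO orm2 (_id, orm1_id, orm1_id_2) VALUES (1, 1, 2)")

# weak reference: deleting the referenced row NULLs the column
conn.execute("DELETE FROM orm1 WHERE _id = 2")
weak_after = conn.execute("SELECT orm1_id_2 FROM orm2").fetchall()
print(weak_after)  # [(None,)]

# strong reference: deleting the referenced row deletes the referencing row
conn.execute("DELETE FROM orm1 WHERE _id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM orm2").fetchone()
print(remaining)   # (0,)
```

Note the PRAGMA: unlike Postgres, SQLite only enforces foreign keys when foreign_keys is switched on for the connection.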
Versions

While Prom will most likely work on other versions, these are the versions we are running it on (just for reference):

Python

$ python --version
Python 2.7.3

Postgres

$ psql --version
psql (PostgreSQL) 9.3.6

Installation

Postgres

If you want to use Prom with Postgres, you need psycopg2:

$ apt-get install libpq-dev python-dev
$ pip install psycopg2

Green threads

If you want to use Prom with gevent, you'll need gevent and psycogreen:

$ pip install gevent
$ pip install psycogreen

These are the versions we're using:

$ pip install "gevent==1.0.1"
$ pip install "psycogreen==1.0"

Then you can set up Prom like this:

import gevent.monkey
gevent.monkey.patch_all()

import prom.gevent
prom.gevent.patch_all()

Now you can use Prom in the same way you always have. If you would like to configure the threads and pooling, you can pass in some configuration options using the dsn; the parameters are async, pool_maxconn, pool_minconn, and pool_class. The only one you'll really care about is pool_maxconn, which sets how many connections should be created. All the options will be automatically set when prom.gevent.patch_all() is called.

Prom

Prom installs using pip:

$ pip install prom

Using for the first time

Prom takes the approach that you don't want to be hassled with table installation while developing, so when it tries to do something and sees that the table doesn't yet exist, it will use the fields defined on your prom.Orm child to create the table for you. That way you don't have to remember to run a script or craft some custom db query to add your tables; Prom takes care of that for you automatically. Likewise, if you add a field (and it's not required), then Prom will go ahead and add that field to your table so you don't have to bother with crafting ALTER queries while developing.
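The "create the table lazily on first failure" behaviour described above can be sketched with the sqlite3 stdlib: attempt the write, and if the db reports a missing table, create it from the field definitions and retry. Prom does this internally from the Orm's Field definitions; ensure_table, insert, and the fields list below are made-up stand-ins for illustration.

```python
# Sketch of lazy table creation: try the INSERT, and on "no such table"
# build the table from the field definitions and retry. The helper names
# here are illustrative, not prom's internal API.
import sqlite3

def ensure_table(conn, table_name, fields):
    cols = ", ".join("{} {}".format(name, sqltype) for name, sqltype in fields)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS {} (_id INTEGER PRIMARY KEY, {})".format(table_name, cols)
    )

def insert(conn, table_name, fields, **values):
    names = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    sql = "INSERT INTO {} ({}) VALUES ({})".format(table_name, names, marks)
    try:
        conn.execute(sql, list(values.values()))
    except sqlite3.OperationalError:
        # table doesn't exist yet: create it from the field definitions, retry
        ensure_table(conn, table_name, fields)
        conn.execute(sql, list(values.values()))

conn = sqlite3.connect(":memory:")
fields = [("bar", "INTEGER")]
insert(conn, "foo_table", fields, bar=5)  # table is created on the fly
rows = conn.execute("SELECT bar FROM foo_table").fetchall()
print(rows)  # [(5,)]
```

The trade-off of this convenience is that a typo in a table name silently creates a new empty table instead of raising, which is one reason this pattern suits development more than production.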
If you want to install the tables manually, you can create a script or something and use the install() method:

SomeOrm.install()
https://pypi.org/project/prom/
J&O Fabrics Store Newsletter

J&O Welcomes Back NASCAR! (May 12, 2008)

You've waited and prayed long enough, and slowly but surely your prayers are being answered. The endless calls you've had to make to manufacturers, distributors and retail stores, looking for your favorite NASCAR drivers and NASCAR racing materials, are coming to an end. There is a light at the end of your dark tunnel, and it is the light of J&O.

The National Association for Stock Car Auto Racing (NASCAR) is the largest sanctioning body of motorsports in the United States. The organization sanctions over 1,500 races at over 100 tracks in 39 states, Canada and Mexico. With its roots grounded in the entertainment sector of the southern USA, NASCAR has grown to become the second most popular professional sport in terms of television ratings inside the U.S. With over 75 million fans worldwide, the sport of car racing has become a very popular and extremely lucrative business. Over $3 billion in annual licensed products have been purchased by the ever-growing viewership, and as a result, Fortune 500 companies sponsor NASCAR more than any other governing body.
With these stats and facts out of the way, let's take a moment to give recognition to some of the great drivers, both past and present, that are in hot demand today. If you were brought up south of the Mason-Dixon border, then you are a fan of NASCAR racing and know these speed demons whether you like it or not. If not, here are some names you won't soon forget.

The Late Dale Earnhardt – 2000 Winston 500 Champion, four-time IROC Champion, 2002 Motorsports Hall of Fame Inductee, 2006 International Motorsports Hall of Fame Inductee, Sprint All-Star Race III, VI and IX Winner, ranked #2 on NASCAR's 50 Greatest Drivers list.

Dale Earnhardt, Jr. – NASCAR Sprint Cup / 2006 Crown Royal 400 Champion and driver of the #88 Mountain Dew AMP/National Guard Chevy Impala SS.

Jimmy Johnson – current defending NASCAR Sprint Cup Champion and driver of the #48 Lowes Chevrolet Impala SS.

Tony Stewart – NASCAR Winston Cup, Nextel and Indy Car championships, and driver of the #20 Toyota Camry and #20 Old Spice cars.

Jeff Gordon – four-time NASCAR Winston Cup Series Champion, three-time Daytona 500 winner, and driver of the #24 Chevrolet Impala.

[NASCAR Races]
Sprint Cup, Nationwide Series, Craftsman Truck Series, NASCAR Canadian Tire Series, NASCAR Corona Series, Regional Racing Series.

For all our NASCAR fans young and old, we thank you for your patience, your calls and, most of all, for keeping the fabrics commemorating this great American sport in demand. Your voice is being heard.
Keep checking in at our site for the latest new prints as they roll off the press, out of the mills, onto our site, and into your hands.

Check out our selection of NASCAR fabric here!
Check out our selection of flames fabric here!
Check out our selection of racing fabric here!
Check out our selection of transportation fabric here!

Posts by J&O Fabrics Store

Miracles Do Happen: J&O and ConKerr Cancer

When was the last time your local fabric store donated any fabric to you? After practiced begging and pleading in front of your bathroom mirror for that perfect line and doe-eyed look, most outlets would sooner put a price tag on a seemingly useless ¼-yard scrap than give it away to a striving designer or seamstress for free. But lo and behold, miracles do happen. At least for some of us.
Recently, one of South Jersey's … From SpongeBob to Scooby-Doo, from race cars to kitty-cat prints, there is a pillowcase to tickle every child's fancy, both young and old. Even teens have their selection of pin-up girls, retro designs and popular skull prints to choose from to make their stay a little more pleasant. … CN8, Wednesday at 7:30pm EST.

To find out more about ConKerr Cancer click here!
To find out more about J&O Fabrics click here!
To see our selection of novelty cotton click here!

Posts by J&O Fabrics Store

… Your Home Furnishings with Fabric from J & O Fabric Store

How would you honestly describe your home decor? Bold and inventive, or drab and staid? If you chose the latter, then your decor needs an overhaul. Where can you find the fabric you need to enhance your home? Look no further than J&O Fabrics!

Maybe all you need to improve your furnishings are some new curtains or other draperies. Following are four bold, eye-catching decorator fabrics that will add plenty of pizzazz to your home. First off is Flower Power Brown (ato00051). This chic designer fabric will lend attitude and style to anything it graces. Curtains, valances, decorative pillows and more can be made with Flower Power.
The same can be said for 50's Mambo Bark Cloth Green (bar00008), with its trippy, 1950's Havana vibe.

Shapes Decorative Red (ato00092) and Pop Square Retro (ato00059) can give any room a retro look. These decorative fabrics might even remind you of all those corny 1950's sci-fi movies.

If your needs tend toward upholstery, then the next decorative fabrics will help. Ambient Chenille (che00059) is a plush upholstery fabric with a deep black background upon which rest flowers and stems of emerald green, ruby red and copper. Dogs Playing Poker Tapestry (tap00006) is a droll fabric that is the perfect companion for family rooms, drawing rooms and libraries. Beltrame Upholstery (cou00007) is a fantastic chenille upholstery fabric. The retro design and colors combine to form a striking print.

After you use these discount decorator fabrics you will undoubtedly have many new adjectives with which to describe your home decor: bold, inventive, fabulous and fun. All of this is possible with a little help from J&O Fabrics!*

*If your inclination is toward quilting instead of upholstering or sewing, then please investigate J&O's vast array of discount quilting fabrics!
These fabrics and many more can be seen by browsing the fifty-plus categories of cotton novelty fabrics listed on our online fabric store.

Posts by J&O Fabrics Store

… Into 2008 Fabric Fashions with J&O Fabric

With the cold days of winter behind us and the new life that the upcoming season offers popping up all around us, it's time to take a serious look at the dark and heavy wardrobe still hanging in our closets and on the walls and furniture of our homes, and give it all a fresh, light look for Spring '08.

But what color palette should we be working with? What is in and what is out this season? Well, according to the Pantone Color Report, the Spring '08 palette is "defined by classic, versatile neutrals punctuated by splashes of invigorating brights, empowering consumers to explore new and creative ways to combine colors. With vibrant and uplifting shades and flowery deep pink undertones, the Spring '08 color palette perfectly reflects the cheerfulness of the season."

Pantone Inc. is one of the most influential authorities on color in the fashion world. The foremost king of color, Pantone is a market leader in providing insider insight into what will hit our designer boutiques, urban markets and, most of all, what fabrics and colors will be popular in our stores each season.

One of the keynote colors for spring 2008 is silver gray. Silver gray is subtle, versatile, flattering and elegant. This keynote color replaces the harsh black of winter and balances well with the soft, floral palette of spring.
A list of this season's exciting hues is as follows:

Red/Purple & Pink Family
Spring Crocus
Pink Mist
Cantaloupe
Rococco Red

Green & Blue Family
Daiquiri Green
Snorkel Blue

Yellow & Brown Family
Freesia
Golden Olive

Along with the rainbow of colors and large graphic floral prints that Spring '08's color palette is based on, this season also offers an array of looks that will fit each of your different personalities. From transparency to tuxedos, togas to disco and airy frocks to safari wear, spring fashion mixes in trends from all over the map and creates a complete look that is uniquely you.

From head to toe, from ceiling to floor, Spring '08 ushers in the creative designer in all of us. For the fashionistas both young and old, the trends of the season give the woman in us the opportunity to mix feminine frocks, fluid fabrics and delicate sequins with gender-bending, borrowed-from-your-man-or-brother's-closet suits, trousers and tailored vests. Greco-Roman, Aphrodite-inspired looks combine with the razzle-dazzle of metallics, while global fusion juxtaposes earthly neutrals with exotic ethnic influences for a look of house and home that screams Safari Diva into the sultry days of summer. With such a fabulous color and fashion palette to choose from this season, the inspiration needed to spark your inner creativity is endless.
So go on and ignite the Earthly Fashion Goddess in you, and take a moment to visit us at J&O for all the latest color palettes as we welcome in Spring '08 together.

Check out our selection of new Spring '08 fabric here!
Check out our Spring '08 upholstery fabric selection here!
Check out our Spring '08 dress fabric here!
Check out our Spring '08 decorative fabric here!

Posts by J&O Fabrics Store

… the Nose Knows

According to Dr. David Stewart (aromatherapist, author and lecturer), smell is the only sense directly connected to the central brain, rather than the frontal lobes. Because that part of the brain deals with nonverbal and emotional functions, he says, "our first response to anything we smell can be emotional rather than rational." In other words, scents, including oils, can also bring about emotional healing and increase spiritual awareness in ways we can't fully understand.

This theory explains why certain scents bring on thoughts of nostalgia that can transport us back to our childhood or to a specific time and place in our lives. These momentary recollections oftentimes come complete with not only the smell of the event, but with the depth of emotion as well. Some good, some bad. Like the scent of your grandmother's favorite perfume, your child's hair, or the whiff of fresh sap from a maple tree on a warm autumn day.
These are sometimes all we need to offset the stress and strain of our 9-to-5 and bring us to a place just outside the present… if only for a moment.

The aromas we experience in our lives stir our desires, activate our salivary glands, and soothe our aching hearts. But what if we could bottle up those scents to use whenever and wherever we desired? What if the sweet vanilla cookies your mom used to make to cheer you up, and remind you that you were loved even when the rest of the world seemed to let you down, could be at your disposal each time you needed that extra nurturing? Or that sensuous cologne, you know, the one that your husband wears every day just because he knows you like it? What if you could get the same scent, without all the chemical base products, to spray on your sheets each night he is out of town on business, to make your dreamtime just a little sweeter?

Scents carry not only thoughts of nostalgia in each drop, but can even heal and uplift an aching spirit and weary body. The most potent and natural of these scents come in the form of organic essential oils. Organic essential oils are plant oils extracted by distillation. Though the principal uses of essential oils are as flavoring agents and as therapeutic remedies in aromatherapy applications, they can also be used to sweeten your laundry, brighten your clothing, balance energies within and around, and keep unwanted pests away. Specific essential oils have therapeutic qualities that, when applied to clothing or bedding, can offer you additional protection against negative energies and toxic emotions. While many are found in commercial perfumes and scented air fresheners, these wonderful extracts from nature are best applied and utilized in their natural oil state.

Below is a list of some of the more popular organic essential oils, their properties and their usage.
When using essential oils (homemade or store bought) please adhere to the following guidelines to reap the full benefits of the oils. The fabrics in your home and in your closet will thank you for it.<br /><br /><strong>Safety Guidelines</strong><br /><br />1. Always use pure oils with no alcohol, chemicals or other additives. Read labels carefully, and look for the words ‘essential oil’. Many oils on the market today are highly diluted, and some contain alcohol or other chemicals which kill the essence of the plant.<br /><br />2. Remember that these are scent oils. They are used by inhaling the fragrance. It is not wise to ingest any plant oil regardless of how diluted it may be. Honor the plants, the Creator and your body and use aromatherapy oils as they were intended.<br /><br />3. Because pure oils are so strong, you must take care to avoid contact with your eyes or other mucous membranes. Never use an essential oil at full strength on your skin; always dilute it with olive or another pure carrier oil before applying, as it may burn. Also be aware that many plant oils can stain fabric if not properly diluted. So use caution and common sense when handling essential oils.<br /><br />4. Remember that animals have much more sensitive olfactory capabilities than humans. Also, they tend to ingest any substance that touches their bodies. Do not spray room misters near your furry, feathered or finny friends.<br /><br />5. Wash your hands after handling oils.<br /><br />6. Keep oils away from direct heat or flame.<br /><br />7. NEVER mix oils with rubbing alcohol or other chemicals.<br /><br /><strong>Application Guidelines</strong><br /><br /><em>Spray<br /></em><br />Fill a plastic spray bottle with pure (not tap) water. Add a few drops of the essential oil of your choice. Shake vigorously before each use. Spray on upholstery, window treatments, car interiors, bedding, lingerie drawer sachets or clothing for that personal touch. Custom made quilts for friends and loved ones are made a little more special when hinted with a tranquil chamomile or other selected scent of choice. It can also be sprayed on curtains by an open window to energetically cleanse and bless a room.<br /><br /><em>Cotton Ball</em><br /><br />Scent a cotton ball with jasmine, cinnamon or ylang ylang essential oil. Place in the corners of your fabric piles, linen closets or wherever you keep your fabric craft projects for at least 24 hours. This will leave a sweet and harmonious energy on the fabric for you and your friends, clients or loved ones to enjoy.<br /><br /><em>Laundering</em><br /><br />Add a few drops of rosewood, peppermint or patchouli essential oil to the last phase of your rinse cycle to sweeten your laundry. Add a little lemon oil to freshen the load.<br /><br />Add a little lavender or cinnamon to your unscented dryer sheets for a softer, more aromatic load with a hint of calming residue on your garments, bedding or other fabricated item.<br /><br /><div><br /><br /><br /><div></div><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><em>Essential Oils</em><br /><br /><em>Frankincense or Bergamot</em>- Frankincense has been used for hundreds of years in many different religious traditions. It protects against negative energies and helps to cleanse the mind of depression and heavy emotions. 
Bergamot relieves anger and anxiety, and assists in elevating our spirits.<br /><br /><em>Honeysuckle</em> - A sweet, lighthearted scent credited with expanding joy. Good to use if you're feeling grumpy, old or weary.<br /><br /><em>Chamomile</em>- A soothing and relaxing oil good for calming energies within and around.<br /><br /><em>Eucalyptus</em>- Historically used as a germicide, this oil is also commonly used for medicinal purposes, as it works great to alleviate a nagging cough. Spray on bedding and pillows for a more restful night’s sleep.<br /><br /><em>Gardenia</em>- A feminine scent with a hint of mystery, said to enhance a woman’s innate kindness.<br /><br /><em>Jasmine</em>- Uplifting scent that for some is sexually stimulating. Spray on lingerie or bedding for enhanced energies.<br /><br /><em>Lavender</em>- Calms, soothes, settles the nerves, and repels fleas. A great essential oil to place on dryer sheets when laundering bedding for your frisky pet, colicky infant or spa upholstery.<br /><br /><em>Lemongrass, camphor & citronella</em>– Spray these oils on your outdoor fabrics for additional protection against summer pests. All three have insect repellent properties.<br /><br /><em>Orange</em>- Instead of the little artificially scented tree that loses its scent the moment you roll down the window, spray a little orange essential oil dilution on the vinyl seats of your '65 Mustang for a more natural and long lasting air freshener.<br /><br /><em>Vetivert</em>- Relaxing and helps with insomnia. This essential oil carries a very unique scent which some people find unpleasant. Place a few drops in a water diluted spray bottle for a light spritz on your bedding before bedtime.<br /><br /><em>Oregano</em> - Useful as a fungicide and assists in balancing out personal insecurities. 
Place a few drops of this oil on a cotton ball or two and place in the same storage area as your fabrics for a clean, fungus free environment.<br /><br /><em>Patchouli, Pine or Rosemary</em> – Helps fight against fatigue. Lightly spray on clothing when your energy is depleted.<br /><br /><em>Sage</em>– A protective essential oil. Good for cleansing the mind, body, spirit & home of negative or heavy energies. Spray around your house, dab on your clothing, spritz gently on upholstery.<br /><br /><em>Marjoram or Ylang Ylang</em>- Good essential oils for alleviating the discomforts associated with PMS. Spray lightly on garments, bedding, and on fabric wrapped heating packs when required.<br /><br />So the next time you pick up your <a href="">Amy Butler </a>cotton, <a href="">Satin L’more</a>, or <a href="">Retro upholstery </a>to make that one of a kind quilt for your favorite grandmother, birthday lingerie for yourself, or cozy love seat for you & your wife’s 50th wedding anniversary, take a moment to baptize your craft with the perfect organic essential oil to make it complete….naturally. 
With a little help from J&O, you can turn your creations into truly unique reflections of the senses.<br /><br /><br /><em><strong>Check out our </strong></em><a href=""><em><strong>NEW fabric </strong></em></a><em><strong>section here!<br />Check out our complete </strong></em><a href=""><em><strong>fabric</strong></em></a><em><strong> section here!<br />Check out our collection of </strong></em><a href=""><em><strong>quilting fabric </strong></em></a><em><strong>here!</strong> </em><br /><br /><br /><div><em><strong>Check out our </strong><a href=""><strong>upholstery fabric </strong></a><strong>here!</strong></em></div><br /><br /><div><em><strong>Check out our </strong><a href=""><strong>decorative fabric </strong></a><strong>for bedding & home accent pieces here!<br /></strong></em><br /></div><br /><br /><br /><div></div><br /><br /><br /><div><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div></div>jandofabrics Bikini Contest<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><br /><div>Last Friday after work, my fellow co-workers and I went out for ladies night at a local hang out where we were guaranteed to have some good food, good laughs and a good time on the dance floor. While we were sitting in the lounge checking out the parade of young adult and middle aged fashionistas, an advertisement on the club’s monitor caught my eye. It read:</div><br /><div align="center"><br />“<span ><strong>Best homemade bikini contest next week</strong>!”</span></div><br /><div><br />A picture of a model-like playboy bunny posed scantily in a hot <a href="">white patent leather </a>bikini trimmed with pink feathers was all that it would take to spark a curious excitement in some, and plant a seed of creative ideas in others. 
Without knowing all the particulars of the contest, one thing I did know for sure was that <strong>J&O</strong> was definitely the place the soon to be contestants needed to come for all their bikini crafting needs.</div><br /><br /><div>From low hair faux furs, lame' and pima dotted cottons, to cute Betty Boop, organic hemp and crushed velvets, the ideas that could get you first place are endless. A short trip to your <strong>J&O</strong> fabric scrap pile and trim bag is all that you will need to make your homemade bikini complete. Used all your fabrics up with nothing to spare? Have no fear. With fabulous new fabrics coming into <strong>J&O</strong> every week, you can design and create a swimsuit masterpiece that will go down in history as the hottest homemade craft this side of the Delaware.</div><br /><br /><div>I can see it now: young girls repping their college team colors and mascots in cute two pieces, divas in glittery sequins with beaded ties, and Go-Green advocates in natural organics trimmed with fauna leaves and recycled fasteners. If I shed another 20 lbs and freshen up my seamstress skills, I may have a chance for fortune and fame with all the left over Christmas tree decorations in my overstuffed closet. I’d fasten assorted ornaments, strings of popcorn and a few red bows to my green sequined bikini fabric and call it <em>Christmas in April</em>. I’d even place a star upon my head and be the light and life of the party as I receive my award for most festive!</div><br /><br /><div>For those of you living in the surrounding area, it’s not too late! If you got the time, we got the place. Stop in and pick up your homemade contest fabric and trims today. 
With a little money and creativity, your little hand crafted polka dot bikini could be your next claim to fame.</div><br /><br /><br /><div><em>Check out our hot <a href="">new fabric </a>selection here!</em></div><br /><div><em>Check out our array of <a href="">cotton novelty fabric </a>here!</em></div><br /><div><em>Check out our fabulous <a href="">dress fabric</a> section here!</em></div><br /><div><em>Check out our festive <a href="">holiday fabric </a>section here!</em></div><br /><br /><div><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></div>jandofabrics Yo Yo's! Great Ideas for Remnant Fabrics<div><br /><br /><br /><div><span style="font-size:180%;">Yo Yos</span><br /><br /></div><a href=""><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: pointer; TEXT-ALIGN: center" alt="" src="" border="0" /></a><br /><span style="font-size:130%;"></span></div><div><span style="font-size:130%;">How to sew a basic Yo-Yo:</span><br /><br />Step 1:<br /><br />Gather up the materials needed in preparation for the yo-yo construction. 
You will need the following:<br /><br />*Fabric - old clothing, scraps around the house, anything you can think of, but it must be of medium to light weight so that the yo-yo will gather and lay out smoothly<br /><br />*Marking Tools - quilting pencils or tailoring chalk<br /><br />*Well Sharpened Scissors<br /><br />*Hand Sewing Needles<br /><br />*Quality Thread<br /><br />Step 2:<br /><br />Using your marking tool, trace a circle onto the wrong side of your fabric roughly twice the diameter of the finished yo-yo you want, then cut it out (a drinking glass, saucer or CD makes a handy template).<br /><br />Step 3:<br /><br />Thread your needle, knot the end, and fold the raw edge of the circle over about 1/4 inch as you sew a running stitch all the way around the edge, catching the fold as you go.<br /><br />Step 4:<br /><br />Gently pull the thread to gather the edge in toward the center, then flatten the circle into a disk with the small opening centered.<br /><br />Knot your thread securely and trim the tail. Your first yo-yo is complete.<br /><br /><br /><br /><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /><span style="font-size:130%;">How to Sew Yo-Yos Together:</span><br /><br />Step 1:<br /><br />Make as many yo-yos as your pattern or project calls for!<br /><br />Step 2:<br /><br />Place two yo-yos together with their gathered sides facing and whip stitch the touching edges together with a few small stitches, then knot and hide the thread inside the fold.<br /><br />Step 3:<br /><br />To attach the rest of the yo-yos, repeat step 2 by successively adding on more yo-yos until you have completed your row or pattern.<br /><br />Happy Sewing!<br /><br /><br />Fabric supply link:<br /><br />J & O Fabric Center - <a href="">jandofabrics.com </a><br /><br /><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></div>jandofabrics's Final Four! 
College Fabric For All!<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><br /><br /><br />Four schools, one national championship….the NCAA’s Men’s Final Four is upon us as this year’s March Madness continues in a frenzy that’s sweeping the nation for yet another season!<br /><br /><br /><div><div><div><br /></div><div><div><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" />The vertically UNchallenged, quick footed, fast handed, 3-point throwin’, movin-and-a-shakin’, boppin-and-a-dunkin’, flyin’-through-the-air-like-Mike men of college b-ball have rounded up their teams from the four regions of the country for a month long trek across rough terrain and jungle-like environments in a journey that would lead only the strongest of the strong to the oasis of the Alamodome. In the heart of Texas, these ambitious athletes fight through sore muscles, pulled tendons, and fractured fingers for the long awaited and eagerly anticipated rights to the war trophy that would grant the best team the privilege to bear the title of “NCAA 2008 Champions” and earn them the rights to private and public gloating until next year’s march to madness starts all over again.<br /><br /><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>Basketball fans, ball players and tall athletic men admirers will be on hand to experience this most exciting showdown in NCAA basketball. From miraculous buzzer-beaters to unforgettable upsets, nothing compares to the Madness of the Final Four. And the fun doesn’t end there. The road to the final championship game is paved with award ceremonies, block parties, concerts, tournaments, clinics, rallies and the annual Hoop City and Big Dance events. 
It all culminates on April 7th in a testosterone filled arena where the doors to the Alamodome open to a sold out crowd of screaming spectators garnished in their team colors and branding the mascot logos that mark them as #1 fans. From Kansas Jayhawks seat cushions to Memphis Tigers custom made infant outfits, the final four teams (UNC, KANSAS, MEMPHIS & UCLA) will be represented across sports bar and living room flat screens nationwide. The young, old and wildly fanatic will be fighting for victory and rooting for their favorite college teams as the blow of the referee's whistle marks the beginning of the end for the weaker team, and the rise to glory for the next 2008 NCAA National Champions.<br /><br /><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>Only two of the final four teams will take center court in the days to come; and while the team colors of those that have fallen may start to fade as the process of elimination gets underway, J&O Fabrics promises to keep their name and team spirit alive in our selection of college team cottons, fleeces, and vinyl prints long after the last cheer has been shouted and the last confetti has dropped. 
</div></div><br /><div><br /><div><br /></div><br /><div align="center">March with us into the madness<br /></div><br /><div align="center"><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><em>When</em>: <strong>Monday April 7, 2008</strong></div><br /><br /><div align="center">Game time @ 8:21pm<br /></div><br /><div align="center"><em>Where</em>: <strong>San Antonio, Texas Alamodome</strong><br /><br /></div><br /><div align="center"><em>Why:</em><strong> To see all J&O College fabric fashionistas ...of course!</strong></div><br /><br /><br /><br /><div><em>Check out our selection of </em><a href=""><em>college basketball cotton fabric </em></a><em>here!</em></div><br /><br /><div><em>Check out our selection of </em><a href=""><em>college basketball fleece fabric </em></a><em>here!</em></div><br /><br /><div><em>Check out our selection of </em><a href=""><em>college basketball vinyl </em></a><em>here!</em></div><br /><br /><div><em>Check out our selection of </em><a href=""><em>novelty basketball fabric </em></a><em>here!</em></div><br /><br /><div><em>Check out our selection of </em><a href=""><em>NBA cotton fabric </em></a><em>here!</em></div><br /></div></div></div>jandofabrics What Your Fabric Color Says About You!<a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> Color is ubiquitous. It is everywhere and in everything. Color is mood altering, energy balancing, absorbing, radiating, enhancing and subduing. And while different cultures place symbolic attributes and meanings on specific colors, scientific research in neurology, psychology & ophthalmology has found preliminary evidence that the effects of color do not solely depend on cultural associations, but more importantly, are based on the fact that the human eye perceives color through sensors that are sensitive and responsive to light. 
This explains why both the sighted and the blind respond to the color blue, as well as why both adults and children of different nationalities lose their tempers more easily when in a yellow colored room.<br /><div><div><div><div><br /><div>Several findings indicate that color and light have even been used as a source of healing since the beginning of recorded time as well. When you absorb color energy, it travels through the nervous system to the part of the body that needs it. Each body has its own optimum state of well being and is constantly seeking ways to maintain or restore a balanced state. Utilizing color is one way to help our bodies maintain this harmonious state. </div><br /><div>In ancient Indian, Chinese & Egyptian cultures, health related treatments were based around the theory that health was not only contingent upon balancing our physical needs, but our emotional, mental and spiritual needs as well. And color application helped to do that. Feng Shui, chromotherapy and colorology are a few other healing arts based around aspects of the same theory.</div><br /><div>Color and light have the ability to balance the energy wherever a person’s body may be lacking. In the ancient Indian healing art of the Kundalini System, colors affect specific energy centers in our body temples that help our seven chakras open & flow freely. These energy centers govern specific organs in our body temples and vibrate to specific colors that can be reenergized through visualization and application.<br /></div><div><strong>Crown/Head chakra - violet<br />Third Eye chakra - indigo<br />Throat chakra - blue<br />Heart chakra - green<br />Solar Plexus chakra - yellow<br />Sacral Plexus chakra - orange<br />Root/Base chakra - red</strong></div><br /><div>Color has the same ability when applied to the paint & décor of a room. 
Whether it’s the color of the walls, a piece of upholstered furniture, the window treatment, or a whole décor theme, color gives a room its look and feel and will therefore bring a positive or negative emotion to the person who enters it. </div><br /><div>In the ancient Asian concept of the Feng Shui Five-Element Theory, every color is represented. These five elements are fire, water, metal, wood, and earth. Reds, oranges, brilliant yellows, pinks, and purples represent the element of fire, aligning with passion, danger, and a high energy level. White, gray, silver, and gold colors relate to the metal element and should be used as an accent rather than a main color, as they represent clarity and balance. The colors blue and black are associated with the water element and call forth freshness and abundance. Black is used in moderation when decorating a child's room because of its absorbent properties. Green and brown colors relate to the wood element and provide qualities of health and prosperity. Pale yellows and beige colors are related to the earth element and provide a strong, steady, and stable atmosphere. While it is wise to pay attention to the use of these strong colors when creating or diffusing energy in your home or office, pastel colors can be used more freely. Pastels are moderate colors that do not represent any energy inhibiting dangers.<br /><br />Have you ever noticed how certain colors make you feel? How that certain dress or suit seems to make your face glow a little brighter? How brown wall paneling or red drapes affect your mood? Color truly does affect us more than we know. But the more we understand how we process and view color, the more we can manipulate its usage through our choice of fabrics. </div><br /><div>Below is a list of some of the more common colors found not only in nature and the man made world around us, but in the fabrics that we purchase for the purpose of creating the very look or feel we are speaking of. 
When you think about it, fabric can truly be used to convey a wealth of energy and information into a garment or home décor. From its color and texture, to its print and design, fabrics speak volumes without making a sound. Hopefully with this additional knowledge and deeper insight into the vibrations around each color, your next fabric selection will hold a deeper purpose and meaning.</div><br /><div><strong>Red<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a></strong></div><br /><div>Red is an emotionally intense color, evoking energy and desire. It represents the life force contained in our physical bodies. Red attracts attention whether draped on a female form or covering your plush sofa cushions. But be careful, red can make you appear heavier and raise your blood pressure as well. </div><br /><div><em>Examples</em>: red cars, red shoes, red light district, red stop signs, seeing ‘red’, painting the town ‘red’, red devil</div><br /><div><strong>Orange</strong></div><div>Orange is an anti-depressant and appetite stimulator. It vibrates with our emotional sides and is reflective of a warm hearted disposition. Many fast food chains effectively use this color in their marketing strategies for this very reason.<br /></div><div></div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><strong>Yellow </strong><br /><div>Yellow is the most difficult color for the human eye to take in, yet it stimulates our minds and helps us to focus, boosts our memory, and enhances our concentration. Maybe this is why legal pads and post-its are marketed by the industry on yellow paper. 
While some say that yellow is an optimistic color (as in the yellow ribbon put out when soldiers are at war), adults and babies have been documented to lose their tempers more in rooms coated in this color.<br /><br /><strong>Green</strong><br />Green carries a harmonizing and relaxing energy. It is the color of nature, fertility, creativity, wealth and good fortune. For women especially, it serves as a womb rejuvenator, stimulating the energy flow of the chi force. Because of green's calming effects, it is the color of choice for most doctor scrubs, hospital waiting rooms, and the green rooms that performers use prior to putting on a show. </div><br /><div><strong>Blue</strong><br />Like the many shades of green, blue also has a calming and tranquil effect on the mind and body. And while studies have found that people tend to be more productive in rooms painted blue, it is synonymous with a peaceful state. Fashion stylists recommend wearing shades of blue to job interviews because it symbolizes loyalty (as in the phrase, he is a ‘true blue’ friend). Many interior designers make use of the many shades of blue and green in their bathroom and spa decors.<br /><br /><strong>Purple<br /></strong>Purple is a very powerful color. It is the color of royalty and the finer things in life. It reflects a high sense of self both mentally and spiritually, and nurtures creativity.</div><br /><div><em>Example:</em> purple heart of honor<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a></div><br /><div><strong>Turquoise</strong><br />Turquoise and silver metals are synonymous with the Native Americans who used their natural qualities of protection and dedication for symbolic purposes in many of their jewelry pieces and body adornments. Turquoise stirs the imagination and stimulates concentration as well.<br /><br /><strong>Pink</strong><br />While red is the color of desire, pink is the color of love void of this desire. 
It is romance and affection and symbolic of universal love. Pink is flirtatious, yet carries a calming effect on our overall disposition. A case study done in a state prison years ago found that when the bright orange colored uniforms the prisoners wore were replaced with pink ones, the number of fights and overall level of aggression was reduced. In western culture, pink is symbolic of girls mainly because of the sweetness little girls were conditioned to possess.</div><br /><div><strong>Black<br /></strong>Though not truly considered a color, black is still considered very symbolic in many cultures. Most notably symbolic of things dark and mysterious in western cultures, black is often worn by funeral attendants and those in mourning. Black is also the color most associated with power and authority. In the fashion industry, black is considered a stylish and timeless color that not only has a thinning effect on the body, but is great for subduing undesirable physical attributes. Because it absorbs energy and light strongly, one must be mindful when furnishing or garmenting too heavily in black.</div><br /><div><em>Examples</em>: black tie events, black magic</div><br /><div><strong>White</strong><br />In many cultures, white is the color of innocence and purity. On the color spectrum, it is considered a neutral color and reflects light as well as energy. In many eastern cultures, it is worn to funerals and other spiritual ceremonies. White is a very revealing color, and as such, requires an attentive eye when used to upholster, garment and window dress.</div><br /><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /><strong>Brown</strong><br />Brown is an earthy color representing the natural qualities of Mother Nature. It is considered to be a conservative color and a favorite of many men.<br /><br /><div><strong>Grey</strong><br />Grey is the symbolic color of compromise. 
It denotes renunciation and suppression, which may explain why older depictions of monks and nuns show them in grey colored robes. If you are looking to liberate yourself from a certain way of thinking, doing or experiencing, grey is the last color you want to surround yourself with. <a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a></div><br /><br /><br /><div><strong>Lavender<br /></strong>Lavender symbolizes vanity and ultra-femininity. A perfect match for the female essence in all little girls and seasoned women.</div><br /><br /><br /><br /><br /><div align="center"><em><strong>What do the primary colors you choose to garment your body and home with say about you?</strong></em></div><br /><br /><br /><br /><div><em>Check out our selection of <a href="">novelty fabric </a>here!<br />Check out our array of solid colored <a href="">dress fabric </a>here!<br />Check out our selection of <a href="">upholstery fabric </a>here!<br />Check out our array of colorful <a href="">broadcloth</a> solids here!</em></div><br /><div><em>Check out our selection of colorful <a href="">decorative fabric </a>here!</em></div><br /><br /><div>Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div></div><br /><br /><br /><br /><div></div></div></div>jandofabrics Hillary and Barack in Black & White.<div><br /><div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>The latest consensus in the Presidential Campaign between Hillary Clinton and Barack Obama seems to come down to a power struggle between <strong><em>black & white</em></strong>……..clothing, that is.<br /><br /><div>In the matter of color, there is something to be said about the energy that colors carry and emit. Colors have symbolic meanings depending on various theorists and cultures. In the world of fashion and interior design, colors play an integral part in enhancing or subduing our assets and flaws. 
It can shape the mood of the wearer, as well as the environment it covers. Most importantly, specific colors have been conditioned in our minds to be viewed as a source of power and influence, molding the perceptions of both the individual and the onlooker. Such is the case with <strong><em>black & white</em></strong>.</div><br /><div>Let’s take <strong><em>black</em></strong> for instance. <strong><em>Black</em> </strong>is the ultimate dark. It is considered to be a conservative and conventional color. It can be serious and sophisticated, yet sexy and mysterious. On the streets, <strong><em>black</em></strong> is the sign of the rebel. In the corporate world, it connotes a sense of a very important person. For most, <em><strong>black </strong></em>is undeniably the <em>power</em> color of choice. And in the western battle of good vs. evil….<em><strong>black</strong></em> is always depicted as the bad guy. </div><br /><div>When we see Hillary or Barack in <strong><em>black</em></strong>, we tend to see them as serious contenders in the fight for the win. We experience through our senses, their strong, confident and protective armor of <strong><em>black</em></strong> at work. Depending on the media portrayal of the debate at hand, <strong><em>black</em></strong> can have a positive or negative effect on the nation’s image of the candidate. And they know it too.</div><br /><div>On the opposite end of the spectrum, we have the color <strong><em>white</em></strong>. As Americans, we have been conditioned to believe that <strong><em>white</em></strong> is the color of everything innocent, honest and pure. Doctors, healers and sages wear white, as well as first time brides. <strong><em>White</em></strong> is considered brilliant and angelic and for some, a <em>power</em> color that invokes confidence and assurance on all levels. In the western battles of good vs. evil…..<strong><em>white</em> </strong>denotes the good guy, and good guys always win. 
Hillary and Barack know this as well and wear it strategically when necessary.</div><br /><div>Color is everywhere, and image is everything. As we watch the debates, view highlights leading up to the elections, and <em>Google</em> Hillary and Barack in our attempts to catch up with the latest news in this history making moment of time, take a second to check out the number of occasions, and location of events, that they choose to wear their <strong><em>black </em></strong>or <strong><em>white</em></strong>. I am sure they are as consciously aware of their choices as we are subconsciously aware of the effects those color choices have on our perceptions of each of our potential Presidents, as well as in the world around us. </div><a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a><br /><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /><br /><br /><div align="center"><strong>What is your power color?</strong></div><br /><div align="center"></div><br /><br /><div><em>Check out our full selection of <a href="">fabric</a> here!<br />Check out our selection of new <a href="">black fabric </a>here!<br />Check out our selection of new <a href="">white fabric </a>here</em>!</div><br /><div><em>Check out our selection of <a href="">black wool suiting fabric </a>here!<br /></em></div><br /><br /><br /><br /><div></div><br /><div><em>** For more on colors and their symbolic meanings, qualities and energies, stay tuned......</em></div><br /><br /><br /><br /><div><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div></div>jandofabrics Around The Town. 
Happy Sewing!<div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> Just as Santa Claus represents Christmas, a hopping life-size bunny with a basket full of colorful eggs is the quintessential image of Easter.<br /><br />But how did the Easter bunny and egg hunts become synonymous with Easter anyway? And how are these various images connected to the Christian version of the holiday? Aside from the historical findings that the hare, eggs and the Prophet Yahshua’s resurrection from the dead are all symbols of ‘fertility and life’, I strain to see any other relation.<br /><br />In fact, my earliest memories surrounding the Easter period remind me of a haute couture fashion show on the runways of the House of God. The biggest question was not whether or not today would be the day you asked for forgiveness for the time you cheated on your math test, but whether or not today would be the day you wore your new <a href="">royal crepe back satin </a>dress with the coordinating clasp-styled handbag. Between the spirited catwalks up the aisles from the young ladies in their <a href="">embroidered brocades</a>, silky <a href="">chiffons</a> and elegant <a href="">furs</a>, and the young men in their raw <a href="">linens</a>, <a href="">suiting</a> fabrics and stylish <a href="">suedes</a>, the after-service gossip overshadowed the sermon this Easter Sunday. Even more important was whether or not Sister Victoria’s Easter bonnet would outdo Sister Evelyn’s for the second year in a row. It never ceased to amaze me how she was able to keep that two-foot-wide <a href="">lilac gingham check printed </a>hat with floral trim positioned so neatly while catching the Holy Spirit at the same time. I wouldn’t be surprised if TV celebrity Tyra Banks began her modeling career and got the inspiration for her current hit show ‘America’s Next Top Model’, while sitting in the pews of this very church as a child. 
<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a><br /><br /><br /><div>Whether this holiest of Christian Holidays finds you decorating baskets for your yearly Easter egg hunt, decorating your dinner table for the traditional family Easter dinner, or finding just the right <a href="">sparkle organza </a>to match that brand new <a href="">pink lamé</a> fabric you bought for your special Easter Sunday dress, let <strong>J&O</strong> be your <strong>#1</strong> source for the perfect Easter holiday fabric.<br /></div><br /><br /><br /><br /><div><em></em></div><br /><div><em>Check out our <a href="">Easter fabric </a>here!<br />Check out our selection of <a href="">Religious fabric </a>here!<br />Check out our <a href="">dress fabric </a>collection here!<br />Check out our <a href="">novelty fabric </a>section here!</em></div><br /><br /><br /><br /><br /><div>Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div>jandofabrics Fleece is a Hit!<div><br /><div align="center"><br /><a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><em><span style="color:#ff0000;">Take me out to the ball game.<br />Take me out to the crowd.<br />Buy me some J&O fabric.<br />I don’t care if we never shop anywhere else </span></em></div><br /><div align="center"><em><span style="color:#ff0000;">‘cause it’s the best fabric store in the world!</span></em></div><em><span style="color:#ff0000;"></span></em><em><br /><br /><div align="left"></em>With opening day less than 20 days away, and the cold weather still lingering in our midst, MLB (Major League Baseball) fans are eagerly gearing up for another round of submarine-style pitches, sneaky stolen bases and out-of-the-park home run hitters from their favorite teams. 
With all the excitement of the spring season ahead, it’s easy to overlook some of the simple things that will make the long innings on hard stadium seats a little more comfortable.</div><br /><br /><div align="center"><em><span style="color:#ff0000;">It's root, root, root for the home team.</span></em></div><br /><div align="center"><em><span style="color:#ff0000;">I don't care if they lose.</span></em></div><br /><br /><div>Whether you’re rooting from the plastic seats of the ballpark or from the soft cushions of your favorite couch, our MLB fleeces can offer that extra bit of softness and warmth to go along with the rooting fans and the foot-long hotdogs that make a day at the park all worthwhile.<br /></div><br /><div>At J&O, we carry some of the coolest MLB fleece fabrics for our baseball fans nationwide. With a few yards of material and a little creativity, your team spirit can be expressed in the form of a simple logo-filled craft project to carry or wear when the pitcher takes the mound.</div><br /><br /><div>If you’re a New York Mets fan, make great seat cushions for the ‘fanny’ in you. If you are a diehard Philadelphia Phillies fan, gather your family and lay a home-made fleece throw across your laps or around your shoulders to keep the nippy nights under the big lights a little more special. And if you’d rather be watching daredevil men speed around and around the NASCAR track as screaming fans cheer on instead, but you have a knack for crafts too, then gather a yard or two from each team's fleece to make unique gifts for friends and loved ones that they will surely cherish even after the last game has been won.</div><br /><br /><div></div><br /><div align="center"><em><span style="color:#ff0000;">Yes it's one, two, three strikes you’re out, at the ole ball game!</span></em> </div><br /><br /><div>Out of fabric that is, if you wait too long. </div><br /><div>Hurry before it's all gone. 
</div><br /><div>Order your MLB fleece today!</div><br /><div><br /></div><br /><div align="center"><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> <a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a></div><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /><br /><br /><p align="left">Check out our full <a href=""><em>MLB fleece</em> </a>fabric section here!<br />Check out our <a href=""><em>baseball novelty fabric</em> </a>selection here!</p><br /><p align="left">Check out our other <a href=""><em>fleece fabric</em> </a>here!</p><br /><p align="left">Check out our full <a href=""><em>novelty fabric</em> </a>section here! </p><br /><br /><p align="left"><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></p></div>jandofabrics Century Designers Rule! Vintage Fabric Rocks! Mid 20th Century art meets fabric with these legendary artists and designers. If you’ve recently reupholstered your sitting room chairs or engaged in the arduous yet fulfilling task of selecting fabric for a long overdue window treatment, then most likely you have met some of our fine textile designers along the way…..at least on fabric.<br /><div><div><div><div><br /><a href="">Geometri Lilac</a>, <a href="">Quatrefoil Violet</a>, <a href="">Circles Khaki</a>, and <a href="">Pavement Rust</a>; better known as Verner Panton, Alexander Girard, Eames and George Nelson. 
And while we may not be able to introduce you face to face, we’d like to take a moment to give you a little background history on our famous upholstery and decorative designers.</div><br /><div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" height="83" alt="" src="" width="99" border="0" /></a><strong>Verner Panton</strong> (Feb 1926 – Sept 1998) is considered one of Denmark’s most influential 20th century furniture and interior designers. During his career, he created innovative and futuristic designs in a variety of materials and vibrant colors. Panton’s most well-known furniture models are still in production today.</div><br /><br /><div>In the late 1960s and early 1970s, Verner Panton experimented with designing entire environments. Creating radical and psychedelic interiors by combining his hand-crafted curved furniture, wall upholstery, textiles and lighting, he utilized the circular patterns and cylindrical furniture that he became famous for in the years after his death.</div><div><br /><strong><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>George Nelson</strong> (1908-1986) was one of the founders of American modernism. He was born in Hartford, Connecticut and studied architecture at Yale University, where he received his bachelor’s degree in the fine arts. He went on to become a writer and would oftentimes argue with ferociousness for the modernist principles that offended industrial designers of that time.</div><br /><div>By 1940, George Nelson had drawn popular attention with several innovative concepts including the “family room” and the “storage wall”. 
He designed under Herman Miller for many years after, and was said to be one of the most eloquent voices on design and architecture in the USA of the 20th century.</div><br /><div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><strong>Alexander Girard</strong> (1907-1993) was one of the foremost designers of American textiles in the 20th Century. His work is characterized by lively, bold and colorful geometric patterns. He created fabrics for Herman Miller that were used on the designs of Charles and Ray Eames.</div><br /><br /><div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><strong>Charles and Ray Eames</strong> were a husband-and-wife design team full of boundless enthusiasm. Their unique synergy led to a revolution of sorts and a whole new look in furniture. Lean and modern, playful and functional, sleek, sophisticated, and beautifully simple summed up their signature pieces. It was the ‘Eames look’.</div><br /><div>That look and their relationship with Herman Miller created what would later become the world-renowned Eames lounge chair. Charles and Ray achieved their monumental success by approaching each project the same way: <em>Does it interest and intrigue us? Can we make it better? Will we have "serious fun" doing it?</em> They loved their work, which was a combination of art and science, design and architecture, process and product, style and function. "The details are not details," said Charles. "They make the product."</div><br /><div>Each of these architects and designers from the Mid 20th Century has transferred their artistic talents and gifted eye to the fabric textile industry, creating simple yet modern prints and designs for both upholstery and decorative collections. At <strong>J&O</strong>, we are proud to offer these timeless prints from truly remarkable artists with innovative ideas and vision. 
</div><br /><div>With an Alexander Girard or Verner Panton designer print, you are not only getting top-shelf fabric, you are taking home a piece of art and history as well. </div><br /><br /><div><em>Check out our <a href="">Verner Panton fabric </a>collection here!<br />Check out our <a href="">George Nelson fabric </a>collection here!<br />Check out our <a href="">Alexander Girard fabric </a>collection here!<br />Check out our <a href="">Charles and Ray Eames Fabric </a>collection here!</em></div><br /><br /><br /><div>Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div></div></div>jandofabrics"There's No Place Like J&O." Wizard of Oz Fabric. In the 1939 American musical fantasy film <strong><em>The Wizard of Oz</em></strong>, Dorothy is accompanied<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a> by a brainless scarecrow, a heartless tin man and a cowardly lion on her way to the Emerald City in search of the magical remedy that would fill the self-perceived voids within her and her new friends, and transport her back home where a loving and protective family awaited her.<br /><div><div><div><div><div><br /><div>Each character believed that what they desired to obtain most for themselves could only<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a> be granted and given to them by another. 
When in truth, each individual was whole and complete within themselves….if only they believed.</div><div> </div><div>The Wizard of Oz became one of the most beloved films of all time, with the movie’s theme song, ‘Over the Rainbow’, becoming a memorable song of inspiration and hope for the dreams and aspirations of Americans around the country<a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a> .</div><div> </div><div>And now this wonderfully symbolic and culturally significant work of art can be yours to fashion and recreate with the same imagination that spirited this wonderful movie, as only you can. </div><div> </div><div>The Wizard of Oz has history and tells a story, encouraging a strong mind, a giving heart and a <a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" height="74" alt="" src="" width="101" border="0" /></a>courageous nature respectively. </div><div><br />What story does the fabric you have on hand tell about you?<br /><br />And how does it symbolically reflect various aspects of your spirit and personality?<br /></div><div><em>Stay tuned</em> .......</div><div><br />To find out where this path leads, just follow the yellow brick road.<br /></div><div>What your favorite selections say about you may be deeper than you know.<br /></div><div>In the meantime, you don’t have to wait on a fairy princess and ruby reds to make your fabric wish come true; with the click of a button you can check out all the magical fabric we have to offer right here at J&O.</div><br /><div><br /></div><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /></div><br /><div><br /><div><em>Check out our <a href="">Wizard of Oz fabric </a>collection here!<br />Check out our Novelty <a href="">cotton fabric </a>here!<br />Check out all our <a href="">new fabrics </a>here!</em> </div><br /><br /><br /><div>Posts by J&O <a href="" 
rel="tag">Fabrics Store</a></div></div></div></div></div></div>jandofabrics Licensed Tom & Jerry Fabric Prints!<div><br /><div><a href=""><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a><br /><div><div><br /><div>In the game of love, there are those who like the chase. Then, there are those who like to be chased. Never was a love affair so revered and watched with curious amazement as that of our famous cat and mouse duo of Tom & Jerry. </div><br /><div>After almost 65 years, they are still playing the game they made famous….the game of cat and mouse. And they are playing it just for you.</div><br /><div>Hanna-Barbera’s famous cartoon characters of Tom & Jerry are now available on 100% cotton fabric for our nostalgic viewing pleasure. Each print captures a scene of love vs. war that these two became known for, as we watched day in and day out for the cat to not only catch, but eat the mouse he so aggressively sought to destroy. But through all the mayhem and destruction, Tom never did. And what seemed even stranger was that when the two finally did find themselves in the stranglehold embrace of one another, they almost seemed, well, ….friendly.</div><br /><div>So why did Tom chase Jerry? Was it a false sense of duty instilled by his full-figured, overly stressed and frustratingly unidentifiable owner of earlier years? Was it the response to a normal feline/rodent enmity? Or were they merely being paid to act out these violent & theatrical gags that never produced blood despite their mutating effects, for the entertainment of curious and bored little boys and girls looking to live out their frustrations of life vicariously through the maniacal duo? </div><br /><div>As children, we came to the simple conclusion that this is just what cats and mice did. They chased, to chase another day. Tom was clever, yet stupid, and Jerry was an instigator who liked taunting Tom. 
And it was funny and it was make-believe. And in the end, they were friends just doing their job…no harm, no love lost. </div><br /><div>As adults we look back and laugh with a sigh of relief for all the deadly reenactments we didn’t choose to commit on siblings, and the nostalgic memories of a simpler day left behind.</div><br /><div>The only question remaining: Who did you vote for, Tom or Jerry?</div><br /><div></div><div><em>Check out our new </em><a href=""><em>licensed cartoon fabric </em></a><em>here.</em></div><br /><div><em>Check out our selection of cotton </em><a href=""><em>novelty fabric </em></a><em>here!</em></div><br /><div></div><br /><br /><div><a href=""><img style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a></div><a href=""><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /><br /><br /><div>Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div></div></div></div>jandofabrics"The Raving Reds of the 2008 Oscars!"<a href=""><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /></a> The carpet wasn’t the only one wearing red at this year’s Oscars. Notable film and television favorites came dressed to impress in what some say was just what the Oscars needed to jolt the dreary energy of a lingering winter, and the weighted residue from the pending writers’ strike.<br />The lovely ladies of stage and screen came out to warm the blood of men and stimulate the senses of all in their one-of-a-kind, custom-made and right-off-the-runway designer gowns from such fashion icons as Valentino, Galliano, Escada and Kevan Hall. 
From the legendary <em>Ruby Dee</em> , suited in a ruby red satin gown complete with rhinestone belt, to newcomer <em>Miley Cyrus</em>, in a red chiffon bow-backed number, it was clear that red was the undisputed color of choice, and these divas worked their fashionable garments as only they could.<br /><div><br /><br /><div><em>Check out our collection of <strong><a href="">taffeta</a></strong> here!<br />Check out our collection of <strong><a href="">sequined fabric </a></strong>here!<br />Check out our selection of <strong><a href="">chiffon</a></strong> here!<br /></em><br />Actress <em>Helen Mirren</em> donned a garnet satin dress, while <em>Anne Hathaway</em> adorned herself in a crimson red satin one shoulder by Marchesa complete with rosette embellishment. <em>Katherine Heigl</em> looked like she walked right off the set of a Harlequin romance in a beautiful silk georgette column dress with a draped and pleated bodice. The red affair was completed with a haute couture red silk taffeta bustier gown by Galliano that was strutted in true runway style by none other than super model icon herself, <em>Heidi Klum.</em> </div><br /><br /><div>With prom season right around the corner, and J&O right at your fingertips, you can take a little bit of creative inspiration and glamour from the red carpet today!</div><br /><br /><div><em>Check out our full section of <a href=""><strong>dress fabric</strong> </a>here!</em></div><br /><div><em>Check out our full section of <a href=""><strong>decorative fabric</strong> </a>here!</em></div><br /><div><em>Check out our full section of <a href=""><strong>satin fabric</strong> </a>here!</em></div><br /><br /><br /><div></div><img style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="" border="0" /><br /><br /><br /><div><br /><br />Posts by J&O <a href="" rel="tag">Fabrics Store</a></div></div>jandofabrics Street to the Rescue!<div><div><br /><br /><div><a href=""><img style="FLOAT: left; 
MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> Where would we be without Sesame Street? Probably in our graves.<br /><div><div><br /></div><div>My father once thanked the producers of Sesame Street for saving the lives of many children, including his own….literally. He was half serious, half joking. He spoke these words on behalf of parents nationwide who undoubtedly would have grown a few more gray hairs and raised their hands to fall upon a few more tender rear ends had it not been for the regularly scheduled, educational and highly interactive broadcasting of this wonderful children’s television program.</div><div><br /></div><div>Weekday & Saturday mornings, children from every nationality and background across America could be found glued to the tube, singing along with their favorite Sesame Street characters in the customary call and response fashion that Grover, Oscar and now Elmo encouraged. </div><div><br /><img style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" />With the click of a button, crying infants found instant gratification in Maria’s soft ABC lullabies. The terrible-twos toddlers put their home-destroyer gadgets away and suddenly became the sweetest, most focused angels to roam the living room. Even moms & dads could be found learning Spanish words for water, or joining their offspring in finding which of these things just didn’t belong here.<br /><br /></div><div>So in an ode to the good ole days, the lives of the little ones who have absolutely no idea just how intricate their entertaining TV show really was to their
http://feeds.feedburner.com/JAndOFabrics
Using IronPython for Dynamic Expressions. We recently had this question posted to our forums over at LVS:

Dear Forum Experts: I am looking for a very specialized solution. I have various items which I store in a table in a relational DB. I would like to do a custom calculation, specific for each item at its instance. Because the calculation is specific to the item, and there are so many items, I would like to store the calculation formula in a relational DB. The problem is converting the formula string into a real programming command and actually performing the calculation. I do not want to use Excel or additional software, in order to gain calculation speed. E.g.:

ItemID = 5001, ItemSize = "a - b"
ItemID = 5002, ItemSize = "a - 2*b"
ItemID = 5003, ItemSize = "a + b"

So, ItemSize is actually the formula expression that would calculate various instances of the a and b variables ... I have tried this:

int a = 10;
int b = 5;
string formula = "a + b"; // This comes from ItemSize of the DB, SQL, etc.
int Result = a + b; // This is a second line for test only - hard coded...
int CalcResult = int.Parse(formula); // I wish this was working ...
MessageBox.Show(Result.ToString()); // This works ...
MessageBox.Show(CalcResult.ToString()); // Never got that far.

The result will be stored in a different DB with the instances of a and b. Could you please post any information on how I should approach this problem. Thanks a lot.

Several options immediately came to mind: code up a simple expression interpreter, evaluate the expression with dynamic SQL (yuck), or use lightweight code gen. Then I remembered this little thing I saw at last year's PDC called IronPython. Solving this problem with IronPython was "like butta". 
using System;
using System.Collections.Generic;
using System.Text;
using IronPython.Hosting;

namespace PythonDemo
{
    class Program
    {
        delegate int MyExpressionDelegate(int a, int b);

        static void Main(string[] args)
        {
            PythonEngine pe = new PythonEngine();
            MyExpressionDelegate expression = pe.CreateLambda<MyExpressionDelegate>("a + b");
            int a = 10;
            int b = 5;
            int c = expression(a, b);
            Console.WriteLine(c);
        }
    }
}

That's all there was to it! The API for the PythonEngine was very intuitive. I could immediately see where and how I could integrate this with any number of applications that I've worked on in the past. Tip of the hat to the IronPython guys! Now I haven't tested this against a simple interpreter, but I would imagine that as long as you are smart and keep a cache of the expressions, and don't re-parse them every time, it would perform just as well as any interpreted solution, if not better. Just follow the make it work, make it right, and make it fast model and you'll be ok. I wonder if this would also be possible by referencing the PowerShell runtime. I'll have to take a look at that next and see how it compares. P.S. Microsoft, if you're listening, please include IronPython in the Orcas/NETFX3.5 release! :) I'd love to see IDE support for Python scripts and such.
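The caching idea at the end — parse each formula once and reuse it — can be sketched in plain Python as well. This is an illustrative stand-in, not the IronPython hosting API; `compile_formula` and `evaluate` are made-up names for the sketch:

```python
# A minimal sketch of caching compiled formulas, assuming plain Python
# rather than the IronPython PythonEngine used in the post.

_cache = {}

def compile_formula(formula):
    """Compile an 'a + b'-style formula once and reuse the code object."""
    code = _cache.get(formula)
    if code is None:
        code = compile(formula, "<formula>", "eval")
        _cache[formula] = code
    return code

def evaluate(formula, a, b):
    # Empty __builtins__ so the formula can only see a and b.
    return eval(compile_formula(formula), {"__builtins__": {}}, {"a": a, "b": b})

# The ItemSize formulas from the forum question:
print(evaluate("a - b", 10, 5))    # 5
print(evaluate("a - 2*b", 10, 5))  # 0
print(evaluate("a + b", 10, 5))    # 15
```

As in the post's advice, the win comes from the cache: repeated calls with the same formula skip the parse step entirely. (For untrusted formulas a real expression parser is safer than `eval`, even with restricted builtins.)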
http://weblogs.asp.net/dfindley/Using-IronPython-for-Dynamic-Expressions_2E00_
Any guidance on when to use self?

Hi, I see some code in a strategy as follows:

def __init__(self):
    # To control operation entries
    self.orderid = None

    # Create SMA on 2nd data
    sma = btind.MovAv.SMA(self.data1, period=self.p.period)

    # Create a CrossOver signal from close and moving average
    self.signal = btind.CrossOver(self.data1.close, sma)

I wonder why sma isn't prefixed by self. while signal is. Is there any guidance telling us when to use self or not?

When to use self is a general Python coding issue; you can Google or YouTube it to learn its application. If you get stuck after that, let us know.

Got it. I misunderstood something stupidly.
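The rule of thumb behind that answer: keep a value as a plain local when only the current method needs it, and store it on `self` when another method must read it later. A minimal sketch (plain Python, not backtrader — `moving_average` and `crossover` are trivial stand-ins) shaped like the snippet above:

```python
def moving_average(data, period):
    # Stand-in: average of the last `period` values.
    return sum(data[-period:]) / min(period, len(data))

def crossover(data, sma):
    # Stand-in: +1 if the latest value is above the average, else -1.
    return 1 if data[-1] > sma else -1

class Strategy:
    def __init__(self, data, period=3):
        # sma is only needed while building the signal, so a local is fine.
        sma = moving_average(data, period)
        # signal is read later in next(), so it must live on self.
        self.signal = crossover(data, sma)

    def next(self):
        # self.signal survived __init__; a local like sma did not.
        return self.signal

s = Strategy([1, 2, 3, 4, 5])
print(s.next())            # 1
print(hasattr(s, "sma"))   # False: the local vanished with __init__
```

So in the original snippet, `sma` is scaffolding for building `self.signal`, which is the only value the strategy needs to keep around.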
https://community.backtrader.com/topic/2737/any-guidance-on-when-to-use-self
I am a uni student just starting out with Java and getting more confused by the day. Can someone tell me what I have done wrong? It should input all of the strings and place them in the appropriate place in the bottom sentence.

import java.util.Scanner; // Needed for the Scanner class

/**
 * This program consists of just a single class.
 */
public class Words
{
    public static void main(String[] args)
    {
        // Create a Scanner object to read input.
        Scanner keyboard = new Scanner(System.in);

        // Output the program's purpose
        System.out.println("This is the `Word Game'\n");

        // Get the first name of your unit coordinator
        System.out.print("Please enter the first name of your unit coordinator: ");
        String coordinator = keyboard.next(); // name of the coordinator

        // Get your first name
        System.out.print("Please enter your first name: ");
        String name = keyboard.next(); // your name

        // Get the name of a piece of food
        System.out.print("Please enter the name of a piece of food: ");
        String food = keyboard.next(); // the name of the food

        // Get a number between 37 and 42
        System.out.print("Please enter a number between 37 and 42: ");
        String number = keyboard.next(); // your number

        // Get an adjective
        System.out.print("Enter an adjective: ");
        String adjective = keyboard.next(); // your adjective

        // Get a colour
        System.out.print("Please enter a colour: ");
        String colour = keyboard.next(); // your colour

        // Get the name of an animal
        System.out.print("Please enter the name of an animal: ");
        String animal = keyboard.next(); // an animal's name

        // Display the full paragraph with the string data
        System.out.println("Dear " + coordinator + ", I am sorry that I am unable to turn in my assignment on time. First, I ate a rotten " + food + ", which made me turn " + colour + " and extremely ill. I came down with a fever of " + number + " degrees. Next, my " + adjective + " pet " + animal + " must have smelled the remains of the " + food + " on my assignment, because he ate it. I am currently rewriting my assignment and hope you will accept it late. Sincerely, " + name);
    } // end of main
} // end of class

Edited by peter_budo: Keep It Organized - For easy readability, always wrap programming code within posts in [code] (code blocks)
https://www.daniweb.com/programming/software-development/threads/265336/where-am-i-going-wrong
Introduction

What will we do in this article?

Step 1:- Ensure you have things in place
Step 2:- Create a web role project
Step 3:- Specify the connection string
Step 4:- Reference namespaces and create classes
Step 5:- Define partition and row key
Step 6:- Create your 'datacontext' class
Step 8:- Code your client
Step 9:- Run your application

We will create a simple customer entity with customer code and customer name, add the same to Azure tables, and display the same on a web role application. We need to add a setting named 'connectionstring'. We also need to specify where the storage location is, so select the value and select 'Use development storage' as shown in the below figure. Development storage means your local PC, which is currently where your Azure fabric is installed. If you open the 'ServiceConfiguration.cscfg' file you can see the setting added to the file. In order to do Azure storage operations we need to add a reference to the 'System.Data.Services.Client' DLL. Once the DLLs are referenced, let's reference the namespaces in our code as shown below. Currently we will store a customer record with customer code and customer name in tables. So for that we need to define a simple customer class, 'clsCustomer'. This class needs to inherit from 'TableS. The next step is to create your data context class, which will insert the customer entity into Azure table storage. Below is the code snippet of the data context class. The first noticeable thing is the constructor, which takes in the location of the credentials. The second is the 'IQueryable' interface, which is used by the cloud service to create tables in the Azure cloud service. In the same data context we have created an 'AddCustomer' method which takes in the customer entity object and calls the 'AddObject' method of the data context to insert the customer entity data into Azure tables. The next step is to create the table in the 'OnStart' method of the web role. 
So open the 'webrole.cs' file and put the below code in the 'OnStart' event. The last code, enclosed in curly brackets, gets the configuration and creates the table structure. The loop below checks whether the customer entity was inserted into the table.

// Loop through the records to see if the customer entity is inserted in the tables
foreach (clsCustomer obj in customerContext.Customers)
{
    Response.Write(obj.CustomerCode + " " + obj.CustomerName + "");
}

It's time to enjoy your hard work, so run the application and savor your success. You can get the source code of the above sample from the bottom of this article.
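One point from Step 5 worth a concrete illustration: within an Azure table, the pair (PartitionKey, RowKey) uniquely identifies an entity. The sketch below models just that addressing rule in memory — it is plain Python, not the Azure SDK, and `MemoryTable` plus the sample customer data are made up for illustration:

```python
# In-memory model of Azure Table Storage addressing: within one table,
# (PartitionKey, RowKey) must be unique. Illustration only, not the SDK.

class MemoryTable:
    def __init__(self):
        self._rows = {}

    def add(self, entity):
        key = (entity["PartitionKey"], entity["RowKey"])
        if key in self._rows:
            raise KeyError("entity already exists: %r" % (key,))
        self._rows[key] = entity

    def get(self, partition_key, row_key):
        return self._rows[(partition_key, row_key)]

table = MemoryTable()
table.add({"PartitionKey": "Customer", "RowKey": "C001", "CustomerName": "John"})
print(table.get("Customer", "C001")["CustomerName"])  # John
```

A good partition key groups entities that are queried together; the row key then only has to be unique within that partition.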
http://www.dotnetspark.com/kb/1374-9-simple-steps-to-run-your-first-azure-table.aspx
Uncyclopedia:UnSignpost/Press Room/4 From Uncyclopedia, the content-free encyclopedia Oh yeah, you should probably include This at some point. – Sir Skullthumper, MD (criticize • writings • SU&W) 03:05 Jul 06, 2008 - Add it yourself you lazy bastard. In case you havent noticed, we havent sent the thrusday paper cause nobody but me has writen anything, and i dont like to write much. FUCK DAMN! 76.110.94.210 03:31, 6 July 2008 (UTC) - I'll write something, maybe. where can I see the issue that is being worked on/post my contributions? -- wet Ape (negate) (Riot Porn) 20:38, 7 July 2008 (UTC) - The latest issue is here. I've kinda added more stuff and got it ready to go this week, I hope, but please roll your sleeves up and get stuck into next week's as much as you want! The easiest way to find the latest issue is probably to use Special:prefixindex, and specify namespace Uncyclopedia and prefix UnSignpost, like this, then look for the latest date. --UU - natter 09:29, Jul 9 I rather liked... Olipro's Ban summary; - (Block log); 18:49 . . Olipro (Talk | contribs) (blocked User:70.173.54.200 with an expiry time of 1 week: This is a penis, this is you) Any chance it can be pencilled into the according section? (Bonner) (Talk) Jul 11, 17:53 Fare Hike? OMG FARE HIKE!!!!!!!!!!!!!!111111111 So there's a rumour going around that you guys are inreasing the fare for the UnSignPost? I already pay 0.00000000001 cents for the virtual paper based on internet costs, and I hear you guys want to include another 0.000001 cents on top of that. Is this true?! If so, that is an outrage! -:01, 13 July 2008 (UTC) Article Fail - (Huff log); 00:01 . . Mhaille (Talk | contribs) (huffed "Big borther": content was: 'big brother is a reality tv show which after about 8 weeks everybody has done it with everybody' (and the only contributor was '131.217.6:02, 15 July 2008 (UTC) Porn and that? What about this and the inevitable restoring of the cosmic balance? 
16:34 17 July 2008 service offering i am interested in becoming a contributor for your dealie. furthermore, i suggest adding a horoscope section, and i volunteer this page to be the source of said section (maybe one select sign for each issue, similar to the blurb appearing on the right side of the UnNews page?) furtherfurthermore, i am back for good, jerks! -- 02:20, 21 July 2008 (UTC) - Cool, always happy to have more writers involved, particularly those who don't call us jerks. Oh, hang on... Anyway, feel free to write something and drop it in here - I'd be grateful for the help (as would the original editors, who are taking, er, an extended editorial break from UnSignpost journalism. Yeah, that's what they're doing. As to the horoscope, yeah, I might drop it in. Why don't you visit the next issue and drop it in somewhere as you envisage it:42, Jul 21 - sounds like a plan. i'll be adding content to User:Gerrycheevers/usp, feel free to take a look at it and use any content there if you're looking for material. -- 19:57, 21 July 2008 (UTC) excellent hooray! i'm a journalist! i'd love to continue helping you out, UU. might i request the title of junior reporter? - 16:43, 24 July 2008 (UTC) - Well, technically I have no title either, being as Cajek And Skullthumper are still nominally the editors. So either: 1. Go ask them. Or: 2. Sure, knock yourself out. And I'll call myself editor-by-default or something. Or maybe I won't. Anyway, please do keep doing stuff if you want to Gerry, glad of all the help there:15, Jul 24 Olipro strikes again and news story (Block log); 14:58 . . Olipro (Talk | contribs) (blocked User:Cajek with an expiry time of 69 seconds) Also, you guys got to have something in regards to the deleted Rouge the Bat article which was...err...uh....very...uh...hentai-ish? - 15:00, 28 July 2008 (UTC) - OK, well, you write something about it, and if it doesn't suck, it goes in. Suck are the stringent quality control processes here at the UnSignpost. 
(I so didn't see the Rouge the Bat thing, so anything I write about it would be short, to say the:07, Jul 28 - RC stalker ahoy! If you want, I can restore it to some subpage or something...it'd take about 30:31, Jul 28 WE'RE ALL GONNA DIE! Yes, it's true. Death is inevitable for all users who have logged in recently. Apparently, a rouge admin created a computer virus so potent you actually contract testicular cancer. Real cancer. Like, In real life. Seriously. And how does one contract the virus? Our resident MD warns us that the simple act of logging in to Uncyclopedia will cause one to contract the fatal disease. "Yes, I know cancer isn't a disease nor a virus" said the doctor "but still, it sucks. I mean - it's cancer - IN YOUR BALLS. Come on.". Cancer survivor Lance "Uniball" Armstrong had this to say: "Yeah, it totally sucks." So how exactly does on know when they have. - Other nasty stuff. Make sure to constantly check your testicles by feeling them for irregularities with your fingers. ... That's right. ... Just feel 'em up ... nice ... Ahem! Uh... Simple vigilance is a big help. The best time to check is after a hot shower, when the scrotum is looser. And what of the female users? No, not even the three of them are safe. It has been proven that even female users can contract testicular cancer. How, you ask? Well, by a miracle of God, the female will grow testicles, that will then become "cancerfied" (or "cancer-ific", if you prefer.) The only users who are safe from this horrible plague are IP addresses, as they lack testes and the ability to grow any. Remember kids, no matter how much Dr. Health, Esq. tells you cancer is great, don't believe him. Oh? ... What's that? ... I said something about dying? ... Oh. ... Well, if one of your testicles was three times bigger than the other and your semen was filled with blood, would you not kill yourself? That's right... - -- REGRETTENENBAUMIS DEAD TALK! 
18:54, 29 July 2008 (UTC) 7/30 edition hey UU, i'm working on a shortish piece about the whorehouse. it should be done in a few hours, you can check it here, and throw it in if you don't think it will churn up too much drama. will that be enough for this week, or should i try to find material for another article? 14:57, 30 July 2008 (UTC) - Sayeth UU: a million thanks Gerry! I'm racing against time to do anything today - work = massive stress and it's pre-season training for my team this evening. Anyway, I won't have time to do much more on the USP this week, so when you're done, just slam it in - I trust you to do good stuffs. You could either write something else or pop the above in (possibly edited a little for brevity?) Oh, and a biopic of the week would be good, if you can - they don't have to be awesome - look at the ones I wrote! Let THEDUDEMAN know when it's ready, and go for it. Further thanks, and now I must dash! -:03, Jul 30
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:UnSignpost/Press_Room/4
This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.

On 2/19/08, Mike Welsen <mikewelsen@hotmail.com> wrote:
> Does anyone know why assignment operators are not allowed for
> union class members?

The compiler needs to synthesize an assignment operator for the union itself, but since the union's active field is not known, the compiler cannot know which field to copy when copying the union. So the compiler cannot do any better than copying all fields simultaneously, which means using something like memcpy. And so, the C++98 standard makes types with non-trivial assignment operators illegal as union members.

> Other compilers accept this code, but I can't compile the following
> code with GCC.

Those other compilers are in error.

> (error: member 'A C::::::aa' with copy assignment operator
> not allowed in union)
>
> class A {
> public:
>     int a;
>     A& operator=(const A&){return (*this);}
>     int AFunc(void){}
> };
>
> class B {
> public:
>     int b;
> };
>
> class C {
> public:
>     C(){}
>     union {
>         struct {A aa;}; // builds with warning if level4 is set
>         B bb;
>     };
> };

The C++0x standard will add a new feature that allows you to have non-trivial assignment operators for union fields. The catch is that you will have to explicitly code the surrounding assignment operator; the compiler cannot figure it out.

-- Lawrence Crowl
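To illustrate the C++0x (now C++11) feature Crowl describes, here is a minimal sketch of a discriminated union. The `Tagged` and `Variant` names are invented for this example, and it assumes a C++11 (or later) compiler:

```cpp
#include <cassert>
#include <new>
#include <string>

// A member type with a non-trivial copy assignment operator:
// ill-formed as a union member under C++98, allowed under C++11.
struct Tagged {
    std::string label;
};

// The enclosing class must track the active member itself and
// hand-write operator=, because the compiler cannot know which
// field to copy.
struct Variant {
    enum Kind { INT, TAGGED } kind;
    union {
        int i;
        Tagged t;   // non-trivial member: legal only since C++11
    };

    Variant() : kind(INT), i(0) {}
    explicit Variant(const std::string& s) : kind(TAGGED), t{s} {}
    Variant(const Variant& other) : kind(INT), i(0) { *this = other; }

    Variant& operator=(const Variant& other) {
        if (this == &other) return *this;
        if (kind == TAGGED) t.~Tagged();   // end the old member's lifetime
        kind = other.kind;
        if (kind == TAGGED)
            new (&t) Tagged(other.t);      // start the new member's lifetime
        else
            i = other.i;
        return *this;
    }

    ~Variant() { if (kind == TAGGED) t.~Tagged(); }
};
```

Because `Tagged` has a non-trivial assignment operator, the union's implicit special members are deleted, so `Variant` must supply its own default constructor, copy operations and destructor, dispatching on the discriminant each time: exactly the bookkeeping the compiler cannot synthesize for you.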
http://gcc.gnu.org/ml/gcc-help/2008-03/msg00051.html
Chronological Thought: a collection of notes and musings on technical issues by David Savage

& The Cloud (Part 2)

This is the second blog entry in a series documenting the underlying points I made in my recent talk at the <a href="">OSGi Community Event</a> in London. Entitled "OSGi And Private Cloud", the slides are available <a href="">here</a> and the agenda is as follows:<br /><ul><li>Where is Cloud computing today? (<a href="">Part 1</a>)</li><li>Where does OSGi fit in the Cloud architecture?</li><li>What are the challenges of using OSGi in the Cloud?</li><li>What does an OSGi Cloud platform look like?</li></ul>In this section of the talk I look at where OSGi fits into the Cloud architecture. However, as the community event was co-hosted with <a href="">JAX London</a>…<br /><h3 class="Header">OSGi: A Quick Review</h3>I've been working with Richard Hall, Karl Pauls and Stuart McCulloch on writing <a href="">OSGi In Action</a>, which explains OSGi from first principles to advanced use cases, so if you want to know more that's a good place to look. However, here I'd like to give my elevator pitch for OSGi, which would go something like this...<br /><br />In essence, OSGi standardises three things:<br /><ul><li>Modules - the building blocks from which to create applications</li><li>Life cycle - control when modules are installed or uninstalled and customise their behaviour when they are activated</li><li>Services - minimal coupling between modules</li></ul>You might say that none of these are new ideas, so why is OSGi important? The key is in the standardisation of these fundamental axioms of Java applications. 
Instead of every software stack having a new and inventive way of wiring classloaders together, booting components, or connecting component A to component B, OSGi provides a minimal flexible specification that allows us to get interoperability between modules and lets developers get on with the interesting part of building applications.<br /><h3 class="Header">An Uncomfortable Truth</h3>To see where OSGi fits into the Cloud story it's worth taking a brief segue to consider a point made by <a href="">Kirk Knoernschild</a> at the OSGi community event in February this year. Namely that we are generating more and more code with every passing day:<br /><ul><li>Lines of code double every 7 years</li><li>50% of development time spent understanding code</li><li>90% of software cost is maintenance and evolution</li></ul>By 2017, we'll have written not only double the amount of code written in the past 7 years but more than the total amount of code ever written combined! Object Orientation has helped in encapsulating our code so that changes in private implementation details do not affect consumers. But in fact OO turns out to be just a stop gap and it is reaching the limits of its capabilities. 
If you refactor public objects or methods you still need to worry about who is consuming these and without modules this can be a hard question to answer.<br /><br /><h3 class="Header">Types of Scale</h3>There are three measures of scale that I think are of relevance to this discussion of OSGi and the Cloud:<br /><ul><li>Operational scale - the number of processors, network interfaces, storage options required to perform a function</li><li>Architectural scale - the number and diversity of software components required to make up a system to perform a function</li><li>Administrative scale - the number of configuration options that our architectures and our algorithms generate</li></ul>In fact, I think we've got pretty good patterns by now for dealing with the operational scale. As we increase the number of physical resources at our disposal, this drives the class of software algorithms required to perform a function. To pick a random selection, <a href="">Actors</a>, <a href="">CEP</a>, <a href="">DHTs</a> and <a href="">Grid</a> are just some of the useful software patterns for use in the Cloud. However, I think architectural and administrative scale is often less well managed.<br /><br />… the <a href="">paradox of choice</a>.<br /><br />All this brings me to...<br /><h3 class="Header">OSGi Cloud Benefits</h3>In <a href="">Part 1</a> of this series of blogs I mentioned that the <a href="">NIST</a> definition of a cloud includes the statement that: <i>"Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability"</i>. To my mind OSGi has these bases covered.<br /><br />But why should cloud software have these features?<br /><br />
<br /><br />OK interesting, but you might say that "TechnologyX (pick your favourite) can also provide these features, so really sell me on the OSGi cloud benefits". In which case I propose that there are <a href="">four</a> additional benefits of OSGi with respect to Cloud software which I'll deal with in turn:<br /><br /><b>Dynamic</b>:.<br /><br /><b>Extensible</b>:.<br /><br /><b>Lightweight</b>::<br /><ul><li>if you need to get diagnostics information out of the software, only deploy the diagnostics components for the time that they are needed - for the rest of the time run lean</li><li>if you need to scale up a certain component's processing power, swap an in-memory job queue for a distributed processing queue and when you're done swap it back again. </li></ul><br /><b>Self describing</b>:; <a href="">Nimble</a>, <a href="">OBR</a> and <a href="">P2</a>. This simplifies deployments by allowing software engineers to focus on what they <i>want</i> to deploy instead of what they <i>need</i> to deploy.<br /><br /.<br /><h3 class="Header">OSGi Cloud Services</h3>To conclude this post, assuming I've managed to convince you of the benefits of OSGi in Cloud architectures, here are some ideas for potential cloud OSGi services (definitely non exhaustive): <br /><ul><li>MapReduce services - Hadoop or Bigtable implementations?</li><li>Batch services - Plug and play Grids?</li><li>NoSQL services - Scalable data for the Cloud!</li><li>Communications services - Email, Calendars, IM, Twitter?</li><li>Social networking services - Cross platform widgets?</li><li>Billing services - Making money in the Cloud! </li><li>AJAX/HTML 5.0 services - Pluggable UI architectures? </li></ul>These would enable developers to start building modular, dynamic, scalable applications for the Cloud and are in fact pretty simple to achieve if there's the will power to make it happen.<br /><br /.<br /><br />So all good right? 
Well, there are still of course challenges, so in the next post I'll look at some of these and discuss how to overcome them. In the meantime, I'm very interested in any feedback on the ideas found in this post.<br /><br />Laters

David Savage

& The Cloud (Part 1)

I recently attended the <a href="">OSGi Community Event</a> where I gave a talk entitled "OSGi And Private Cloud", the slides for which are available <a href="">here</a>. However as has been pointed <a href="">out</a>, if you watch the slide deck they're a little on the zen side, so if you weren't at the event then it's a bit difficult to guess the underlying points I was trying to make.<br /><br />To address this I've decided to create a couple of blog entries that discuss the ideas I was trying to get across. Hopefully this will be of interest to others.<br /><br />In the talk the agenda was as follows:<br /><ul><li>Where is Cloud computing today?</li><li>Where does OSGi fit in the Cloud architecture? (<a href="">Part 2</a>)</li><li>What are the challenges of using OSGi in the Cloud?</li><li>What does an OSGi cloud platform look like?</li></ul>I'll stick to this flow but break these sections up into separate blog entries. 
So here goes with the first section...<br /><h3 class="Header">Where is Cloud computing today?</h3>Ironically Cloud computing is viewed by many as a pretty nebulous technology so before even describing <i>where</i> Cloud computing is, it's possibly useful to define <i>what</i> Cloud computing is.<br /><ul><li>Wikipedia <a href="">defines</a> a Cloud as: "Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid".</li><li>InfoWorld <a href="">defines</a>".</li><li>NIST <a href="">defines</a> "</li></ul>For me all of these definitions seem pretty similar to Utility computing. Again wikipedia <a href="">defines</a> Utility computing as "[sic] the packaging of computing resources, such as computation, storage and services, as a metered service similar to a traditional public utility (such as electricity, water, natural gas or telephone network)". So what is the boundary between Utility computing and Cloud computing? Others have attempted to define Cloud by what it is <a href="">not</a>. I tend to agree with some of these points but not others.<br /><br />So where does this leave us…?<br /><br />Actually I think the NIST definition I referred to above does a good job of describing what Cloud is as long as you read past the first sentence. 
For me the summary of this document is that:<br /><br /><i>Cloud is computation that is: on demand; easily accessed from a network; with pooled resources; and rapid elasticity.</i><br /><br />As a final footnote in the NIST document it mentions that:<br /><br /><i>"Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability".</i><br /><br />This last sentence for me is of fundamental importance when considering the relevance of OSGi to the Cloud.<br /><br /><h3 class="Header">Why Cloud?</h3>So as the previous discussion suggests, the reason for using a Cloud model is it gives users just in time:<br /><ul><li>Processing power</li><li>Storage capacity</li><li>Network capacity.</li></ul>Cloud models are great for small, medium and large organisations.<br /><ul><li>Small organisations benefit from the reduced startup costs of Clouds compared with setting up and provisioning home grown infrastructure for web sites, email, accounting software, etc.</li><li>Medium sized organisations benefit from the on demand nature of Clouds - as their business grows so can their infrastructure</li><li>Large organisations benefit from Cloud due to their shared resources - instead of having to maintain silos of computing infrastructure for different departments they can get cost savings via economy of scale.</li></ul>There are a large number of vendors touting Cloud products, including <a href="">Amazon</a>, <a href="">Google</a>, <a href="">Salesforce</a>, <a href="">Rackspace</a>, <a href="">Microsoft</a>, <a href="">IBM</a>, <a href="">VMware</a> and <a href="">Paremus</a>. 
These products fit into various categories of Cloud, IAAS (Infrastructure as a Service), PAAS (Platform as a Service), SAAS (Software as a Service) and Public or Private Cloud.<br /><h3 class="Header">Cloud Realities</h3>So Cloud seems pretty utopian, right? In fact, despite the promise of Cloud, the realities it delivers are somewhat different.<br /><ul><li>As there are so many vendors, there are also multiple APIs that developers need to code to for simple things they used to do like loading resources</li><li>Depending on the vendor the sort of things you can do in a Cloud are often limited (in Google App Engine you can't create Threads for example)</li></ul>Finally, and this is a factor that affects all Clouds, they are <i>not</i> - despite marketing - <a href="">infinite resources</a> (pdf). Contention and latency are real problems in Cloud environments. The shared nature of Cloud architectures means that SLAs can be severely impacted by seemingly random processing spikes by other tenants. Cloud providers employ many different tactics to minimise these problems but running an application in the Cloud and running it on dedicated hardware is not a seamless transition.<br /><h3 class="Header">Why Private Cloud?</h3><br /><ul><li>Data ownership risks – A bank for example is often extremely reluctant to host private customer details on infrastructure they don't own. 
This can be for legal/regulatory reasons or business intelligence reasons</li><li>Data inertia – I've heard one horror story at a previous <a href="">Cloud Camp</a></li><li>SLA – The contention and latency issues of Clouds can mean that for those businesses that are in a competitive compute-intensive business then any downtime or latency outside of your control can have a major effect on your bottom line.</li></ul>Private Cloud implies all of the on-demand, dynamic, network accessible goodness, but in a controlled environment where the business has direct control of the cloud tenants, so can better control their SLA. A bit like owning a well, or growing your own food, there are costs but also benefits.<br /><br /><h3 class="Header">How Do We Get Here?</h3>This is M51a “The whirlpool galaxy” discovered by Charles Messier in 1774 (and its companion galaxy NGC 5195). <br /><br />I came at computing from the physics angle and when I think of computer software/architecture I tend to think in terms of patterns. A galaxy is just a cloud of gas after all - but there is structure, dynamicity and mechanics that describe its overall behaviour! <br /><ul><li>Clouds are dynamic, resources come and go, there can be gaps in communication caused by latency, there can even be large scale events like data centre collapse. Software that is deployed on them must be able to cope with these dynamics</li></ul><br /><br />In the meantime, I'm very interested in any comments or feedback on any of the ideas discussed here.<br /><br />Laters,

David Savage

Tatooine

<p>So in my last couple of posts I've been showing the power of Nimble. You will have noticed that it is primarily a console environment. 
As such you may be wondering how you can provide your own commands to execute in the <a href="">Nimble</a> shell - <a href="">Posh</a> (Paremus OSGi Shell).</p><p>Posh is an implementation of the command line interface specified in <a href="">RFC-147</a>.<br /></p><p>… the <a href="">Karaf</a> container and the Nimble container from Paremus.</p><p>This gives you some background, so now the standard thing for me to do would be to write a trivial hello world application. But that's no fun, so instead of conforming to the norm I thought it would be more interesting to port the <a href="">Starwars Asciimation</a> work to run in OSGi as an RFC 147 command line interface.</p><br /><p>Yep this is very probably the geekiest post you will ever see... :)</p><p>The first thing we need to do to define our cli is define a class that implements the core functionality as shown below:</p><pre>package org.chronologicalthought;<br /><br />import java.io.BufferedInputStream;<br />import java.io.IOException;<br />import java.io.InputStream;<br />import java.io.InputStreamReader;<br />import java.io.PrintStream;<br />import java.io.Reader;<br />import java.net.URL;<br /><br />public class Starwars {<br /> public void starwars() throws IOException, InterruptedException {<br /> starwars(67);<br /> }<br /><br /> public void starwars(int frameLength) throws IOException, InterruptedException {<br /> URL res = Starwars.class.getResource("/starwars.txt");<br /> if (res == null)<br /> throw new IllegalStateException("Missing resource");<br /> InputStream in = res.openStream();<br /> try {<br /> InputStreamReader reader = new InputStreamReader(new BufferedInputStream(in));<br /> render(reader, System.out, frameLength);<br /> } finally {<br /> in.close();<br /> }<br /> }<br /><br /> private void render(Reader reader, PrintStream out, int
frameLength) {<br /> // ...<br /> }<br />}</pre><p>Here the command provides two methods, starwars() and starwars(int), and prints the individual frames from the "starwars.txt" file embedded in our bundle to System.out.</p><p>Wait a minute, you might be thinking. Where's the API to the CLI? Well, this is one of the neat things about RFC 147: you don't need to write your code to <em>any</em> particular API.</p><p>The next step is to define an activator that publishes our cli class to the OSGi bundle context.<br /></p><pre>package org.chronologicalthought;<br /><br />import java.util.Hashtable;<br />import org.osgi.service.command.CommandProcessor;<br /><br />import org.osgi.framework.BundleActivator;<br />import org.osgi.framework.BundleContext;<br /><br />public class Activator implements BundleActivator {<br /><br /> public void start(BundleContext ctx) throws Exception {<br /> Hashtable props = new Hashtable();<br /> props.put(CommandProcessor.COMMAND_SCOPE, "ct");<br /> props.put(CommandProcessor.COMMAND_FUNCTION, new String[] { "starwars" });<br /> ctx.registerService(Starwars.class.getName(), new Starwars(), props);<br /> }<br /><br /> public void stop(BundleContext ctx) throws Exception {<br /> }<br />}</pre><p>This activator publishes the Starwars class with two attributes:</p><ul><li>CommandProcessor.COMMAND_SCOPE - a unique namespace for our command</li><li>CommandProcessor.COMMAND_FUNCTION - the names of the methods to expose as commands in the cli</li></ul><p>The code is available from <a href="">here</a> for those who want to take a look around:</p><p>… <a href="">here</a>. 
So finally let the show commence:<pre>$ svn co<br />$ cd starwars<br />$ ant<br />$ posh<br />Paremus Nimble 30-day license, expires Wed Dec 30 23:59:59 GMT 2009.<br />________________________________________<br />Welcome to Paremus Nimble!<br />Type 'help' for help.<br />[feynman.local.0]% source<br />[feynman.local.0]% installAndStart file:build/lib/org.chronologicalthought.starwars.jar<br />[feynman.local.0]% starwars<br /><br /><br /><br /><br /><br /><br /><br /><br /> presents<br /><br /><br /><br /><br /></pre><p>You can also pass a frame length:</p><pre>% starwars 20</pre><p>To set the frame length as 20 milliseconds.</p><p>Enjoy the show.</p><p>Laters.</p>

David Savage

for my next trick

Just for fun and to demonstrate the power of the Posh (sh)ell environment I decided to knock together the following trivial script to do a "traditional" OSGi bundle file install from a directory:<br /><br /><pre>// create a temporary array for storing ids<br />array = new java.util.ArrayList;<br /><br />// iterate over the files passed<br />// in as argument 1 to this script<br />each (glob $1/*) {<br /><br /> // use the BundleContext.installBundle<br /> // method to install each bundle<br /> id=osgi:installBundle $it;<br /><br /> // store the bundle id for start later<br /> $array add $id;<br />};<br /><br />// iterate over our installed bundles<br />each ($array) {<br /> // use the BundleContext.start method<br /> // to start it<br /> osgi:start $it;<br />};</pre><br />To try this out for yourself or to find out more about Nimble you can look <a href="">here</a>. Once installed, you can run the above script using the following command:<br /><br />posh -k <your bundles dir><br /><br />Where you should replace <your bundles dir> with a path to a directory on your local file system that contains bundles.<br /><br />Hmmm what to blog next...ponders...<br /><br />Laters,

David Savage

OSGi

So I just sent a rather cryptic <a
href="">twitter</a> message with the instructions:<br /><br /><span style="font-family: courier new;">posh -kc "repos -l springdm;add org.springframework.osgi.samples.simplewebapp@active"</span><br /><br />I figure it's probably worth a short note to explain what this is doing given the narrowband aspect of twitter communications.<br /><br />This command is running an instance of the posh (sh)ell which ships with <a href="">Nimble</a>. There are two switch parameters passed to the shell:<br /><br /><span style="font-family:courier new;">-c</span>: Tells posh to execute the command passed in from the unix shell in the posh (sh)ell environment<br /><span style="font-family:courier new;">-k</span>: Tells posh to remain running after the command has completed and open a tty session for user input<br /><br />Now we come to the actual commands:<br /><br /><span style="font-family:courier new;">repos -l springdm</span>: tells posh to load the spring dm repository index into the nimble resolver<br /><br /><span style="font-family:courier new;">add org.springframework.osgi.samples.simplewebapp@active</span>: tells nimble to resolve all dependencies for the spring simplewebapp from its configured repositories.<br /><br />The interesting thing about nimble resolution is that it doesn't just figure out the bundles that need to be installed. It also figures out what <em>state</em> these bundles should be in. 
If you look at the bundles in the nimble container using the command<span style="font-family:monospace;"> </span><span style="font-family:courier new;">lsb</span> you will see that not only are all the bundles installed but certain key bundles have also been activated:<br /><br /><pre>lsb<br />*nimble/com.paremus.util.cmds-1.0.4.jar 00:00 59Kb<br />0 ACTIVE org.eclipse.osgi:3.5.1.R35x_v20090827<br />1 ACTIVE com.paremus.posh.runtime:1.0.4<br />2 ACTIVE com.paremus.posh.shell:1.0.4<br />3 RESOLVED com.paremus.util.types:1.0.4<br />4 ACTIVE com.paremus.nimble.core:1.0.4<br />5 ACTIVE com.paremus.nimble.repos:1.0.4<br />6 ACTIVE com.paremus.nimble.cli:1.0.4<br />7 RESOLVED javax.servlet:2.5.0.v200806031605<br />8 RESOLVED com.springsource.slf4j.api:1.5.6<br />9 RESOLVED com.springsource.slf4j.nop:1.5.6<br />10 RESOLVED com.springsource.net.sf.cglib:2.1.3<br />11 RESOLVED com.springsource.edu.emory.mathcs.backport:3.1.0<br />12 RESOLVED org.springframework.osgi.log4j.osgi:1.2.15.SNAPSHOT<br />13 RESOLVED com.springsource.org.aopalliance:1.0.0<br />14 RESOLVED org.springframework.osgi.jsp-api.osgi:2.0.0.SNAPSHOT<br />15 RESOLVED com.springsource.slf4j.org.apache.commons.logging:1.5.6<br />16 RESOLVED osgi.cmpn:4.2.0.200908310645<br />17 RESOLVED org.mortbay.jetty.util:6.1.9<br />18 RESOLVED org.springframework.osgi.jstl.osgi:1.1.2.SNAPSHOT<br />19 RESOLVED org.springframework.core:2.5.6.A<br />20 RESOLVED org.springframework.osgi.commons-el.osgi:1.0.0.SNAPSHOT<br />21 RESOLVED org.mortbay.jetty.server:6.1.9<br />22 ACTIVE org.springframework.osgi.samples.simplewebapp:0.0.0<br />23 RESOLVED org.springframework.beans:2.5.6.A<br />24 RESOLVED org.springframework.osgi.io:1.2.0<br />25 RESOLVED org.springframework.osgi.jasper.osgi:5.5.23.SNAPSHOT<br />26 RESOLVED org.springframework.aop:2.5.6.A<br />27 RESOLVED org.springframework.osgi.catalina.osgi:5.5.23.SNAPSHOT<br />28 RESOLVED org.springframework.context:2.5.6.A<br />29 ACTIVE 
org.springframework.osgi.catalina.start.osgi:1.0.0<br />30 RESOLVED org.springframework.osgi.core:1.2.0<br />31 RESOLVED org.springframework.web:2.5.6.A<br />32 RESOLVED org.springframework.osgi.web:1.2.0<br />33 ACTIVE org.springframework.osgi.web.extender:1.2.0<br />34 ACTIVE com.paremus.posh.readline:1.0.4<br />35 ACTIVE com.paremus.util.cmds:1.0.4</pre><br />This listing also demonstrates another key feature of nimble. Typing <span style="font-family:courier new;">lsb</span> resulted in the following log line:<br /><br /><pre>*nimble/com.paremus.util.cmds-1.0.4.jar 00:00 59Kb</pre><br />This demonstrates that the nimble container resolved the <span style="font-family:courier new;">lsb</span> command from its repository index and installed it on the fly. In fact if you look at the Nimble download it is only 55K in size. All of the extra functionality is automatically downloaded based on information provided via the nimble index files and traversing package and service level dependencies!<br /><br />To complete this blog post you can browse the simple web app running from nimble by opening:<br /><br /><pre></pre><br />Nimble is available for download <a href="">here</a>.

David Savage

Dev Con 2009 & OSGi Tooling Summit Roundup

Even better for me, people really seemed to get what we (<a href="">Paremus</a>) are about. Last year we were the "RMI guys". This year people we talked to seemed to get genuinely excited about what our product is really about: a flexible, scalable solution to provisioning and managing dynamic distributed OSGi based applications in enterprise environments.<br /><br />I think good tooling solutions that work right the way through the stack are crucial to help new developers through the pitfalls of the new classloader space. 
Unfortunately, tooling support for new developers is pretty disjointed.<br /><br />Hence, the next part of my post...<br /><br />[...] <a href="">Sigil</a>. Prior to OSGi Dev Con Paremus chose to licence Sigil under the Apache licence, as we recognise that tooling is an area where we need support from the community in order to help the community as a whole.<br /><br />On the Friday, after the end of the conference, I and a number of other representatives with interests in the area of development tooling met at an OSGi Tooling Summit hosted by Yan Pujante at <a href="">LinkedIn</a>'s Mountain View offices. The group was pretty large and diverse (as you can see <a href="">here</a>).<br /><br />[...]<br /><br />I had a number of really encouraging conversations with Chris Aniszczyk and Peter Kriens, who work on <a href="">PDE</a> and <a href="">BND</a> respectively, both of which have a lot of crossover with the work I've been doing on Sigil.<br /><br />Chris has just <a href="">twittered</a>.<br /><br />I guess in a perfect world I'd like to be able to support Maven, NetBeans and IntelliJ users as well. Hopefully I'll be able to update you in the next couple of months on progress in this area.<br /><br />Laters, David Savage Behaviours<a href=""><img src="" alt="I'm speaking at EclipseCon 2009" border="0" height="100" width="100" /></a><br /><br />[...] <a href="">here</a>.<br /><br />I guess <em>my</em> main focus for calling the BOF is that I'm very interested in talking to other OSGi developers to see what it is we on the tooling side can do to make our collective jobs easier.
<br /><br />What is it about OSGi development that really frustrates you - and what can tools do to make it easier?<br /><br />[...]<br /><br />Hope to see you there.<br /><br />Laters, David Savage tooling There's a really interesting conversation going on at TSS about <a href="">OSGi and future directions for Enterprise Java</a>.<br /><br />I've posted a reply which I thought was worth reposting here:<br /><br />I think there are two issues with [the approach of repackaging existing modules as OSGi bundles and simply importing/exporting all packages] which really cause headaches going forward; Module vs API dependencies and complex "Uses" graphs.<br /><br />Firstly, module vs API; in most dependency tools such as Maven and Ivy the developer specifies dependencies at the module layer - i.e.<br /><br /><dependency org="org.apache.log4j" name="org.apache.log4j" rev="1.2.15" /><br /><br />But then Spring have added an OSGi version of the module which has a different module id.<br /><br /><dependency org="org.apache.log4j" name="com.springsource.org.apache.log4j" rev="1.2.15" /><br /><br />[...]<br /><br />[...]<br /><br />[...]<br /><br />Secondly, the major thorn in the approach of naively exporting all packages in a module is the complex "uses" information it generates. "Uses" is a flag provided by OSGi on exports to assert that the class space is coherent across multiple bundle import/export hops.<br /><br />[...]<br /><br />I've referred to this as placing barbed wire around sand castles (in most cases). If the modules were more sensibly designed i.e.
only exporting the "really" public code then this problem is much reduced.<br /><br />[...]<br /><br />David Savage up for air Ok so it's been a while since I last posted anything here, mainly because I've been working on a number of different big projects for the last couple of months and this left no time for blog posting.<br /><br />So having come up for air for a few hours at least I thought it'd be a good idea to blog about them and see if I can drum up some interest ;)<br /><br />The major addition to my task list over the past few months has been the <a href="">Sigil project</a>. This is a set of tools to help developers build OSGi bundles. It started off as an Eclipse plugin but about two or three months ago my team mate came along with some really great code to integrate it with Ivy.<br /><br />I believe Sigil is the first tool out there to unify OSGi development in the IDE and server-side build in a common configuration file (this being a good thing as it saves the messy job of keeping other systems in sync).<br /><br />The IDE supports tools such as code complete, quick fixes and tools to analyse OSGi dependencies. I've also built in the idea of repositories (currently either file system or <a href="">OBR</a> but extensible via plugins) which allow the developer to download dependencies on the fly whilst developing by simply adding an import-package or require-bundle statement to their code. Oh and the same repositories can be used in Eclipse and Ivy :)<br /><br />The other big piece of code I've been working on is of course <a href="">Newton</a>. There are no big feature announcements for this release as we've been focussing on making the current platform more and more robust. But we've just made available the <a href="">1.3.1 release</a> :)<br /><br />Anyway that seems like a good amount of detail for the time being.
I'll try to blog some more about some of this stuff soon...<br /><br />Laters, David Savage Is this an application which I see before me? This post has been triggered by two <a href="">interesting</a> <a href="">posts</a> on the topic of what it is to be an application in an OSGi environment. This is something I felt I just had to get stuck in with as it's exactly what we've been working on in the <a href="">Newton</a> project.<br /><br />The way we've tackled this is very similar to the approach suggested by Mirko, except instead of using Spring-DM as the top level element in the model we've picked <a href="">SCA</a>.<br /><br />SCA is a relatively new specification but it gives a vendor-neutral way of describing the service dependencies and architecture of an application running in an enterprise environment.<br /><br />[...]<br /><br />In Newton we associate a composite with a top level (or root) bundle. This bundle then provides the class space for the composite to instantiate its implementation, services and references. Importantly the bundle does not have to contain <span style="font-style: italic;">all</span> of the classes that it needs to function but can use OSGi tools such as Import-Package to achieve modularity at the deployment level.<br /><br />When an SCA composite is installed in a Newton runtime we go through a series of steps:<br /><ol><li>Resolve the root bundle that supplies the class space for the composite.
If this is not the first time the root bundle has been resolved we increment an internal counter<br /></li><li>Resolve and optionally download the bundle dependencies required to satisfy the root bundle against a runtime repository (this includes ensuring that we reuse existing bundles within the runtime - if they were installed for other composites)</li><li>Build a runtime model around the SCA document that controls the lifecycle of the component as references come and go</li><li>Finally when all required references are satisfied (a dynamic process) we publish the services to be consumed by other components in the enterprise.</li></ol>When an SCA composite is uninstalled we go through the reverse process:<br /><ol><li>Unpublish the services and release any references.</li><li>Shut down the implementation and discard our runtime model.<br /></li><li>The bundle root counter is decremented; if it reaches zero then the bundle is no longer required in the runtime and is marked as garbage.<br /></li><li>Finally garbage collect all bundles that are no longer in use, so clearing down the environment.</li></ol>This pattern then forms the building blocks of our distributed provisioning framework that is able to deploy instances of SCA composites across what we term a "fabric" of Newton runtime instances.<br /><br />[...]<br /><br />[...]:<br /><ul><li>how implementations and interfaces are connected<br /></li><li>how remote services should interact via bindings</li><li>how they should scale</li><li>where they should install</li></ul>Hope that's been of interest,<br /><br />Laters, David Savage A New Hope A long time ago in a galaxy far far away...<br /><br />[...]<br /><br />[...]<br /><br />Inevitably these powers drew the attention of those still living within the empire. Initially some sought to discredit the Jedi powers, either through misunderstanding or fear.
But soon others became envious and wanted these powers for themselves.<br /><br />[...]<br /><br />Those in the Jedi council found themselves torn between the short-term promises of wealth offered by the empire and sticking to their ideals, holding out for the long-term riches of a truly flexible Java virtual machine architecture.<br /><br />It is at this point in the story that we find ourselves. Peter Kriens (Obi Wan?) has recently <a href="">blogged</a> about the choices facing the Jedi alliance and argues for the purist ideals to be upheld.<br /><br />Myself, I find myself acting as a trader (or possibly a smuggler - gonna argue for Han but you make your own judgements...) between these two worlds.<br /><br />As a developer and architect of <a href="">Infiniflow</a> I have been directly involved with building a distributed computing infrastructure that seeks to embrace the ways of the Force as championed by the Jedi alliance for a single JVM, but across the enterprise.<br /><br />[...]<br /><br />Whether you believe a word of this, having walked the boundary between the Jedi world and that of the Empire I am acutely aware of the problems associated with integrating tools built by these two communities.<br /><br />[...]<br /><br />When it is impossible to integrate a legacy pattern I'd argue that this is a point when we have to admit that Gödel was right - it is not always possible to please all of the people all of the time (I <a href="">paraphrase</a> but you get the point). You can always delegate legacy cases to a separate JVM and communicate remotely with the old world.<br /><br />If the Jedi council compromise their core ideals for ill-conceived or temporary solutions they risk sending out a mixed and confusing message to those who are new to this technology.<br /><br />[...]<br /><br />Once stepping onto the path to the dark side it is very difficult to turn back, and it ultimately leads to ruin.
(Or cool lightning powers - you decide)<br /><br />I have to give credit to the fantastic blog posts of <a href="">Hal Hildebrand</a> for the Star Wars theme to this blog entry; whether this will become a common theme for my own posts I'm unsure, but it was certainly fun to write.<br /><br />Laters, David Savage To Be(*) Or Not To Be(*) That Is The Question (*) Included in an API bundle.<br /><br />There's been a lively <a href="">debate</a> on the OSGi mailing list over the past couple of weeks surrounding the issue of whether an API should be packaged with its implementation in one bundle or whether it should be split up into two bundles (one for API and one for implementation).<br /><br />[...]<br /><br />[...]:<br /><ul><li>installation simplicity - minimizing the number of bundles needed to run an OSGi application</li><li>runtime simplicity - minimizing the connectivity between bundles in a running OSGi application.</li><li>architectural simplicity - minimizing duplicated packages between bundles used to build an OSGi application</li></ul><br /><br />I'll try to give a quick overview.<br /><br />[...]<br /><br />[...]<br /><br />Another important detail to be considered is that in OSGi it is possible for many bundles to export the same API package and only one will be picked by the runtime to actually provide the classes.<br /><br />[...]<br /><br />My own advice would be to start by assuming that API and implementation are packaged in separate bundles.
The reasoning behind this is based on the following criteria:<br /><ul><li>In general an implementation is likely to depend on more packages than its API<br /></li><li>You can always collapse back to one bundle later if you use import-package vs require-bundle</li><li>If you use a provisioning mechanism such as <a href="">Newton</a> or <a href="">P2</a> (when it's ready) downloading two bundles vs one is handled automagically</li></ul>The benefits of splitting API and implementation are the following:<br /><ul><li>If you are considering making your application distributed or want it to run in a constrained environment you can install the API without having to resolve the implementation dependencies (possibly a <span style="font-style: italic;">big deal</span> in a client/server architecture)<br /></li><li>If you want to restart or reinstall the implementation bundle then this doesn't automatically mean the restart of all client bundles that are using the API from that bundle<br /></li><li>[...]</li></ul>If you start by assuming the API and implementation are separate then you can use the following logic to assess whether you can condense them back to one bundle for the purposes of your architecture:<br /><ol><li>Start by designing your API to depend on as little as possible.</li><li>Make your implementation depend on API packages and any other dependencies it needs to function.<br /></li><li>If after doing this the implementation depends only on API, consider whether the implementation is ever likely to get more complicated.</li><li>If it isn't then you can probably collapse back to one bundle.</li></ol>Of course this can always be done prior to building <i>any</i> bundles if you are good at modelling in your head or on paper etc.<br /><br />Hopefully that's helped some people understand the issues.<br /><br />Laters, David Savage Versioning Is
Complex Or Am I Imagining It? So it seems everyone and his dog are talking about versioning at the moment. Specifically the proposed version numbering systems used in <a href="">OSGi</a> and <a href="">JSR 277</a> and their ability to coexist (or not).<br /><br />For my own part I personally favor the OSGi scheme because to my mind it is simpler and better defined.<br /><br />[...]<br /><br />When a module consumer wishes to specify a version they wish to use they specify either a single version number:<br /><br />1.0.0<br /><br />or a version range:<br /><br />[1.0.0,2.0.0)<br /><br />The first form means any version greater than or equal to 1.0.0. The second means any version from 1.0.0 up to, but not including, 2.0.0.<br /><br />However anyone who has worked in software development for any non-trivial amount of time will know there are still inherent problems in versioning schemes, in that they require developers to correctly mark up when changes to their code affect the compatibility of that code.<br /><br />This is a <span style="font-style: italic;">big deal</span> as, depending on the consumer of the code, even changes as innocuous as a Javadoc comment update can affect the version number.
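The range semantics just described can be sketched in plain Java. This is a toy stand-in written for this post, not the real OSGi implementation (which lives in org.osgi.framework.Version); the class and method names below are invented for illustration, and qualifiers are ignored:

```java
// Toy sketch of OSGi version-range semantics: "[floor,ceiling)" is an
// inclusive floor and exclusive ceiling; a bare version means an
// open-ended floor. Illustrative only - not the real OSGi classes.
public class VersionRangeDemo {
    // "1.2.3" -> {1, 2, 3}; any qualifier beyond the third segment is ignored
    static int[] parse(String v) {
        String[] parts = v.split("\\.");
        int[] n = new int[3];
        for (int i = 0; i < 3 && i < parts.length; i++) {
            n[i] = Integer.parseInt(parts[i]);
        }
        return n;
    }

    static int compare(String a, String b) {
        int[] x = parse(a), y = parse(b);
        for (int i = 0; i < 3; i++) {
            if (x[i] != y[i]) return Integer.compare(x[i], y[i]);
        }
        return 0;
    }

    static boolean inRange(String version, String range) {
        // A bare version like "1.0.0" means [1.0.0, infinity)
        if (!range.startsWith("[") && !range.startsWith("(")) {
            return compare(version, range) >= 0;
        }
        String[] bounds = range.substring(1, range.length() - 1).split(",");
        boolean lowOk = range.startsWith("[")
                ? compare(version, bounds[0]) >= 0
                : compare(version, bounds[0]) > 0;
        boolean highOk = range.endsWith("]")
                ? compare(version, bounds[1]) <= 0
                : compare(version, bounds[1]) < 0;
        return lowOk && highOk;
    }

    public static void main(String[] args) {
        System.out.println(inRange("1.5.0", "[1.0.0,2.0.0)")); // true
        System.out.println(inRange("2.0.0", "[1.0.0,2.0.0)")); // false: exclusive ceiling
        System.out.println(inRange("3.1.4", "1.0.0"));         // true: open-ended floor
    }
}
```

Note how the exclusive `)` ceiling is what lets `[1.0.0,2.0.0)` mean "any compatible 1.x release" without naming a highest patch version in advance.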
There's a really good discussion of the issues surrounding this <a href="">here</a>.<br /><br />As we all know developers are mostly human and so make mistakes, are lazy or are just plain forgetful, so it is inevitable that version numbers will sometimes not be updated when they should have been.<br /><br />After release these badly versioned modules (poison pills) will sit around in software repositories for potentially a very long time causing instabilities in the systems that use them.<br /><br />Stanley Ho's <a href="">suggestion</a> for combating this issue is to allow consumers of modules to specify complex lists which select certain version ranges but exclude certain known bad versions.<br /><br />[...]<br /><br />[...]<br /><br />This is a good thing, as if it is easier for developers to handle version numbering there should be fewer bad versions.<br /><br />[...]<br /><br />I'll prefix the rest of this blog entry by saying my idea is a little crazy, so please bear with me, pat me on the head and say "there, there, back to the grindstone, you'll get over it" :)<br /><br />So enough pontificating, <span style="font-weight: bold;">the idea</span>:<br /><br />It occurred to me that software versions should really carry an imaginary coefficient (i.e. the square root of minus one).<br /><br />[Sound of various blog readers falling off their seats]<br /><br />...<br /><br />What the hell does that mean, I hear you ask.<br /><br />I said it was crazy and I'm only half proposing it as a real (chuckle) solution.
However to my mind it seems more natural to label versions intended as release candidates or as development code as having an imaginary coefficient.<br /><br />[...]<br /><br />Ok so just because it has a nice analogy doesn't make it valid; how does this help in real-world software development?<br /><br />[...]<br /><br />Imagine a case where software producer A (providing module A) is building release candidates and software consumer B is building module B that depends on module A:<br /><br />ModuleA:1.0.0_RC1<br />ModuleA:1.0.0_RC2<br />ModuleA:1.0.0_RC3<br />etc.<br /><br />ModuleB:import ModuleA:1.0.0<br /><br />In the current scheme there is no way to distinguish release candidates from final code and incremental patches, so when producer A builds his release candidate consumer B sees each release candidate as actually being more advanced than the final 1.0.0 release.<br /><br />This is clearly wrong.<br /><br />[...]<br /><br />[...] <span style="font-style: italic;">after</span> we have decided to make 0.9 the non-backwards-compatible internal release.<br /><br />Instead I propose that when producer A starts work on a set of changes to module A he increments the appropriate version element (depending on the scale of the changes he/she is making) early and adds an imaginary coefficient to mark it as in development.<br /><br />Therefore from the previous example we have<br /><br />ModuleA:1.0.0_3i (RC1)<br />ModuleA:1.0.0_2i (RC2)<br />ModuleA:1.0.0_1i (RC3)<br /><br />As an external consumer of module A we are then able to use [1.0.0,2.0.0) to mean all compatible increments. An internal consumer prior to release can say [1.0.0_2i,2.0.0)<br />to mean all releases of module A after RC2.
Importantly this will continue to work after release with no need to update existing imports.<br /><br />[...]<br /><br />We <span style="font-style: italic;">could</span> come up with a scheme whereby the major.minor.micro elements of the imaginary coefficient denote degrees of confidence as to how closely the code matches the proposed design - i.e. major = zero -> developer tested, minor = zero -> system tested, micro = zero -> beta tested etc.<br /><br />The notion of complex version numbers applies equally to the JSR 277 four-number version scheme, which to my thinking is a completely pointless addition to a spec which does nothing to address the actual problem and breaks lots of existing code that was previously working fine.<br /><br />I'd be very happy for someone to come along and state why imaginary version numbers are not needed, as in general I prefer to reuse existing tools where possible and ideally the simplest tool that does the job.<br /><br />So if nothing else this is a vote for OSGi versioning, but with a couple of notes on how we <span style="font-style: italic;">may</span> be able to improve it.<br /><br />Laters<br /><br />Update: 09/06/2008<br />Apologies, I linked to the second part of the Eclipse article on version updates by mistake; the correct link is <a href="">here</a>. David Savage A JSR Too Far? So there's a lot of conversation going on around both <a href="">the</a> <a href="">politics</a> and the <a href="">technological</a> <a href="">issues</a> surrounding JSR 277. Personally I think the spec is doomed if it doesn't start working much more closely with the OSGi community - and here's why.<br /><br />[...]
<br /><br />If JSR 277 makes it into the Java 7 release then it seems entirely plausible that the major vendors <span style="font-style:italic;">could</span> choose not to certify their products on Java 7 (especially if the JSR gets in the way of their existing investment in OSGi).<br /><br />Where would this leave Sun if the likes of WebSphere, WebLogic, Spring, etc, etc are not certified to run in an enterprise environment? Also, where does it leave Java?<br /><br />[...]<br /><br />Please guys, set egos aside and come to a sensible decision that allows us all to get on with our day jobs. David Savage
http://feeds.feedburner.com/ChronologicalThought
Every Java object inherits a set of base methods from java.lang.Object that every client can use: clone(), equals(), finalize(), getClass() (F), hashCode(), notify() (F), notifyAll() (F), toString(), and wait() (F). Each of these methods has a sensible default behavior that can be overridden in the subclasses (except for final methods, marked above with F). This article discusses overriding the equals() and hashCode() methods for data objects. So, what do the equals() and hashCode() methods do? The purpose of the equals() method is to determine whether the argument is equal to the current instance. This method is used in virtually all of the java.util collections classes, and many other low-level libraries (such as RMI (Remote Method Invocation), JDBC (Java Database Connectivity), etc.) implicitly depend on its correct behavior. The method should return true if the two objects can be considered equal and false otherwise. Of course, what data is considered equal is up to each individual class to define. Since computing an object's equality is a time-consuming task, Java also provides a quick way of determining if an object is equal or not, using hashCode(). This returns a small number based on the object's internal datastructure; if two objects have different hash codes, then they cannot be equal to each other. (Think of it like searching for two words in a dictionary; if they both begin with "A" then they may be equal; however, if one begins with "A" and the other begins with "B" then they cannot be equal.) The purpose of computing a hash code is that the hash should be quicker to calculate and compare than computing full object equality. Datastructures such as the HashMap implicitly use the hash code to avoid computing equality of objects where possible. One of the reasons why a HashMap looks up data faster than a List is because the list has to search the entire datastructure for a match, whereas the HashMap only searches those that have the same hash value. Importantly, it is an error for a class to have an equals() method without overriding the default hashCode() method.
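The danger can be demonstrated with a short example (the class names here are invented for illustration): a class that overrides equals() but keeps the default hashCode() silently breaks hash-based collections.

```java
import java.util.HashSet;
import java.util.Set;

// A deliberately broken class: equals() is overridden, hashCode() is not.
class BrokenPoint {
    final int x, y;

    BrokenPoint(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof BrokenPoint)) return false;
        BrokenPoint p = (BrokenPoint) other;
        return x == p.x && y == p.y;
    }
    // hashCode() deliberately NOT overridden: two "equal" points
    // almost certainly report different identity-based hashes.
}

public class MissingHashCodeDemo {
    public static void main(String[] args) {
        Set<BrokenPoint> set = new HashSet<>();
        set.add(new BrokenPoint(1, 2));

        // equals() says the two points are equal...
        System.out.println(new BrokenPoint(1, 2).equals(new BrokenPoint(1, 2))); // true
        // ...but contains() almost certainly reports false, because the
        // lookup compares hash values first and probes the wrong bucket.
        System.out.println(set.contains(new BrokenPoint(1, 2)));
    }
}
```

The second println is false in practice: HashSet compares hash values before it ever calls equals(), so the logically equal point is never found.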
In an inheritance hierarchy, only the top class needs to provide a hashCode() method. This is discussed further below. Implementing equals() The equals() method's signature must be: public boolean equals(Object) Note: Regardless of which class contains the equals() method, the signature must always declare an Object argument type. Since Java's libraries look for one with an Object argument, if the signature is not correct, then the java.lang.Object method will be called instead, leading to incorrect behavior. The equals() method's Javadoc declares that it must be: - Reflexive - An object must always be equal to itself; i.e., a.equals(a) - Symmetric - If two objects are equal, then they should be equal in both directions; i.e., if a.equals(b), then b.equals(a) - Transitive - If an object is equal to two others, then they must both be equal; i.e., if a.equals(b) and b.equals(c), then a.equals(c) - Non-null - An object can never be equal to null; i.e., a.equals(null) is always false Fortunately, it is fairly easy to write a method that has this behavior. The equals method should compare: - If the argument is this; if so, return true (reflexivity) - If the argument is null; if so, return false (non-null) - If the argument is of a different type; if so, return false (symmetry) - Non-static and non-transient fields are equal, or both null in the case of reference types (symmetry, transitivity) The reason why static fields are not compared is because they are the same for all class instances and thus clearly identical in an instance-based comparison. transient fields are not compared because the transient keyword's purpose is to prevent fields from being written during serialization; and these fields should therefore not play a part in equality testing. Otherwise, if an object is serialized and then de-serialized, then it will not be equal to itself.
Let's consider a simple point in 2D space to see how a simple equals() method looks; the comparison is shown graphically in Figure 1: public class Point { private static double version = 1.0; private transient double distance; private int x, y; public boolean equals(Object other) { if (other == this) return true; if (other == null) return false; if (getClass() != other.getClass()) return false; Point point = (Point)other; return (x == point.x && y == point.y); } } The code above shows a simple example of an equals() method. Note that only the two non-static and non-transient fields (x and y) are being compared; the others (distance and version) are not relevant to instance-based comparison. Note: In this case, the getClass() method is being used to determine whether the class is the correct type; this is the more correct choice than instanceof, as discussed below. To compare whether two instances are of the same type, their getClass() method is invoked. Each instance has a single Class instance associated with it; instances of the same class share exactly the same Class instance. In the VM, there is only a single Class instance for each class name (visible within a single ClassLoader, anyway). As a result, these Class instances can be compared with the identity test == instead of having to use an equals() on the Class instances. Comparing reference types So what happens when a reference type is declared within an object? The answer is that their equals() methods are used to compare the references directly; the only extra work involved is determining if the references are both null.
The logic is: - If the reference is null in this, then it must be null in the other - If the reference is non-null in this, then it must be equal (by equals()) to the corresponding field in the other Here's an example of a Person class that performs equality checking on two reference types, name and birth: public class Person { private String name; private Date birth; public boolean equals(Object other) { if (other == this) return true; if (other == null) return false; if (getClass() != other.getClass()) return false; Person person = (Person)other; return ( (name == person.name || (name != null && name.equals(person.name))) && (birth == person.birth || (birth != null && birth.equals(person.birth))) ); } } In this case, the identity check (name == person.name) captures whether both references are null. It also removes the equality test when the two fields are identical, thus not requiring a recursive call to equals() in the other instance. Note: It is possible to ignore the test for null in the second part if you know the value can never be null; i.e., it is assigned non-null in the constructor. However, it is better to be safe than have numerous NullPointerExceptions coming out of datastructures because your equals() method does not handle null data correctly. A similar implementation holds for other reference types, such as arrays. However, since arrays do not override the default identity-based equals(), you either have to write the for loop manually to test for element equality or use the Arrays class to perform the equality check (see Resources for link to the Javadoc). Implementing hashCode() The hash code allows collections such as HashMap to organize and access data quicker than searching through the entire datastructure. It does this by partitioning the data into different buckets, then searching through the single bucket corresponding to the hash code required.
As a result, if an object's hash code differs, it won't be searched; just like you wouldn't expect the word "Banana" to appear in a dictionary's "A" section. Figure 2 shows a simplified example of how a hash table is structured, using initial letters as the hash: The hash code is just an int calculated from the instance data. Importantly, if two instances are considered equal by equals(), then they must have the same hash code. As a consequence, the hash code can only be computed on those fields compared in the equals() method. It does not have to use all of the equality fields: public class Point { private int x, y; public boolean equals(Object other) { ... Point point = (Point)other; return (x == point.x && y == point.y); } public int hashCode() { return x; } } The code above is a correct (if not optimal) implementation of hashCode(), since it only relies on fields compared by the equals() method. In this case, all points that have the same x value have the same hash code and are deposited into the same bucket for hash comparisons. Thus, when searching for a point in a Map, only the Points with the same x value will be compared. Of course, it is desirable for hash codes to be distributed, which means that where possible, two different objects should have different hash codes. Additionally, if two objects are "close" to each other, they should have a very different hash code. We can improve on our hash function by using other variables in the hash computation: public class Point { private int x, y; public boolean equals(Object other) { ... } public int hashCode() { return x + y; } } Now, our hash function only returns the same value for points that lie on the same diagonal line (where x + y is constant). However, this is still not very well distributed, and such diagonal clusters of points may well occur in some applications.
We can improve the function by using some of the following options: - Use multiplication instead of addition - Use bitwise xor to combine values (the ^ operator) instead of addition - Multiply integral values with a prime number to distribute the values We can now add a hashCode() method to the Point class: public class Point { private int x, y; public boolean equals(Object other) { ... } public int hashCode() { return x*31 ^ y*37; } } Of course, the hash code's computation should be quick, which may sometimes argue against the use of multiplication. Using hashCode() with reference types If the class uses a reference type, then the hashCode() method can delegate the work to the enclosed reference type. However, care must be taken when the reference may be null, because as with equals(), it would be undesirable to generate NullPointerExceptions. In the case of the reference type being null, a fixed number can be returned instead. (The number should be non-zero and non-negative, since these values may have special meanings in some hash map implementations.) Although the method can be implemented with multiple if blocks, it is more compact using the ternary if operator: public class Person { private String name; private Date birth; public int hashCode() { return (name == null ? 17 : name.hashCode()) ^ (birth == null ? 31 : birth.hashCode()); } } Default behavior of equals() and hashCode() The default behavior for these two methods gives answers that work only for simple cases. The equals() method returns true only when it is being compared to itself (i.e., the identity check). The hashCode() method returns an int based on a unique instance hash (such as the object's location in memory), and it may be calculated differently for different VM implementations. Because the default hashCode() gives an answer based on the instance's identity, it is incorrect to implement equals() without implementing a corresponding hashCode() method.
Otherwise, two objects may possibly be equal, but have different hash codes—a violation of the equality contract, which will manifest itself in a number of odd ways. It is much better, if a suitable hash code cannot be calculated, to return a constant value (such as 7) rather than use the default implementation. Of course, using a constant value will degrade the performance of a Map into that of a List, so even a simple implementation that returns the value of an integer field will prove beneficial. Advanced strategies The computation of both equals() and hashCode() must be as quick as possible, since they will be called repeatedly on objects. In a List with 1,000 elements, it is likely that around 10,000 comparisons will be done in sorting the list. For each of these comparisons, equals() is called once. Thus, optimizing the comparison speed is a highly desirable goal. Although dynamic comparison utilities (such as the Apache EqualsBuilder) make equals() methods easier to write, they are based on dynamic field comparisons using introspection, so their execution is much slower than a directly implemented method. Since some parts of the equals method run faster than others, it makes sense to order the statements so the quicker ones are tested before the slower ones. For example, comparing the values of a primitive is much faster than invoking the equals() method, so primitives are compared first. Similarly, if the object is of the wrong type, there is no point in comparing any of the fields, so the type check should come first.
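Pulling these guidelines together, a hypothetical Employee class (the class name and fields are invented for illustration) might order its comparisons from cheapest to most expensive:

```java
import java.util.Date;

public class Employee {
    private int id;        // primitive: cheapest to compare, so tested first
    private String name;   // reference types are compared last
    private Date hired;

    public Employee(int id, String name, Date hired) {
        this.id = id;
        this.name = name;
        this.hired = hired;
    }

    @Override
    public boolean equals(Object other) {
        if (other == this) return true;                       // cheap identity test first
        if (other == null || getClass() != other.getClass()) return false;
        Employee e = (Employee) other;
        if (id != e.id) return false;                         // primitive before references
        if (name != e.name && (name == null || !name.equals(e.name))) return false;
        return hired == e.hired || (hired != null && hired.equals(e.hired));
    }

    @Override
    public int hashCode() {
        // Uses only fields that equals() also compares; the prime and
        // xor help spread the values across buckets.
        return id * 31 ^ (name == null ? 17 : name.hashCode());
    }
}
```

Note that hashCode() deliberately omits the hired field; that is allowed, because the hash may be computed from any subset of the fields that equals() compares.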
http://www.javaworld.com/article/2072762/java-app-dev/object-equality.html
Weird zip files
Yo yo yo, I have this weird problem where I downloaded these zip files (I think) from the internet using shareware, and instead of having an orange zip icon they are friggin yellow. When I click on them nothing happens, but my computer gets slow and wh

Weird renaming of files?
Hello again... I have 2 hard drives on my computer. The second is a big one at 80 gigs. Desktop and Windows are on the C drive, and the big drive is D. I have shortcuts on the desktop to folders on the D drive for quick access. One of them is where I p

Weird .eml files popping up everywhere
3-computer network: 1 Win 2K Advanced Server file server and 2 Win 2K Pro. Everything has been updated with the latest service packs and critical updates. Problem (?): I have all of these files with the .eml (email) extension showing u

OK, I woke up this morning and my PC had been restarted. I looked at the log files and saw some things that I don't know what they are. I'm more curious than anything. If someone can tell me what is going on here, that would be great. Thanks. auth.log View Replies

Ubuntu weird files

Weird files appearing
Oh boy, Scooby Doo, we got some work to do. My husband's PC is just plain screwy. I don't even know where to begin, so I'm just going to tell you what it started out as (before it got possessed). New motherboard, 2 HDs; the primary (C drive) has XP Pro as the OS,

It's kinda weird when you open your home dir one day and find a file named omginstlog.txt with no recollection of making it. Content as follows: "9-1-2007 19:26:15 - Administrator". Any idea where the hell that could have come from? Anyone else get anything we

Weird - Sasser-like behavior, no files...
Okay, so I'm working on someone's machine.
It is, apparently, exhibiting Sasser-like behavior, but I just looked at it and saw NO avserve stuff, and the Symantec Sasser removal tool came up dry. Below is the HJT log. Any ideas?

Hi! I've just upgraded a site from static HTML to dynamic PHP with a custom CMS system. Just now I've noticed that I've got some weird files in my FTP root folder. Two are core.**** (*s = numbers) files (each 36.9 MB!). One is called ".pureftpd-upload.4600388e.15.4402.2f61c688&qu

I'm experiencing a weird problem deleting some .csv files. Any .csv files named 1.csv, 2.csv, or 3.csv will not delete using the delete() method of the System.IO namespace. If I use any number greater than 3, it works. For example, 4.csv, 5.csv, etc. work fine. 1.xls, 2.xls are also OK. The error it is t
http://bighow.org/28117043-Wierd_files_on_server.html
I've never seen anyone use this in competitive programming (or anywhere really), but it might be useful: in C++ you can use the basic_string class template instead of vector for "simple" types [1]. It works just like vector but also allows you to use a few convenient member functions and operators, just like with strings, most notably operator+ and operator+=. See the following code:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    basic_string<int> a;
    cin >> n;
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        a += x;
    }
    a += a;
    a = a.substr(n/2, n);
    cout << (a + a).find({1, 2, 1}) << '\n';
}

[1] Although I'm not 100% sure, "simple" is any primitive type, std::pair of simple types, etc. Do not use this with vectors, strings, sets, maps and similar types. And for this reason please don't typedef vector as basic_string.
http://codeforces.com/blog/entry/62420
Important: Please read the Qt Code of Conduct

- How to read an external file?

Hi everyone. I wish to pass an external text file as the text of a TextArea. Is it possible, and if yes, how can I do it?

- sierdzio Moderators last edited by

Read the file on the C++ side and pass the text to QML. For example:

Thank you for your response and the link, but I am a newcomer to programming and I am not sure it is possible for me to do it in C++. I thought that it was possible to do it in QML.

- sierdzio Moderators last edited by

@khachkara said in How to read an external file?:

I thought that it was possible to do it in QML.

No, or at least I don't know of any built-in way. Don't be afraid of C++ though, it's really not that hard, and the Qt documentation specifies exactly how to do it.

- HenkKalkwater last edited by

@sierdzio said in How to read an external file?:

No, or at least I don't know of any built-in way.

It is possible by sending an XMLHttpRequest to a file URI, for example:

import QtQuick 2.15

Item {
    property string fileContents: ""

    Component.onCompleted: {
        var request = new XMLHttpRequest();
        request.onreadystatechange = function() {
            console.log("Ready state changed: %1".arg(request.readyState));
            if (request.readyState == XMLHttpRequest.DONE) {
                fileContents = request.responseText;
            }
        };
        request.open("GET", "file:/etc/os-release", true);
        request.send();
        console.log("Sending request");
    }

    Text {
        text: fileContents == "" ? "<Still loading>" : fileContents;
    }
}

I cannot find the documentation for this though; it's something I heard someone say on a forum or IRC once, and it apparently still works. Writing by sending a PUT request with data to a file URL is possible as well, but deprecated and potentially dangerous.

@HenkKalkwater thank you a lot. It works perfectly.
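sierdzio's C++-side example did not survive in this copy of the thread. The following is a reconstruction of that general approach, not the original snippet: read the file with QFile, then hand the text to QML, for instance through setContextProperty. The file path and property name are illustrative.

```cpp
// Sketch: read a file in C++ and expose its contents to QML.
// Assumes a standard QGuiApplication + QQmlApplicationEngine setup.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include <QFile>
#include <QTextStream>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QString fileContents;
    QFile file("/path/to/file.txt"); // illustrative path
    if (file.open(QIODevice::ReadOnly | QIODevice::Text)) {
        QTextStream in(&file);
        fileContents = in.readAll();
    }

    QQmlApplicationEngine engine;
    // QML can then bind to it: TextArea { text: fileContents }
    engine.rootContext()->setContextProperty("fileContents", fileContents);
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));

    return app.exec();
}
```

A context property is the simplest way to push a single value into QML; for anything more involved, a QObject with a Q_PROPERTY registered with the engine is the more idiomatic route.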
https://forum.qt.io/topic/125175/how-to-read-an-external-file
The ExecuteScript processor is intended as a scriptable "onTrigger" method, basically meaning that when the processor is scheduled to run, your script will be executed. As of 0.5.0, the available script engines are ECMAScript (Javascript), Jython, JRuby, Groovy, and Lua. For this blog, almost all examples will be in Groovy, but templates exist for other languages as well. To allow for the most flexibility, only a handful of objects are passed into the script as variables:

session: This is a reference to the ProcessSession assigned to the processor. The session allows you to perform operations on flow files such as create(), putAttribute(), and transfer(). We'll get to an example below.

context: This is a reference to the ProcessContext for the processor. It can be used to retrieve processor properties, relationships, and the StateManager (see the NiFi docs for the uses of StateManager, also new in 0.5.0).

log: This is a reference to the ProcessorLog for the processor. Use it to log messages to NiFi, such as log.info('Hello world!')

REL_SUCCESS: This is a reference to the "success" relationship defined for the processor. It could also be inherited by referencing the static member of the parent class (ExecuteScript), but some engines such as Lua do not allow referencing static members, so this is a convenience variable. It also saves having to use the fully-qualified name for the relationship.

REL_FAILURE: This is a reference to the "failure" relationship defined for the processor. As with REL_SUCCESS, it is provided as a convenience variable, since some engines such as Lua do not allow referencing static members, and it saves having to use the fully-qualified name for the relationship.

Usage: The script is not required to work with the session, context, flow files, or anything else.
In fact, a one-line Groovy script that simply logs that it is being run is:

log.info("Hello from Groovy!")

However, such scripts are probably not that interesting :) Most scripts will want to interact with the session and flow files in some way, either by adding attributes, replacing content, or even creating new flow files. You may have noticed that any incoming flow file is not passed into the script. This is because ExecuteScript can be used without any input, usually to generate flow files to pass into the remainder of the flow. To allow both cases, the ProcessSession is supplied and the script is responsible for handling any flow files. This can result in some boilerplate code, but the trade-off for flexibility and power is well worth it. If your script only wants to handle incoming flow files, then you can simply return if the session has no flow file available for processing. In Groovy:

def flowFile = session.get()
if (!flowFile) return
// Remainder of script

If you are acting on a flow file, there are two major things to remember:

1) Keep track of the latest version of the flow file reference. This means if you act on a flow file, such as adding an attribute, you should replace the old reference with the one returned by the session method. For example:

flowFile = session.putAttribute(flowFile, 'my-property', 'my-value')

2) The script must transfer the flow file(s) that are retrieved and/or created. Unless an error condition occurred, transfer like so:

session.transfer(flowFile, REL_SUCCESS)

If an error has occurred (i.e.
your script has caught an exception), you can route the flow file to failure:

session.transfer(flowFile, REL_FAILURE)

Putting this all together, here is an example script that updates the "filename" attribute (a core flow file attribute that exists on every flow file):

def flowFile = session.get()
if (!flowFile) return
flowFile = session.putAttribute(flowFile, 'filename', 'myfile')
session.transfer(flowFile, REL_SUCCESS)

I have created a standalone NiFi template (ExecuteScriptHelloWorldGroovy) that will generate a JSON file, then call the above script to update the filename attribute, then log that attribute. That's all for this introduction to ExecuteScript; check the NiFi docs for more information about configuring the ExecuteScript processor, and stay tuned for more blog posts about ExecuteScript and other cool NiFi things :) Cheers!

Hello Matt, could you please share all these samples on GitHub?

Because the code needs to be in a scripting processor, I didn't see the value in putting the snippets in GitHub by themselves. However, all the scripts are available as NiFi templates on my GitHub Gist:

Hi Matt, could you please provide me code for parsing a text file using the ExecuteScript processor, as I need to extract only a few fields instead of all fields in each and every record?

What format is your text file in? CSV? In any case, there is an example of selecting certain fields from a bar-delimited ( | ) text file in my other post:

Hi Matt - great stuff here... I have started to play in NiFi quite a bit... One thing I can't figure out yet is how to get some complex field-level validations going, such as finding an embedded delimiter or even control chars like a newline (\n)... Any thoughts / advice? Thanks... Bob

Sure! What did you have in mind? Are you trying to parse CSV or another delimited format looking for embedded characters that might also be delimiters?
If you can represent what you want as a regular expression, you probably don't need a scripting processor and could use the SplitText processor instead.

Matt - thanks for responding. I have been using SplitText, and some RegExp (but that's not for the faint of heart)... For delimited data, I think I now have a pretty good handle on it, but it's still tough with large data sets, as I need to account for the splits in HDFS (and whether a newline or delimiter is in a record that spans splits - that's a good challenge). I am now trying to parse and normalize some XML... then some COBOL-based EBCDIC files... Will keep you posted... Thanks again...

I have a case, not sure if you can help me with it. I have 20 SFTP sources, and I should get 5 files from each and merge each 5 into 1 file. I ran this process using ListSFTP, FetchSFTP, MergeContent, UpdateAttribute (to change the filename attribute) and PutFile (to store the output file), but all of this is connecting to one server. The questions: 1) how can I make it work each time for a new server? 2) how can we make ListSFTP pull old files as well? (ListSFTP will only pull files that were modified after its last run, but sometimes we get old files a bit late; is there a way to change ListSFTP so it doesn't reject these files, maybe by checking file names instead of the modified date?)

Hi Matt, I am using Lua with ExecuteScript, but I must get the flow file from upstream using session:get(), then process the flow file, and in the end write it downstream. Can you help me and give me a simple example? Thank you!

What is the right way to assign a Groovy variable value to a flow file attribute?
Here is a trivial code snippet:

def sql = Sql.newInstance("jdbc:oracle:thin:@192.168.1.211:1522:mydb", "myuser", "mypassword", "oracle.jdbc.pool.OracleDataSource")
def updateStr = "update tblflowfiles set stage='GROOVY'"
def numberRowsUpdated = sql.executeUpdate(updateStr)
sql.close()

I am getting errors due to these two statements below (the errors hint at "No signature of method"), but when passing a direct single-quoted string it works:

flowFile = session.putAttribute(flowFile, 'DBSQLRESULT', "${numberRowsUpdated.toString()}")
flowFile = session.putAttribute(flowFile, 'DBSQLRESULTSTMNT', "${updateStr.toString()}")
flowFile = session.putAttribute(flowFile, 'DBSQLRESULT', "${numberRowsUpdated}".toString())

Thanks Matt

I have written a Groovy script which uploads large files into Oracle's CLOB column. The stuff works only if the Groovy script is pasted into the body of the ExecuteScript processor. If I try to use the same code as a standard external script, the processor fails. I am not including any directories when I try this as an external script. Do I need to introduce something more in the Groovy script to make it run as a standard script?

It shouldn't; the classloader gets set up regardless of whether it is a script body or file that is provided, and if a file is provided it just reads the whole thing in as a String as if it were the Script Body parameter. What error(s) do you get when trying to run as an external file? Do you have a cluster of NiFi instances? If so, is that script available to all of them?

Hi Matt, could you please guide me with an example for parsing a JSON file using the ExecuteScript processor, then using KafkaProducer to send the data to a topic? It would be good if you use a Python script. Thanks.

Hi Matt, I want to connect to a MySQL database and fetch data from tables. Please suggest how, as I can write the code in Java.

Hi Matt, could you please provide one example to fetch data from MySQL, a simple query like: select * from employee
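Tying the post's patterns together, here is a sketch of an ExecuteScript body that combines session.get(), the attribute-update idiom, and the failure routing described above. It uses only the variables the post documents (session, log, REL_SUCCESS, REL_FAILURE); the attribute name 'error.message' is my choice for illustration, not a NiFi convention.

```groovy
// Sketch: get a flow file, try to update it, and route to failure on error
def flowFile = session.get()
if (!flowFile) return

try {
    // Remember to keep the latest reference returned by the session
    flowFile = session.putAttribute(flowFile, 'filename', 'myfile')
    session.transfer(flowFile, REL_SUCCESS)
} catch (Exception e) {
    log.error('Failed to update flow file', e)
    // 'error.message' is an illustrative attribute name
    flowFile = session.putAttribute(flowFile, 'error.message', e.message ?: 'unknown')
    session.transfer(flowFile, REL_FAILURE)
}
```

This is the shape most ExecuteScript bodies end up with: a guard for the no-flow-file case, a happy path that transfers to success, and a catch block that records the error and routes to failure.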
http://funnifi.blogspot.com/2016/02/executescript-processor-hello-world.html