Hi all!
I am having trouble accessing dynamic information. I have used the malloc command to allocate the necessary memory for my struct. However, when I leave that function and enter a display function it says there is a problem accessing the information in the struct.
I just had a project with pointers and would like to continue practicing with them to hopefully get a handle on them. :) I'm guessing there is a scope issue going on but I'm not sure how to bypass this issue. The following is the code I have written so far.
Inventory.h
/* Author: Dmytri Eck
   Date: October 29, 2011 */

#ifndef _Inventory
#define _Inventory

/* Error message */
#define ERROR_INVALID_SEARCH_OPTION "Invalid search option.\n"
#define ERROR_NO_BOOK_FOUND "No book fuond matching the search criterion.\n"

#define MAXIMUM_TITLE 20
#define MAXIMUM_AUTHOR 20
#define MAXIMUM_BOOKS 10

/* Structure for book information */
typedef struct Information {
    char subject[MAXIMUM_TITLE];
    char author[MAXIMUM_AUTHOR];
    int edition;
    int year;
    float price;
} Book;

static Book *Information;

/* Structure for book search result */
typedef struct Search {
    Book booksFound[MAXIMUM_BOOKS];
    int searchMatches;
} BookSearchResult;

/* Interface functions for this application

   Function to read the book information
   It returns the structure of type book */
Book getBookInformation(int inventory);

/* Function to earch books in the inventory
   It requires an array of books (inventory) which is to be searched and
   the size of this array along with the search criterion.
   Following are the valid options and corresponding search methods:
   option = 0; Search for books published in the year value
   option = 1; Search for books with price less than or equal to value
   It returns the search result. */
BookSearchResult searchBook(Book *, int size, int option, float value);

/* Function to display information of all the books in the array
   It has two arguments, an array of type book and size of this array. */
void displayResult(int);

#endif
Main.c
#include <stdio.h>
#include <stdlib.h>
#include "Inventory.h"

int main(void)
{
    //Variable Declaration
    int inventory = 0;  //Number of books in database
    int search;         //Search selection
    int year;           //Search option (line 45)
    float price;        //Upper price limit

    //Debugging purposes ONLY
    int counter = 0;

    printf("How many books are in the database?\n> ");
    scanf("%d", &inventory);

    getBookInformation(inventory);  //Get book information
    displayResult(inventory);
}
Inventory.c
#include <stdio.h>
#include <stdlib.h>
#include "Inventory.h"

Book getBookInformation(int book_number)
{
    int counter = 0;  //Counts the books in the database
    Book *Information = NULL;

    //Allcoate memory and Validate for Information
    Information = (Book *) malloc(book_number * sizeof(Information));

    //Validating inventory value
    if (book_number > MAXIMUM_BOOKS) {
        fprintf(stderr, "You have exceeded inventory limit (10).\n");
        exit(0);
    }
    else if (book_number <= 0) {
        fprintf(stderr, "ERROR: Invalid number of books.");
        exit(0);
    }

    if (Information == NULL) {
        fprintf(stderr, "Not enough memory available. Program terminating.\n");
        exit(0);
    }
    else
        for (counter = 0; counter < book_number; counter++) {
            printf("The following information is required to place the book in the database.\n");
            printf("Book Subject, Author, Year, Edition, and Price\n");

            //Book information prompt
            printf("1. What is the title of the book?\n> ");
            scanf("%s", &Information[counter].subject);

            printf("\n\n2. Who is the author?\n> ");
            scanf("%s", &Information[counter].author);

            printf("\n\n3. What year was the book published?\n> ");
            scanf("%d", &Information[counter].year);

            printf("\n\n4. What Edition is the book?\n> ");
            scanf("%d", &Information[counter].edition);

            printf("\n\n5. How much does the book cost?\n> ");
            scanf("%f", &Information[counter].price);
        }
}

void displayResult(int totalMatch)
{
    int counter = 0;

    printf("This are the books that are in the database.\n");
    for (counter = 0; counter < totalMatch; counter++) {
        printf("> %s\n", (*Information).subject);
        printf("> %s\n", (*Information).author);
        printf("> %d\n", (*Information).edition);
        printf("> %d\n", (*Information).year);
        printf("> %f\n", (*Information).price);
    }
}
Might there be a problem with the location that I have placed my pointer? When I place the pointer declaration in the header file I get an error when I try to change it in inventory.c
Any help would be greatly appreciated. Thanks!
Python Study Guide: Your New Home: Python IDLE
Now that you've downloaded and installed Python, it's time to continue your studies. You can program in Python using any editor or development environment you like, but a quite capable IDE comes with Python: IDLE. Here, you'll find out what IDLE does and how you can integrate it into your development workflow.
On Windows and on the Mac, the installer for Python automatically installs IDLE. To launch it on Windows, tap the Windows key and type idle. On the Mac, use Cmd+Space and type idle.
With Linux, you may or may not have to install it separately, depending on how you install it and the package source. Once you've got it, just type idle to run it.
Figure 1: The IDLE opening screen
The Interactive Interpreter
As you can see, this is not an editor where you begin typing your application. This is the Python Interactive Interpreter, a shell where you can enter commands and immediately see the results.
For example, type this line and press Enter.
>>> a = 42
Now, type this and press Enter.
>>> print(a)
The value you assigned to a is displayed. This interactive Python shell is a quick and easy way to try out new code or to hammer out an unfamiliar syntax, whenever you need to.
You'll often see example code shown, as the previous two lines, with the >>> preceding. This suggests that you type in these examples with the Interactive Interpreter.
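For instance, the shell is handy for checking an expression before it goes into a program. The lines below are a hypothetical session of that kind (each would be typed at the >>> prompt; the prompt markers are omitted here):

```python
# A few throwaway experiments of the kind the shell is good for.
name = "Alex"
greeting = "Good day, " + name + "!"   # string concatenation
print(greeting)                        # Good day, Alex!

squares = [n * n for n in range(5)]    # squares of 0 through 4
print(squares)                         # [0, 1, 4, 9, 16]
```

Nothing here is saved anywhere; when the expression behaves the way you expect, you move it into a real program file.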
The Editor
To open the editor, open the File menu and choose New File. A very simple looking editor appears.
Figure 2: The editor's window
Don't let the simple window fool you, though. The editor provides a host of features to make your development life easier. Among them are:
- Multi-windows text editor with search/replace within and across files
- Syntax highlighting
- Autocompletion and autoindent
- Smart-indentation
- Multiple undo
- Class browser and path browser
- Integrated debugger
- Persistent breakpoints, stepping
- Stack viewing and local/global namespace viewing
The Process
We haven't yet talked about Python's commands or syntax, but go ahead and type the following program into the editor window.
theirName = raw_input('Kind sir, what may I call you?\n')
print 'Good day, %s. Good day indeed!' % theirName
Now, open the File menu and choose Save As…. Save the file to your hard drive and name it greeting.py.
Figure 3: The greeting.py code, colored to assist you
As you can see, the editor automatically colored different parts of your program in different ways. This is syntax highlighting and it makes reading code much easier—especially when there's a lot of it.
To run your program, open the Run menu and choose Run Module (note the shortcut key so you can use that later, rather than navigating the menus each time). A new window will open where the application runs. Oblige the good man and type your name, and press Enter.
Figure 4: Entering your name and getting a response
Works like a charm. Go back to the editor and add this line after the others.
print('Rare weather today, aye?')
Now, run the program again. You may be prompted to save the modified program.
Figure 5: Notice the name change and the new response
The updated program runs in the same shell window, but it's easy to distinguish one run from the next.
Conclusion
You've discovered the IDLE editor, seen some of its features, and walked through the simple procedure for entering, running, and modifying programs. Congratulations—you're now ready to dive into the Python language!
At 15:26 04.12.2002 +0000, Nic Ferrier wrote:
Brian Jones <address@hidden> writes:
> Norbert Frese <address@hidden> writes:
>
> > i think it should be a 'gnu.javax' library, because the 'java.net'
> > library looks like it was designed for ip-sockets only.
> > please tell me, which package-name you would like - that i can start
> > converting the sources.
> > what do you think of the package name
> >
> > 'gnu.javax.net.afunix'
> > or
> > 'gnu.javax.net.uxlocal'?
>
> I think Nic Ferrier maintains a list of gnu.* packages who is using
> them on the FSF java web page, maybe it fits in the gnu.inet
> namespace. None of the projects seem to be interested in using
> gnu.javax that I see listed there.

That's right Brian. I think this should be the package name: gnu.net.local

> > besides it would be nice to allow classpath-independent building of
> > the library - for using it with other vm's. i don't know about the
> > legal issues here - for instance with the name 'gnu' - would it be
> > possible to publish such a classpath-independent version via
> > sourceforge?
>
> Non-FSF projects can be hosted on Savannah. I wouldn't recommend
> Sourceforge.

Can you also check out the ClasspathX project's version of the socket
provider library? It's supposed to be used to provide new and interesting
socket implementations. It's in the CVS at:

Nic
hi! ok. i will use 'gnu.net.local'. but i was not able to find the socketprovider library in the classpathx cvs (using web-cvs interface). did it move to another place?
norbert
This tutorial provides several mongodump and mongorestore command examples that you can use to back up and restore a MongoDB database using both of these utilities.
I. Backup Mongo Database
1. Backup by Shutting down Mongod Instance
This is equivalent to the cold backup you would typically take in other database systems. You should shut down the mongod instance before taking a backup using this method. In this example, you are not really connecting to the mongod instance to take this backup.

If you don't want to shut down mongod to take a backup, see the examples in the later sections of this tutorial.
First stop the mongod instance:
service mongod stop
Go to the backup directory (or any directory where you want to store the mongodb backup), and execute the following mongodump command. The –dbpath indicates the location of the mongodb database files.
cd /backup mongodump --dbpath /var/lib/mongo/
Finally, start the mongod instance:
service mongod start
The above mongodump command will create a dump sub-directory under the current directory. As you see below, it has taken a backup of both the mongodevdb and mongoproddb databases.
# ls -l dump drwxr-xr-x. 2 root root 4096 Sep 7 09:59 admin drwxr-xr-x. 2 root root 4096 Sep 7 09:59 mongodevdb drwxr-xr-x. 2 root root 4096 Sep 7 09:59 mongoproddb
If you look inside one of the database backup directories, you'll see that it contains all the objects from the database that was backed up (for example, the employee and department collections).
# ls -l dump/mongodevdb total 40172 -rw-r--r--. 1 root root 4848 Sep 7 09:59 employee.bson -rw-r--r--. 1 root root 106 Sep 7 09:59 employee.metadata.json -rw-r--r--. 1 root root 2840 Sep 7 09:59 department.bson -rw-r--r--. 1 root root 108 Sep 7 09:59 department.metadata.json
2. Backup without Shutting down Mongod Instance
The following example will connect to a running mongod instance, and take backup of a specific database.
First, make sure mongod is up and running.
# service mongod start
Next, go to the backup directory, and execute the mongodump command, and pass the database name, username and password parameters as shown below.
# cd /backup # mongodump --db mongodevdb --username mongodevdb --password YourSecretPwd connected to: 127.0.0.1 Tue Sep 7 11:47:06.868 DATABASE: mongodevdb to dump/mongodevdb Tue Sep 7 11:47:06.873 mongodevdb.system.indexes to dump/mongodevdb/system.indexes.bson Tue Sep 7 11:47:06.890 5 objects Tue Sep 7 11:47:06.890 mongodevdb.system.users to dump/mongodevdb/system.users.bson Tue Sep 7 11:47:06.931 1 objects Tue Sep 7 11:47:06.931 Metadata for mongodevdb.system.users to dump/mongodevdb/system.users.metadata.json Tue Sep 7 11:47:06.931 mongodevdb.employee to dump/mongodevdb/employee.bson Tue Sep 7 11:47:06.933 4 objects Tue Sep 7 11:47:06.933 Metadata for mongodevdb.employee to dump/mongodevdb/employee.metadata.json Tue Sep 7 11:47:06.937 71 objects ... ..
Under the /backup directory. i.e From where you executed the mongodump command, it will create a dump directory as shown below. The dump directory will have a sub-directory for the database that was just backed-up.
# ls -l dump/ drwxr-xr-x. 2 root root 4096 Sep 7 10:08 mongodevdb
If you do a ls on this dump/mongodevdb, you’ll see all the collections from this database that was backed-up by mongodump command.
# ls -l dump/mongodevdb total 48 -rw-r--r--. 1 root root 4871 Sep 7 11:47 employee.bson -rw-r--r--. 1 root root 106 Sep 7 11:47 employee.metadata.json -rw-r--r--. 1 root root 425 Sep 7 11:47 system.indexes.bson -rw-r--r--. 1 root root 94 Sep 7 11:47 system.users.bson -rw-r--r--. 1 root root 239 Sep 7 11:47 system.users.metadata.json ..
If the mongo instance has multiple databases (for example, mongodevdb and mongoproddb), execute the mongodump command a couple of times as shown below to back up both databases.
# mongodump --db mongodevdb --username mongodevdb --password YourSecretPwd # mongodump --db mongoproddb --username mongoproddb --password YourSecretProdPwd
If you don’t want to specify the password in the mongodump command line, you can also enter the password interactively.
# mongodump --db mongodevdb --username mongodevdb --password Enter password:
Please note that you’ll get the following error message if you don’t pass any parameters to the mongodump command.
# mongodump Fri Sep 7 09:54:27.639 ERROR: output of listDatabases isn't what we expected, no 'databases' field: { ok: 0.0, errmsg: "unauthorized" }
3. Backup a specific Collection
Instead of backing up all the collections in a particular database, you can also backup specific collections.
The following example connects to mongodevdb database and does a backup of only employee collection.
# cd /backup # mongodump --collection employee --db mongodevdb --username mongodevdb --password YourSecretPwd connected to: 127.0.0.1 Fri Sep 7 10:13:45.927 DATABASE: mongodevdb to dump/mongodevdb Fri Sep 7 10:13:45.927 mongodevdb.employee to dump/mongodevdb/employee.bson Fri Sep 7 10:13:45.928 4 objects Fri Sep 7 10:13:45.928 Metadata for mongodevdb.employee to dump/mongodevdb/employee.metadata.json
Also, if you are trying to execute mongodump when the mongoDB instance is not up and running, you’ll get the following error message.
# mongodump --db mongodevdb --username mongodevdb --password YourSecretPwd couldn't connect to [127.0.0.1] couldn't connect to server 127.0.0.1:27017
4. Backup to a specific Location
In all the above examples, mongodump created a dump directory under the current directory from where the command was executed.
Instead, if you want to backup mongoDB to a specific location, specify the –out parameter as shown below.
The following example takes a backup of employee collection and stores it under /dbbackup directory.
# mongodump --collection employee --db mongodevdb --username mongodevdb --password YourSecretPwd --out /dbbackup
In this case, under the /dbbackup directory, mongodump command will create a sub-directory for the database that it getting backed-up and all the collections will be backed-up under that sub-directory as shown below.
# ls -ltr /dbbackup/mongodevdb -rw-r--r--. 1 root root 4848 Sep 7 10:19 employee.bson -rw-r--r--. 1 root root 106 Sep 7 10:19 employee.metadata.json ..
5. Backup a Remote Mongodb Instance
In all the previous examples we executed the mongodump command from the same server where the mongo database instance was running.
However, you can also connect to a mongodb instance running on a different server, and take a backup of that.
In the following example, the mongodump command is executed on a server called “local-host”, but it connects to the mongodb instance running on 192.168.1.2 and takes the backup of that instance and stores it in the local-host.
[local-host]# mongodump --host 192.168.1.2 --port 37017 --db mongodevdb --username mongodevdb --password YourSecretPwd
II. Restore Mongo Database
Once you’ve taken the backup of a MongoDB database using mongodump, you can restore it using mongorestore command. In case of an disaster where you lost your mongoDB database, you can use this command to restore the database. Or, you can just use this command to restore the database on a different server for testing purpose.
1. Restore All Databases without Mongod Instance
If you’ve taken a backup without mongod instance, use this method to restore the same backup without running the mongod instance.
First, stop the mongod
service mongod stop
Next, go to the directory where the backup is located, and execute the restore command as shown below.
cd /backup mongorestore --dbpath /var/lib/mongo dump
Note: In the above command, the last parameter “dump” is the directory name where the backup are stored. In this example, since we did a “cd /backup”, before executing the mongorestore, and specified “dump” as the directory name, this will take the backup from /backup/dump directory, and restore it.
2. Restore a specific Database without Mongod Instance
If you’ve backedup several mongodb database and like to restore only a specify database (instead of all the database), you can specify the database that you like to restore using the –db argument as shown below. The following example will restore only the mongodevdb.
cd /backup mongorestore --dbpath /var/lib/mongo --db mongodevdb dump/mongodevdb
3. Drop the old Database before Restoring
In the above two examples, mongorestore will perform a merge if it sees that the database already exists. If you don't understand how the merge works, the above two restores will give you unexpected results. As you see below, it gives a warning message for every collection it tries to restore that is already present in the destination database.
# cd /backup # mongorestore --dbpath /var/lib/mongo --db mongodevdb dump/mongodevdb Tue Sep 7 11:27:32.454 [tools] dump/mongodevdb/employee.bson Tue Sep 7 11:27:32.454 [tools] going into namespace [mongodevdb.employee] Tue Sep 7 11:27:32.465 [tools] warning: Restoring to mongodevdb.employee without dropping. Restored data will be inserted without raising errors; check your server log 7184 objects found
If you want a clean restore, use the --drop option. If a collection that exists in the backup also exists in the destination database, the mongorestore command will now drop that collection and restore the one from the backup. In this example, as you see below, it drops the objects before restoring them.
# mongorestore --dbpath /var/lib/mongo --db mongodevdb --drop dump/mongodevdb Tue Sep 7 11:34:04.946 [tools] dump/mongodevdb/employee.bson Tue Sep 7 11:34:04.946 [tools] going into namespace [mongodevdb.employee] Tue Sep 7 11:34:04.946 [tools] dropping Tue Sep 7 11:34:04.946 [tools] CMD: drop mongodevdb.employee Tue Sep 7 11:34:05.022 [tools] build index mongodevdb.employee { _id: 1 } Tue Sep 7 11:34:05.028 [tools] build index done. scanned 0 total records. 0.006 secs 7184 objects found
4. Restore to a Remote Database.
In all the previous examples we executed the mongorestore command from the same server where the mongo database instance was running.
However, you can also restore a mongo backup to a mongodb instance running on a different server.
In the following example, the mongorestore command is executed on a server called “local-host”, but it restores the mongo database backup located on the local-host to the mongodb instance running on 192.168.1.2 server.
mongorestore --host 192.168.1.2 --port 37017 --db mongodevdb --username mongodevdb --password YourSecretPwd --drop /backup/dump
Comments
If I run MongoDB in replication mode, is it still recommended to take backup using mongodump?
Thanks. Very detailed and helpful.
These don't work very well when you have huge DBs. It's best to NOT use these tools and to start with snapshots instead.
Thank you. Very helpful indeed.
Am new to mongo db.
How do I restore a mongo backup to a different database?
For ex: in mysql, I can use
mysql -u user -p password new_database < old_database_backup.sql
if I just copy the db folder to a different name, will this automatically consider this as a new db (after a mongod service restart)?
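One common approach, sketched with the same mongorestore tool shown above and an invented target name: the --db flag names the database to restore into, so pointing mongorestore at the old database's dump directory restores that data under a new name.

```shell
# Hypothetical: restore the dump of "mongodevdb" into a new database
# named "mongodevdb_copy" on the same instance.
mongorestore --db mongodevdb_copy --drop dump/mongodevdb
```

Restoring under a different --db name is the usual way to clone a database; copying the raw data files to a new name is not a reliable substitute, since the database name is baked into the on-disk files.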
In my opinion the most important part of creating backups with mongodump is user permissions which are pain in a** to configure, at the moment i have 4 users none of them can make full dump of a database
The “4. Restore to a Remote Database.” section has invalid port number 3017, it should be 37017 | http://www.thegeekstuff.com/2013/09/mongodump-mongorestore/ | CC-MAIN-2015-22 | refinedweb | 1,963 | 50.43 |
createDOMDocument() Function
The first step to creating a cross-browser interface is to have a common way to create the DOM Document. The createDOMDocument() method will serve this purpose. The first step is to determine which browser is being used and then create the DOM Document in the appropriate way. However, IE throws a little wrinkle at us.
The DOM Document in IE is an ActiveX object, as discussed before. What this means for us is that even though a user has IE 5.0 on their computer, they may not necessarily have the latest version of MSXML. Of course, we want to use the best and most current version, but how do we know what the user's machine currently has installed?
Unfortunately, there is no pretty way to determine what version of MSXML the user has on their machine. The only way to determine if an ActiveX object can be created is to try to create it. If the ActiveX object is not available on the client machine, it will cause a JavaScript error. So in order to find out what version of MSXML the user has installed, we have to try to create each ActiveX object and look for one that doesn't cause an error using a try...catch block.
First, we'll define the array of possible ActiveX objects to use. This array is in the order of most recent version to least recent, which will allow us to cycle through the array in order to make sure we get the most current version available on the user's machine:
var ARR_ACTIVEX = ["MSXML4.DOMDocument", "MSXML3.DOMDocument", "MSXML2.DOMDocument", "MSXML.DOMDocument", "Microsoft.XmlDom"]
Next, we define a constant string that will be filled with the appropriate prefix when it is determined:
var STR_ACTIVEX = "";
Now, here comes the try...catch block:
//if this is IE, determine which string to use
if (isIE) {
    //define found flag
    var bFound = false;

    //iterate through strings to determine which one to use
    for (var i=0; i < ARR_ACTIVEX.length && !bFound; i++) {
        //set up try...catch block for trial and error
        //of strings
        try {
            //try to create the object, it will cause an
            //error if it doesn't work
            var objXML = new ActiveXObject(ARR_ACTIVEX[i]);

            //if it gets to this point, the string worked,
            //so save it
            STR_ACTIVEX = ARR_ACTIVEX[i];
            bFound = true;
        } catch (objException) {
        } //End: try
    } //End: for

    //if we didn't find the string, send an error
    if (!bFound)
        throw "MSXML not found on your computer.";
}
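The same trial-and-error idea can be exercised outside the browser. Here is a framework-free sketch of the pattern, using hypothetical factory functions in place of the ActiveX constructors (the names and candidates below are invented for illustration):

```javascript
// Try each candidate factory in order; keep the first one that
// doesn't throw, just as the MSXML detection loop does.
function firstWorking(factories) {
    for (var i = 0; i < factories.length; i++) {
        try {
            return factories[i]();
        } catch (objException) {
            // this candidate is unavailable; fall through to the next
        }
    }
    throw "No candidate available.";
}

// Hypothetical candidates: the first two fail, the third succeeds.
var version = firstWorking([
    function () { throw "not installed"; },
    function () { throw "not installed"; },
    function () { return "MSXML2.DOMDocument"; }
]);
// version now holds "MSXML2.DOMDocument"
```

The key design point is that the loop treats "throws an error" as "not available" and stops at the first success, which is why the candidate list must be ordered from most to least preferred.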
With the ActiveX string now determined, we can continue on to create the createDOMDocument() method with a simple browser test:
jsXML.createDOMDocument = function() {
    //variable for the created DOM Document
    var objDOM = null;

    //determine if this is a standards-compliant browser like Mozilla
    if (document.implementation && document.implementation.createDocument) {
        //create the DOM Document the standards way
        objDOM = document.implementation.createDocument("", "", null);
    } else if (isIE) {
        //create the DOM Document the IE way
        objDOM = new ActiveXObject(STR_ACTIVEX);
    }

    //return the object
    return objDOM;
}
The next step in this process is to determine the parameters for this method. In IE's interface, there are no parameters to the creation of the DOM Document. In Mozilla's interface, there are three parameters. The first two are the namespace and tag name for the root node (document element) of the DOM Document that is being created. The third is an object representing the document type that is being created. The third parameter has not yet been activated in Mozilla (according to their documentation), so only the first two are of interest.
If we were to create a DOM Document in Mozilla like this:
var objDOM = document.implementation.createDocument("", "myroot", null);
The resulting XML string would be:
<a0:myroot xmlns:a0="" />
That seems to be a pretty useful thing, to be able to initialize the DOM Document with a namespace and root tag name, so let's make two parameters for our createDOMDocument() method: the namespace and root tag name. The important thing to note is that if there is only a namespace and no root tag name specified, it has no effect. If, however, there is a root tag name specified and no namespace, it still works to produce an XML string.
So first, we add the parameters into the function definition:
jsXML.createDOMDocument = function(strNamespaceURI, strRootTagName) { ... }
For Mozilla, these two parameters can be passed through to the native creation method:
//create the DOM Document the standards way objDOM = document.implementation.createDocument(strNamespaceURI, strRootTagName, null);
In IE, we will have to do the work ourselves. We can check for the parameters and use IE's proprietary loadXML() method to simulate Mozilla's implementation:
//create the DOM Document the IE way
objDOM = new ActiveXObject(STR_ACTIVEX);

//if there is a root tag name, we need to preload the DOM
if (strRootTagName) {
    //If there is both a namespace and root tag name, then
    //create an artificial namespace reference and load the XML.
    if (strNamespaceURI) {
        objDOM.loadXML("<a0:" + strRootTagName + " xmlns:a0=\"" + strNamespaceURI + "\" />");
    } else {
        objDOM.loadXML("<" + strRootTagName + "/>");
    }
}
Created: June 13, 2002
Revised: June 13, 2002
URL: http://www.webreference.com/programming/javascript/domwrapper/2.html
In this video you will learn about creating modules and components in Angular. We always want to split our application into smaller parts, and in Angular we have two entities for this: modules and components. We will create a usersList module and component to structure our application and avoid putting too much stuff in our app.module and app.component.
In the previous video we learned how an Angular project loads, and we saw the main component of the application: app.component. Now let's create a component from scratch, so you can see how it all works together.
Let's say that we want to render a list of users. We can't just write the whole application inside the app component; this is why we create a new component for every scoped part of our application. And rendering a list of users sounds exactly like one.
Let's also create an additional folder for our component, because we already have too much stuff inside src/app.
Inside we need a template and a TypeScript file. We name them with .component by Angular naming convention, so it's clear that each is a component.
import { Component } from '@angular/core'; @Component({ selector: 'app-users-list', templateUrl: './usersList.component.html', }) export class UsersListComponent {}
As you can see, our component looks exactly like the AppComponent. We have here a selector, which we will use to render this component, and a path to the template file. One more interesting thing is that we named the selector with the prefix app. We do this to signal that this is not a library but a component of our project, because a library would normally have its own prefix.
Now let's just write some basic text inside html file.
Now our component is fully created, but we didn't use it anywhere. Just to remind you, the last file that was parsed by Angular was app.component.html; Angular doesn't know anything about our new component. So we need to call it inside app.component.html.
<app-users-list></app-users-list>
If we open a browser we are getting an error
'app-users-list' is not a known element
And you will see this error in the future a lot. It means that our component is not registered anywhere and Angular doesn't know how to load and render it.
To fix it we need to register our component in a module. And for now we have only app module where we can register our component. Let's do this.
To register a component inside a module, we need to import it and add it to the declarations section. As you can see, we already have AppComponent there.
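As a sketch, the registration step might look like this in app.module.ts (the file paths are assumed from a default Angular CLI layout and may differ in your project):

```typescript
import {NgModule} from '@angular/core'
import {BrowserModule} from '@angular/platform-browser'
import {AppComponent} from './app.component'
import {UsersListComponent} from './usersList/usersList.component'

@NgModule({
  // both the root component and our new component are declared here for now
  declarations: [AppComponent, UsersListComponent],
  imports: [BrowserModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```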
Now there is no error in the browser, and our component is rendered.
Now you know how to create components in Angular and bind them.
But there is one problem here. Just imagine that we have created hundreds of components in our application for usersList, products, authentication, and other features. We can of course just register them all inside app.module, but it will be a huge, unsupportable mess.
What we normally do is isolate a bunch of components related to one feature inside a module. Let's now create an additional UsersListModule which will be responsible for our usersList component, and maybe later other components related to the usersList feature.
For this, let's add a module inside the usersList folder.
import {CommonModule} from '@angular/common' import {NgModule} from '@angular/core' import {UsersListComponent} from './usersList.component' @NgModule({ declarations: [UsersListComponent], imports: [CommonModule], }) export class UsersListModule {}
As you can see, we named our module with .module.ts as well. Inside we have declarations and imports arrays. Now in declarations we can declare our UsersListComponent, so we don't need to do it in AppModule anymore. Inside imports we have only CommonModule; we add it to every module we create because it allows us to use standard Angular features inside the module.
Now let's jump to app.module.ts and remove the UsersListComponent declaration from there. We now want to use UsersListModule inside our AppModule. In order to do so, we need to import UsersListModule as a dependency of AppModule.
imports: [BrowserModule, AppRoutingModule, UsersListModule],
Now let's check if it works. As you can see, we get the same error: Angular doesn't know what app-users-list is. It happens because everything is modular. UsersListComponent is registered inside UsersListModule but is not allowed to be used outside of this module. This is actually good, because we get module isolation by default.
To allow usage of UsersListComponent outside the module when we import UsersListModule, we need to specify it in the exports array.
@NgModule({ declarations: [UsersListComponent], imports: [CommonModule], exports: [UsersListComponent], }) export class UsersListModule {}
Now, as you can see, it is working again.
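Putting it together, AppModule now depends only on the feature module. A sketch of its final shape (again with assumed file paths; AppRoutingModule is omitted here if your project doesn't have one):

```typescript
import {NgModule} from '@angular/core'
import {BrowserModule} from '@angular/platform-browser'
import {AppComponent} from './app.component'
import {UsersListModule} from './usersList/usersList.module'

@NgModule({
  // AppModule no longer declares UsersListComponent directly;
  // it gets it by importing UsersListModule, which exports it.
  declarations: [AppComponent],
  imports: [BrowserModule, UsersListModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```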
In this video you learned how to create modules and components in Angular. It looks scary and complex at the beginning, but it helps a lot with modular isolation and defining the dependencies of each module.
If Angular for beginners is too easy for you or you want more advanced stuff, check my 14-hour Angular course where we create a real project from beginning to end. The link is also in the description.
0
I've found some similar posts on opening different files with the same file stream but they are difficult to understand. I'm trying to figure out for a larger project why I can't use the same file stream as in this example and what an alternative solution may be. And yes I am aware that this example opens the same file twice.
#include <iostream> #include <fstream> #include <string> using namespace std; int main () { string line;ofstream thefile; thefile.open ("example.txt"); thefile <<"writing this to a file. \n"; thefile.close(); ifstream myfile ("example.txt"); if (myfile.is_open()) { while (! myfile.eof()) { getline (myfile, line); cout << line << endl; } //myfile.close(); } else cout << "unable to open file"; cout<<line; cin>>line;//used only for a makeshift pause here myfile.open ("example.txt"); if (myfile.is_open()) { while (! myfile.eof()) { getline (myfile, line); cout << line << endl; } myfile.close(); } else cout << "unable to open file"; return 0; }
I'm using microsoft visual c++ 2010 express
vista 64 bit operating system
this bit of code is portions taken from a C++ language tutorial by Juan Soulie and slightly modified for this example.
Any help that would help me get an idea of how this doesn't work and a solution that would help me open multiple and diffent files would be greatly appreciated. | https://www.daniweb.com/programming/software-development/threads/378496/opening-multiple-files-with-the-same-file-stream | CC-MAIN-2018-39 | refinedweb | 219 | 64.51 |
Agenda
See also: IRC log
<Roland> 06zakim, who is on the phone?01
<Roland> 06Zakim, who is on the phone?01
<Roland> 06Scribe: markPhillips
roland: A couple of updates to make
1. The namespace needs to be updated
philippe: Recommend using ns namespace so that dates do not need to be updated
<Roland> 06 ?
<plh>
<Roland>
<plh>
The undated namespace would be the final namespace (for use at the CR stage). In the interim we should use a dated namespace for each draft of the spec.
<plh>
<plh> ""
<plh> ""
<plh> ""
RESOLUTION: First working draft will use the year and month namespace (e.g. ""). The final recommendation will be undated ( "")
roland: How should we construct a test plan?
<plh>
plh: No best practices per se. For WSDL 2 an interchange format was used to capture the WSDL component model and testing was performed by comparing the interchange formats created by different implementations
For wire formats they defined a canonical representation of SOAP messages described by WSDL then compared to the wire format messages generated by different implementations
PhilAdams: Do we need to specify the API calls used to construct messages? If so we need to document them in the spec.
alewis: The spec implies the calls which are necessary. The only thing we have in common is the API.
PhilAdams: The test suite could act as the receiver (or producer) for messages and validate them
alewis: The advantage of the WSDL model is that a declarative and correct representation of messages can be produced and validated in isolation
roland: could we write up the different proposals for discussion
<scribe> ACTION: PhilAdams to write up a test proposal due next week [recorded in]
<trackbot-ng> Sorry, couldn't find user - PhilAdams
<plh> trackbot-ng, status
<scribe> ACTION: Amelia to write up a test proposal due next week [recorded in]
<trackbot-ng> Created ACTION-6 - Write up a test proposal due next week [on Amelia Lewis - due 2008-06-03].
<plh> ACTION: Phil to write up a test proposal due next week [recorded in]
<trackbot-ng> Created ACTION-7 - Write up a test proposal due next week [on Phil Adams - due 2008-06-03].
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) No ScribeNick specified. Guessing ScribeNick: markphillips Inferring Scribes: markphillips Default Present: alewis, Roland, Phil, Peter, Plh, Derek, +0196270aaaa, MarkPhillips, Bhakti Present: alewis Roland Phil Peter Plh Derek +0196270aaaa MarkPhillips Bhakti Agenda: Got date from IRC log name: 27 May 2008 Guessing minutes URL: People with action items: amelia phil philadams WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2008/05/27-soap-jms-minutes | CC-MAIN-2016-30 | refinedweb | 458 | 60.75 |
On Wed, 11 Oct 2006 12:20:05 +0100, Yan <inetuid@yahoo.co.uk> said: > Ian Jackson wrote: >> Furthermore, the SELinux patches I have seen in various >> applications have given me an extremely poor impression of the code >> quality[1]. This will probably extend to other areas of SELinux. >> >> I say, ditch SELinux. >> >> Ian. >> >> [1] Here's just one example, from src/archives.c in dpkg: >> >> #ifdef WITH_SELINUX >> /* >> * if selinux is enabled, restore the default security context >> */ if (selinux_enabled > 0) if(setfscreatecon(NULL) < 0) >> perror("Error restoring default security context:"); >> #endif /* WITH_SELINUX */ >> >> Error checking ? We don't need no steenking error checking, this >> is SECURITY software ! Quick, dump your brains and deploy it ! Assuming for an instant Ian may know what he is talking about, could an example be given about what the so called missing error checks are, by him or anyone else who knows what he is referring to? How would people code this differently? So far, I think the criticism reflect more of a lack of understanding of SELinux trhan anything else, but I would be happy if someone could show me the error of my ways. >; if there are things wrong in the system that dpkg can't set the initial file contexts for the packages being installed, it is reasonable to assume that you might have to relable your file system to recover from the error condition. manoj -- Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it. -- Perlis's Programming Proverb #58, SIGPLAN Notices, Sept. 1982 Manoj Srivastava <srivasta@debian.org> <> 1024D/BF24424C print 4966 F272 D093 B493 410B 924B 21BA DABB BF24 424C | https://lists.debian.org/debian-devel/2006/10/msg00480.html | CC-MAIN-2016-36 | refinedweb | 273 | 66.33 |
As you learned during last hour's introduction to object-oriented programming,
an object is a way of organizing a program so that it has everything it needs to
accomplish a task. Objects need two things to do their jobs: attributes and behavior.
Attributes are the information stored within an object. They can be variables
such as integers, characters, Boolean values, or even other objects. Behavior is
the groups of statements used to handle specific jobs within the object. Each of
these groups is called a method.
Up to this point, you have been working with the methods and variables of objects
without knowing it. Any time your statement had a period in it that wasn't a decimal
point or part of a string, chances are an object was involved. You'll see this during
this hour as the following topics are covered:
For the purposes of this hour's examples, you'll be looking at a class of objects
called Virus whose sole purpose in life is to reproduce in as many places
as possible--much like some of the people I went to college with. A Virus
has several different things it needs in order to do its work, and these will be
implemented as the behavior of the class. The information that's needed for the methods
will be stored as attributes.
Caution: The example in this hour will
not teach actual virus writing, though it might provide some insight into how virus
programs work as they wreak havoc on the file systems of the computer-loving world.
Sams.net had scheduled Learn Virus Programming in a Three-Day Weekend for
spring of this year, but the guide has been postponed because the author's hard drive
was unexpectedly erased on Michaelangelo's birthday.
The attributes of an object represent any variables that are needed in order for
the object to function. These variables could be simple data types such as integers,
characters, and floating-point numbers, or they could be arrays or objects of the
String or Graphics classes. An object's variables can be used throughout
its program in any of the methods the object includes. You create variables immediately
after the class statement that creates the class and before any methods.
One of the things that a Virus object needs is a way to indicate that
a file has already been infected. Some computer viruses change the field that stores
the time a file was last modified; for example, a virus might move the time from
13:41:20 to 13:41:61. Because no normal file would be saved on
the 61st second of a minute, the time is a sign that the file was infected. The Virus
object will use 86 as the seconds field of a file's modification time because
"86 it" is slang that means to throw something away--exactly the kind of
unpleasant antisocial connotation we're going for. The value will be stored in an
integer variable called newSeconds.
The following statements begin a class called Virus with an attribute
called newSeconds and two other attributes:
public class Virus {
public integer newSeconds = 86;
public String author = "Sam Snett";
integer maxFileSize = 30000;
All three variables are attributes for the class: newSeconds, maxFileSize,
and author.
The newSeconds variable has a starting value of 86, and the statement
that creates it has public in front of it. Making a variable public
makes it possible to modify the variable from another program that is using the Virus
object. If the other program attaches special significance to the number 92,
for instance, it can change newSeconds to that value. If the other program
creates a Virus object called influenza, it could set that object's
newSeconds variable with the following statement:
influenza.newSeconds = 92;
The author variable also is public, so it can be changed freely
from other programs. The other variable, maxFileSize, can only be used within
the Virus class itself.
When you make a variable in a class public, the class loses control over
how that variable is used by other programs. In many cases, this might not be a problem.
For example, the author variable can be changed to any name or pseudonym
that identifies the author of the virus, and the only restriction is aesthetic. The
name might eventually be used on court documents if you're prosecuted, so you don't
want to pick a dumb one. The State of Ohio v. LoveHandles doesn't have the same ring
to it as Ohio v. April Mayhem.
Restricting access to a variable keeps errors from occurring if the variable is
set incorrectly by another program. With the Virus class, if newSeconds
is set to a value of 60 or less, it won't be reliable as a way to tell that a file
is infected. Some files may be saved with that number of seconds regardless of the
virus, and they'll look infected to Virus. If the Virus class of
objects needs to guard against this problem, you need to do these two things:
A private protected variable can only be used in the same class as the
variable or any subclasses of that class. A private variable is restricted
even further--it can only be used in the same class. Unless you know that a variable
can be changed to anything without affecting how its class functions, you probably
should make the variable private or private protected.
The following statement makes newSeconds a private protected
variable:
private protected int newSeconds = 86;
If you want other programs to use the newSeconds variable in some way,
you'll have to create behavior that makes it possible. This task will be covered
later in the hour.
When you create an object, it has its own version of all the variables that are
part of the object's class. Each object created from the Virus class of
objects has its own version of the newSeconds, maxFileSize, and
author variables. If you modified one of these variables in an object, it
would not affect the same variable in another Virus object.
There are times when an attribute has more to do with an entire class of objects
than a specific object itself. For example, if you wanted to keep track of how many
Virus objects were being used in a program, it would not make sense to store
this value repeatedly in each Virus object. Instead, you can use a class
variable to store this kind of information. You can use this variable with any object
of a class, but only one copy of the variable exists for the whole class. The variables
you have been creating for objects thus far can be called object variables, because
they are tied to a specific object. Class variables refer to a class of objects as
a whole.
Both types of variables are created and used in the same way, except that static
is used in the statement that creates class variables. The following statement creates
a class variable for the Virus example:
static int virusCount = 0;
Changing the value of a class variable is no different than changing an object's
variables. If you have a Virus object called tuberculosis, you
could change the class variable virusCount with the following statement:
tuberculosis.virusCount++;
Because class variables apply to an entire class instead of a specific object,
you can use the name of the class instead:
Virus.virusCount++;
Both statements accomplish the same thing, but there's an advantage to using the
second one. It shows immediately that virusCount is a class variable instead
of an object's variable because you can't refer to object variables with the name
of a class. That's only possible with class variables.
Attributes are the way to keep track of information about a class of objects,
but they don't take any action. For a class to do the things it was created to do,
you must create behavior. Behavior describes all of the different sections of a class
that accomplish specific tasks. Each of these sections is called a method.
You have been using methods throughout your programs up to this point without
knowing it, including two in particular: println() in Java applications
and drawString() in applets. These methods display text on-screen. Like
variables, methods are used in connection with an object or a class. The name of
the object or class is followed by a period and the name of the method, as in screen.drawString()
or Integer.parseInt()..
You create methods with a statement that looks similar to the statement that begins
a class. Both can take arguments between parentheses after their names, and both
use { and } marks at the beginning and end. The difference is that
methods can send back a value after they are handled. The value can be one of the
simple types such as integers or Booleans, or it can be a class of objects. If a
method should not return any value, use the statement void.
The following is an example of a method the Virus class can use to infect
files:
boolean public infectFile(String filename) {
boolean success = false;
// file-infecting statements would be here
return success;
}
The infectFile() method is used to add a virus to a file. This method
takes a single argument, a string variable called filename, and this variable
represents the file that should be attacked. The actual code to infect a file is
omitted here due to the author's desire to stay on the good side of the U.S. Secret
Service. The only thing you need to know is that if the infection is a success, the
success variable is set to a value of true.
By looking at the statement that begins the method, you can see boolean
preceding the name of the method, infectFile. This statement signifies that
a boolean value will be sent back after the method is handled. The return
statement is what actually sends a value back. In this example, the value of success
is returned.
When a method returns a value, you can use the method as part of an assignment
statement. For example, if you created a Virus object called malaria,
you could use statements such as these:
if (malaria.infectFile(currentFile))
System.out.println(currentFile + " has been infected!");
else
System.out.println("Curses! Foiled again!");
Any method that returns a value can be used at any place a value or variable could
be used in a program.
Earlier in the hour, you switched the newSeconds variable to private
to prevent it from being set by other programs. However, because you're a virus writer
who cares about people, you still want to make it possible for newSeconds
to be used if it is used correctly. The way to do this is to create public
methods in the Virus class that use newSeconds. Because these methods
are public, they can be used by other programs. Because they're in the same
class as newSeconds, they can modify it.
Consider the following two methods:
int public getSeconds() {
return newSeconds;
}
void public setSeconds(int newValue) {
if (newValue > 60)
newSeconds = newValue;
}
The getSeconds() method is used to send back the current value of newSeconds.
The getSeconds() method is necessary because other programs can't even look
at newSeconds because it is private. The getSeconds()
method does not have any arguments, but it still must have parentheses after the
method name. Otherwise, when you were using getSeconds in a program, the
method would look no different than a variable.
The setSeconds() method takes one argument, an integer called newValue.
This integer contains the value that a program wants to change newSeconds
to. If newValue is 61 or greater, the change will be made. The setSeconds()
method has void preceding the method name, so it does not return any kind
of value.
As you have seen with the setSeconds() method, you can send arguments
to a method to affect what it does. Different methods in a class can have different
names, but methods can also have the same name if they have different arguments.
Two methods can have the same name if they have a different number of arguments,
or the specific arguments are of different variable types. For example, it might
be useful for the Virus class of objects to have two tauntUser()
methods. One could have no arguments at all and would deliver a generic taunt. The
other could specify the taunt as a string argument. The following statements could
implement these methods:
void tauntUser() {
System.out.println("The problem is not with your set, but with yourselves.");
}
void tauntUser(String taunt) {
System.out.println(taunt);
}
When you want to create an object in a program, use the new statement,
as in the following:
Virus typhoid = new Virus();
This statement creates a new Virus object called typhoid, and
it uses a special method in the Virus class called a constructor. Constructors
are methods that are used when an object is first being created. The purpose of a
constructor is to set up any variables and other things that need to be established.
The following are two constructor methods for the Virus class of objects:
public Virus() {
maxFileSize = 30000;
}
public Virus(String name, int size) {
author = name;
maxFileSize = size;
}
Like other methods, constructors can use the arguments they are sent as a way
to have more than one constructor in a class. In this example, the first constructor
would be used with a statement such as the following:
Virus mumps = new Virus();
The other constructor could be used only if a string and an integer were sent
as arguments, as in this statement:
Virus rubella = new Virus("April Mayhem", 60000);
If you only had the preceding two constructor methods, you could not use the new
statement with any other type or number of arguments within the parentheses.
Like class variables, class methods are a way to provide functionality associated
with an entire class instead of a specific object. Use a class method when the method
does nothing that affects an individual object of the class. One example that you
have used in a previous hour was the parseInt() method of the Integer
class. This method is used to convert a string to a variable of the type int,
as in the following:
int time = Integer.parseInt(timeText);
To make a method into a class method, use static in front of the method
name, as in the following:
static void showVirusCount() {
System.out.println("There are " + virusCount + " viruses.");
}
The virusCount class variable was used earlier to keep track of how many
Virus objects have been created by a program. The showVirusCount()
method is a class method that displays this total, and it should be called with a
statement such as the following:
Virus.showVirusCount();
When you create a variable or an object inside a method in one of your classes,
it is usable only inside that method. The reason for this is the concept of variable
scope. Scope is the section in which a variable exists in a program. If you go outside
of the part of the program defined by the scope, you can no longer use the variable.
The { and } statements in a program define the boundaries for
a variable. Any variable created within these marks cannot be used outside of them.
For example, consider the following statements:
if (numFiles < 1) {
String warning = "No files remaining.";
}
System.out.println(warning);
This example does not work correctly because the warning variable was
created inside the brackets of the if block statement. The variable does
not exist outside of the brackets, so the System.out.println() method cannot
use warning as an argument.
One of the areas that can lead to errors in a program is when a variable has a
different value than you expected it to have. In a large program written with many
programming languages, this area can be difficult to fix because any part of the
program might use the variable. Rules that enforce scope make programs easier to
debug because scope limits the area in which a variable can be used.
This concept applies to methods because a variable created inside a method cannot
be used in other methods. You can only use a variable in more than one method if
it was created as an object variable or class variable after the class statement
at the beginning of the program.
Because you can refer to variables and methods in other classes along with variables
and methods in your own class, it can easily become confusing. One way to make things
a little clearer is with the this statement. The this statement
is a way to refer in a program to the program's own object.
When you are using an object's methods or variables, you put the name of the object
in front of the method or variable name, separated by a period. Consider these examples:
Virus chickenpox = new Virus();
chickenpox.name = "LoveHandles";
chickenpox.setSeconds(75);
These statements create a new Virus object called chickenpox,
set the name variable of chickenpox, and then use the setSeconds()
method of chickenpox.
There are times in a program where you need to refer to the current object--in
other words, the object represented by the program itself. For example, inside the
Virus class, you might have a method that has its own variable called author:
void public checkAuthor() {
String author = null;
}
A variable called author exists within the scope of the checkAuthor()
method, but it isn't the same variable as an object variable called author.
If you wanted to refer to the current object's author variable, you have
to use the this statement, as in the following:
System.out.println(this.author);
By using this, you make it clear which variable or method you are referring
to. You can use this anywhere in a class that you would refer to an object
by name. If you wanted to send the current object as an argument in a method, for
example, you could use a statement such as the following:
verifyData(this);
In many cases, the this statement will not be needed to make it clear
that you're referring to an object's variables and methods. However, there's no detriment
to using this any time you want to be sure you're referring to the right
thing.
At the insistence of every attorney and management executive in the Macmillan
family of computer publishers, the workshop for this hour will not be the creation
of a working virus program. Instead, you'll create a simple Virus object
that can do only one thing: Count the number of Virus objects that a program
has created and report the total.
Load your word processor and create a new file called Virus.java. Enter
Listing 11.1 into the word processor and save the file when you're done.
1: public class Virus {
2: static int virusCount = 0;
3:
4: public Virus() {
5: virusCount++;
6: }
7:
8: static int getVirusCount() {
9: return virusCount;
10: }
11: }
Compile the file, and then return to your word processor. You need to create a short
program that will create Virus objects and ask the Virus class
to count them. Open up a new file and enter Listing 11.2. Save the file as VirusLook.java
when you're done.
1: class VirusLook {
2: public static void main(String arguments[]) {
3: Virus smash = new Virus();
4: Virus crash = new Virus();
5: Virus crumble = new Virus();
6: System.out.println("There are " + Virus.getVirusCount() + " 7: viruses.");
8: }
9: }
Compile the VirusLook.java file, and then run it with the java
interpreter by typing the following command:
java VirusLook
The output should be the following:
There are 3 viruses.
You now have completed two of the three hours devoted to object-oriented concepts
in this guide. You've learned how to create an object and give behavior and attributes
to the object and to its own class of objects. Thinking in terms of objects is one
of the tougher challenges of the Java programming language. Once you start to understand
it, however, you realize that the entire language makes use of objects and classes.
During the next hour, you'll learn how to give your objects parents and children.
The following questions will test whether you have the attributes and behavior
to understand object-oriented programming techniques.
If all this talk of viruses didn't make you sick, you can increase your knowledge
of this hour's topics with the following activity: | http://softlookup.com/tutorial/Java/ch11.asp | CC-MAIN-2018-13 | refinedweb | 3,442 | 60.14 |
I just discovered that the Flexget installation on my Synology NAS is broken, and reading previous threads on the issue I can sum up what the problem is and what I've tried. When running flexget --version I get:
synology> flexget --version
Traceback (most recent call last):
File "/opt/bin/flexget", line 7, in
from flexget import main
File "/opt/lib/python2.7/site-packages/flexget/__init__.py", line 11, in
from flexget import logger, plugin
File "/opt/lib/python2.7/site-packages/flexget/logger.py", line 3, in
from past.builtins import basestring
File "/opt/lib/python2.7/site-packages/past/__init__.py", line 88, in
from past.translation import install_hooks as autotranslate
File "/opt/lib/python2.7/site-packages/past/translation/__init__.py", line 41, in
from lib2to3.pgen2.parse import ParseError
ImportError: No module named lib2to3.pgen2.parse
So I commented out lines from pythons init.py files where lib2to3 was imported, since this seems to be causing the above error. This takes care of that error, and flexget --version works correctly. But when running flexget check (or execute) I instead get this error:
flexget check
2017-04-14 22:51 VERBOSE check Pre-checked 79 configuration lines
2017-04-14 22:51 INFO manager Database upgrade is required. Attempting now.
2017-04-14 22:51 ERROR schema Failed to upgrade database for plugin simple_persistence: value
Traceback (most recent call last):
File "/opt/lib/python2.7/site-packages/flexget/db_schema.py", line 150, in upgrade_wrapper
new_ver = upgrade_func(current_ver, session)
File "/opt/lib/python2.7/site-packages/flexget/utils/simple_persistence.py", line 42, in upgrade
for row in session.execute(select([table.c.id, table.c.plugin, table.c.key, table.c.value])):
File "/opt/lib/python2.7/site-packages/sqlalchemy/util/_collections.py", line 212, in __getattr__
raise AttributeError(key)
AttributeError: value
Traceback (most recent call last):
File "/opt/bin/flexget", line 11, in <module>
sys.exit(main())
File "/opt/lib/python2.7/site-packages/flexget/__init__.py", line 42, in main
manager.start()
File "/opt/lib/python2.7/site-packages/flexget/manager.py", line 326, in start
self.initialize()
File "/opt/lib/python2.7/site-packages/flexget/manager.py", line 214, in initialize
fire_event('manager.upgrade', self)
File "/opt/lib/python2.7/site-packages/flexget/event.py", line 106, in fire_event
result = event(*args, **kwargs)
File "/opt/lib/python2.7/site-packages/flexget/event.py", line 23, in __call__
return self.func(*args, **kwargs)
File "/opt/lib/python2.7/site-packages/flexget/db_schema.py", line 158, in upgrade_wrapper
manager.shutdown(finish_queue=False)
File "/opt/lib/python2.7/site-packages/flexget/manager.py", line 910, in shutdown
raise RuntimeError('Cannot shutdown manager that was never initialized.')
RuntimeError: Cannot shutdown manager that was never initialized.
I have currently installed Flexget 2.10.31 and Python 2.7.13. I tried installing python 3 via pip (I use opkg and pip for all installations according to the flexget installation guide for Synology), but Python 3 throws the same error, and opkg now installs Python 3.6 which I read in some other posts is not supported by Flexget which supports up to 3.5? I also tried downgrading to Flexget 1.2.521, as suggested in some other thread but still it won't work.
So any ideas of what I should try to get Flexget running again? I dont even know why it broke to begin with, as to my knowledge there should have been no updates downloaded for neither flexget or python on my NAS, but maybe that just slipped under my radar. | https://discuss.flexget.com/t/flexget-python-broken-on-synology/3363/1 | CC-MAIN-2018-13 | refinedweb | 596 | 60.72 |
Redundant code is code that changes at the same time for the same reason. Part of object-oriented programming is limiting change to a single place. If it changes, it should change in one place.
How do µObjects encourage being nonredundant? By focusing on doing one thing.
Let's look at some specific attributes of µObjects that drive being nonredundant.
µObjects have redundancy with intent
Redundancy is intent-based.
Redundancy isn't duplication.
If two things are doing the same thing for different reasons - that is not redundant. Even if the code is the same, it should be isolated, with naming that reflects the intent of why it exists.
One of the big degradations of code happens when two things use X, and one of them needs X to behave a little differently. That class starts to change to continue supporting both uses. Booleans get passed in as flow control. Switch statements get added to handle multiple cases.
That is wrong.
At this point it's clear that the modifications do not happen for the same reason. Whatever is changing should have its own version of an object doing this behavior, because it changes for different reasons. The intent of the code is different; it's not redundant, and it should be separated.
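A minimal sketch of that degradation and the separation by intent. The classes, names, and the uppercase-for-display rule are all invented for illustration:

```csharp
// Degraded: one class serving two intents via a flow-control boolean - a smell.
public class FileName
{
    private readonly string _name;
    private readonly bool _forDisplay;

    public FileName(string name, bool forDisplay)
    {
        _name = name;
        _forDisplay = forDisplay;
    }

    // Every new intent forces another branch in here.
    public string Value() =>
        _forDisplay ? _name.ToUpperInvariant() : _name;
}

// Separated by intent: each object changes for its own reason.
public class DisplayFileName
{
    private readonly string _name;

    public DisplayFileName(string name) => _name = name;

    public string Value() => _name.ToUpperInvariant();
}

public class StoredFileName
{
    private readonly string _name;

    public StoredFileName(string name) => _name = name;

    public string Value() => _name;
}
```

The two small classes look alike, but they are not redundant: they exist for different reasons and will change for different reasons.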
µObjects identify identical change
With µObjects it's easy to see when objects are changing for the same reason. You need to change one, and you'll find near-identical ones changing as well. There's not a lot going on in each, so there's not a lot of difficulty in combining µObjects into an object that encapsulates the redundant behavior.
With µObjects, most of the time, if you need to change an object, you're writing a new one. It's hard to change something when it's doing one thing. If you change what it does... you probably need a new class for that new behavior.
µObjects don't repeat themselves
µObjects strongly discourage violating the DRY principle. While a DRY violation is not itself redundant code, it can lead to redundant code.
µObjects do one thing. If you have two µObjects that do the same thing, you are by definition redundant with those two classes.
Take a class whose only job is joining strings:
```csharp
public class JoinText
{
    private readonly string _separator;
    private readonly string[] _args;

    public JoinText(string separator, string[] args)
    {
        _separator = separator;
        _args = args;
    }

    public string Value() => string.Join(_separator, _args);
}
```
and we need to change it... What's there to change? There's nothing to change. With an object doing just one thing, not one job just one thing, you get objects that won't need to change much. Changes that do happen often show the need for more objects.
Changes lead to new objects which do that one new thing.
µObjects don't like to have the same thing done in multiple places; it's a huge smell and should get refactored into a new µObject.
µObject with redundancy is doing too much
If there are changes happening in a class the question must be asked, "Is this doing too much?"
Normally the answer is yes. The only subsequent change should be to refactor out behavior into a new µObject that is a new dependency. This oft leads to new refactors of encapsulating relationships. The relationship does one thing between the two objects. It exposes one behavior for the consumer.
Take whatever apparent redundant code and create a new object for it. With µObjects, repeated code is a smell; not redundancy, but if it exists in two places; it should be encapsulated in a µObject.
µObject redundancy is clear
µObjects stand out as odd when they have redundancy. They will be doing multiple things; not the one thing they should be doing.
Multiline methods are a smell. Not always reducable; but a smell. 7 lines? I expect 3 objects, or more, from that refactor.
There are not a lot of ways to do the one thing. There are a lot of ways to do something when you can do a lot of things in an object. With µObjects; you only do one thing. There's not a lot of ways to do the same one thing. Redundant code becomes very obvious.
Redundant code becomes very clear when you see an object being used the same way in multiple places instead of being asked to do something.
µObjects do one thing
As has been said a few times, µObjects do one thing. If it does two things; that's a violation of the µObject development practices.
What this leads to one of those things never becoming its own µObject and having to be re-written elsewhere, many times.
Taking a refactor from my Pizza Shop example.
I had a class to replace the last instance of a string. This was used to join a list and replace the last "," with "and".
public class ReplaceLastOfText : IText { private const int NotFound = -1; private readonly IText _source; private readonly IText _target; private readonly IText _replace; public ReplaceLastOfText(IText source, IText target, IText replace) { _source = source; _target = target; _replace = replace; } public string String() { string source = _source.String(); string target = _target.String(); int place = source.LastIndexOf(target, StringComparison.Ordinal); return place == NotFound ? source : source.Remove(place, target.Length).Insert(place, _replace.String()); } }
The string method is doing a few things. This is not an unreasonable class for procedural programming style; or most developers.
If I ever had to
Remove or
Insert; I need to write that again. I need to do transformations, get values... There's a lot that goes into being able to remove and insert.
This class is not doing one thing. It does a lot.
This same behavior can be achieved by creating and combining a lot of classes doing a single thing; a lot of µObjects.
public class ReplaceLastOfText : IText { private readonly IText _origin; public ReplaceLastOfText(IText source, IText target, IText replace) : this(new InsertText(new RemoveText(source, target), new LastIndexOf(source, target), replace)) { } public ReplaceLastOfText(IText text) => _origin = text; public string String() => _origin.String(); }
Here's the refactored class; achieving the same behavior. It does one thing. In this case; it knows how to combine the µObjects to do accomplish that one thing.
The
InsertText,
RemoveText, and
LastIndexOf classes are available at the github links.
RemoveText uses another class
LengthOf. It also uses
LastIndexOf.
RemoveText relies on information about the last index and length of inputs. As an object; I don't want to rely on someone telling me the correct data... Let me do that myself. I then have no concerns about the validity of the results. Were they correct to the target provided? Using the Dependency Constructor allows us to remove any of those questions from out µObject.
It has one job; Remove Text. It doesn't also have to do input validation.
I've covered it a bit in No Nulls; Input validation is removed for all but the edges of our application.
We don't want to introduce uncertainity and add redundancy by having to check inputs.
That'd be a hard redundancy to refactor away - We've designed it away.
µObjects do one thing - There won't be redundant code.
Summary
The simplest way for µObjects to maintain nonredundancy is to do one thing. If two things do the same thing; they will change for the same reason; make them one thing. | https://quinngil.com/2018/02/18/uobjects-being-nonredundant/ | CC-MAIN-2018-47 | refinedweb | 1,228 | 67.45 |
table of contents
NAME¶
memfill - fill memory area with pattern
SYNOPSIS¶
#include <publib.h> void *memfill(void *buf, size_t size, const void *pat, size_t patsize);
DESCRIPTION¶
memfill copies consecutive bytes from the pattern pat to consecutive bytes in the memory area buf, wrapping around in pat when its end is reached. patsize is the size of the pattern, size is the size of the memory area. The pattern and the memory area must not be overlapping.
RETURN VALUE¶
memfill returns its first argument.
EXAMPLE¶
To initialize an integer array one might do the following.
int temp, array[1024]; temp = 1234; memfill(array, sizeof(array), &temp, sizeof(temp));
SEE ALSO¶
AUTHOR¶
Lars Wirzenius (lars.wirzenius@helsinki.fi) | https://manpages.debian.org/testing/publib-dev/memfill.3pub.en.html | CC-MAIN-2022-05 | refinedweb | 116 | 55.95 |
Created on 2008-02-20 21:09 by zanella, last changed 2008-02-23 19:56 by benjamin.peterson. This issue is now closed.
Queue.Queue(), accepts any value as the maxsize, example:
foo = Queue.Queue('j');
l = []; foo = Queue.Queue(l);
...
Shouldn't the value passed be checked on init :
isinstance(maxsize, int) ?
This is probably a matter of style. For the most part, library code
avoids isinstance() checks and lets the errors surface downstream.
If a check gets added, we should probably also check that the value is
non-negative. The checks should not be backported because it could
break Queue subclasses that rely on being able to pass in a non-int
value (like None or like a dynamic object that allows the maxsize to be
increased during the run).
I'm unclear why this got classified as a security issue rather than just
an RFE for Py2.6.
Is there a problem with a dynamic object that changes the maxsize? I
think it might be better to ignore it, and let the client get exceptions
when their maxsize is compared.
Firts: the security type was my error.
The method wich uses the maxsize:
"""
# Check whether the queue is full
def _full(self):
return self.maxsize > 0 and len(self.queue) == self.maxsize
"""
@rhettinger: As per the documentation, negative values result on an
infinite Queue; well that AND will never be fulfilled with a negative
value anyway;
@gutworth: What I mean is that's "awkward", if you put an string for
example, it'll be the size of the string wich will be used on the
__cmp__ and on len(), but that's not explicit, or is it?
Example:
[zan@tails ~]$ python
Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class C:
... def __init__(self): pass;
...
>>> c = C()
>>> import Queue
>>> a = Queue.Queue(c)
>>> len(c)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: C instance has no attribute '__len__'
>>> a = Queue.Queue(c)
>>> a.put('q')
>>> a.get()
'q'
>>> a.put(1)
>>> a.put(2)
>>> a.put(3)
>>>
Rafael, I agree that it's awkward, and I'm not against restricting the
maxsize to just something sane. However, I'm worried this (Raymond's
patch) will be too restrictive by not allowing dynamic changing of
maxsize. (Of course, you could just change the maxsize attribute of the
Queue, but that would require holding the mutex, too.)
What about requiring maxsize to be convertible to an int?
This would allow dynamic objects, if they define an __int__ method.
I join a patch.
I like it.
@gutworth: Since one of the main uses of Queue is with threads, I think
it *really* should acquire the mutex before changing the maxsize;
@amaury.forgeotdarc: Your patch makes the point of allowing the size to
be changed at some other place (e.g.: an attribute of an instance passed
as the maxsize), but as stated above I think (am not an expert) the
mutex really should be held before the maxsize change, another point:
using an instance on your patch a call to Queue.maxsize would return
something like:
"""
Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class C:
... def __init(self): self.n = 1;
... def __int__(self): return int(self.n);
...
>>> c = C()
>>> import Queue
>>> q = Queue.Queue(c)
>>> q.maxsize
<__main__.C instance at 0xb7c341ac>
>>>
"""
wich would force to have a get() to the maxsize attribute;
I have added a diff;
Mine patch doesn't address the "hold the mutex before changing the
maxsize" guess it would then force a get()?
It's probably best to close this as "won't fix". Each of the patches
limits the module or complicates it a bit. I'm not sure there's even a
real problem here. My preference is to leave this code untouched.
Maybe, we should change change the constructor to use self.maxsize =
int(maxsize). Then we can provide a set_maxsize method that will acquire
the mutex and preform the change. This is will restrict the type of
maxsize and allow for easy dynamic changing.
Recommend closing this. No need to muck-up a clean module to solve a
non-problem.
Raymond, are you referring to the int checking, my new method, or both?
Both. There is no issue here worth adding a new method.
Ok. I agree that we shouldn't muddy the waters of Queue by checking for
int. (set_maxsize would be unneeded) Go ahead and close.
For what it's worth, I do think this is an issue. As it currently
stands, not only does the module silently accept invalid values, but the
mutex issue exists (and is also silently ignored) if an object returning
dynamic values is passed as maxsize. IMHO, the waters are already muddy,
it's just that the mud is blue so everything seems alright :)
I think zanella's patch is the way to go, even if it requires adding a
setter method.
Many places in the stdlib accept values which are not valid. I believe
this is because the library trusts you to do the right thing in the name
of performance and cleaner, simpler code. IMO, adding a set_maxsize
method wouldn't be a sin, but Raymond (who is I'm sure wiser than me)
disagrees.
Just to exemplify:
"""
from threading import Thread
import time
import Queue
class C:
def __int__(self):
return 3
#def __del__(self): print "collected..." # won't happen since q holds
a reference to it
c = C()
q = Queue.Queue(c)
# Not dynamic
print "maxsize: ", q.maxsize
# Not full() with instance
print c > 0
print len(q.queue) == q.maxsize
class T(Thread):
def __init__(self, q):
self._q = q
Thread.__init__(self)
def run(self):
#For sme bizarre motive
self._q.maxsize = 5
#Ends up being infinite most of the times
t = T(q)
for i in xrange(1000):
q.put_nowait(i)
if i == 1: # otherwise the "and len(self.queue) == self.maxsize" will fail
t.start()
time.sleep(1)
t.join()
"""
I guess rhettinger is right, there's no issue here, anyone that decides
to change the maxsize afterwards should know what is doing.
The only "possible" problem I'm able to see is someone passing an object
wich has __int__() and expecting it to be used.
> The only "possible" problem I'm able to see is someone passing an object
wich has __int__() and expecting it to be used.
They should be explicit and say Queue(int(object_with__int__)). | http://bugs.python.org/issue2149 | crawl-003 | refinedweb | 1,132 | 74.79 |
Anyway, I did solved my problems regarding last post.
But now i get some errors when I try to compile my program in "Relese Mode". In "Debug Mode" all went fine, but when I try to do it in "Relese" i get these errors :
[code:157s74zm]
Error 1 error LNK2001: unresolved external symbol _FSOUND_Init@12 main.obj
Error 2 error LNK2001: unresolved external symbol _FMUSIC_PlaySong@4 main.obj
Error 3 error LNK2001: unresolved external symbol _FMUSIC_LoadSong@4 main.obj
Error 4 fatal error LNK1120: 3 unresolved externals D:\My Documents\Visual Studio 2005\Projects\New\Release\New.exe 1
[/code:157s74zm]
I’m using Microsoft Visual Studio 2005 SP2 – and i did link fmodvc.lib in my project.
This is the code
[code:157s74zm]
include <iostream>
include <fmod.h>
using namespace std;
int main () {
cout << " Playing GC.it . . . . " << endl;
FMUSIC_MODULE *pjesma = NULL;
FSOUND_Init (44000,32,0);
pjesma = FMUSIC_LoadSong ("GC.it");
FMUSIC_PlaySong(pjesma);
system("pause");
return 0;
}
[/code:157s74zm]
Thanks!!!
- ReiKo asked 11 years ago
- You must login to post comments
You often have to specify the link libs in both release and debug (it allows you to link different libs in each build), check that just in case.
- a1psx answered 11 years ago
Yup, that’s it. Thanks.
- ReiKo answered 11 years ago | http://www.fmod.org/questions/question/forum-25047/ | CC-MAIN-2018-34 | refinedweb | 213 | 67.35 |
There are many ways to communicate from one form to another in a Windows
Forms .NET application. However, when using MDI (Multiple Document Interface)
mode, you may often find that the wiring up of simple events may better
suit your needs. This short article illustrates one of the simplest techniques
for having a Child MDI form call back to its parent Container Form and
even how to pass an object instance (which could contain an entire class,
if you like) back to the Parent.
Fire up Visual Studio .NET and create a new Windows Application project.
In the Properties window with your Default Form1 selected, change the
IsMDIContainer property to "true". Add a Panel at the top, and add a
button and a label to it, with the Panel taking up, say, the top 20%
of the form's designer surface.
Now add a new form to the project.
Let's call it MDIChild. On the Kid form, add a button which we will
use to call the event to the Daddy form and also close the Kid.
At this point, your Daddy form should look something like this:
Now lets wire up our event and event handlers, starting with the Kid:
First we need a custom EventArgs derived class so we can pass the
information we need:
using
namespace
In our Codebehind for the Kid form, lets add our event and eventhandler,
and make the call in our Button1_Click event:
// declare the EventHandler
}
Note that we are passing "Howdy Pop" in the
EventArgs parameter. This can really be any business
logic you want. If your Kid form created a DataSet that you needed for
the Daddy to receive, you would plug it in here. All Daddy would need
to know is that he is expecting a DataSet from the Kid in the EventArts
parameter.
Now, in our Daddy Form, we are going to wire up everything
we need to show the kid and also to receive his event message:
private void button1_Click(object sender, System.EventArgs
e)
{
// declare and instantiate the Kid
MDIChild chForm = new MDIChild();
//set parent form for the child window
chForm.MdiParent=this;
// make it fill the container
chForm.WindowState=FormWindowState.Maximized;
// add the event handler we wired up
chForm.NotifyDaddy+=new EventHandler(chForm_NotifyDaddy);
// show the Kid
chForm.Show();
}
private
Note that we create chForm.NotifyDaddy and add chForm_NotifyDaddy
to the Delegates. Then in the handler itself, we write out the message
including the value of the sender (in this case it is just some text).
Your result when the Kid form is closed should look something like this:
And that's all it takes to teach all your kids to call home to Daddy!
The full solution is ready to go at the link below.
Download the Source Code that accompanies this article
Articles
Submit Article
Message Board
Software Downloads
Videos
Rant & Rave | http://www.eggheadcafe.com/articles/20040229.asp | crawl-002 | refinedweb | 478 | 69.62 |
/************************************************* * _ # ifdef HAVE_CONFIG_H # include "config.h" # endif # include "pcre_internal.h" #endif /************************************************* * Create PCRE character tables * *************************************************/ /* This function builds a set of character tables for use by PCRE and returns a pointer to them. They are build using the ctype functions, and consequently their contents will depend upon the current locale setting. When compiled as part of the library, the store is obtained via pcre_malloc(), but when compiled inside dftables, use malloc(). Arguments: none Returns: pointer to the contiguous block of data */ const unsigned char * pcre_maketables(void) { unsigned char *yield, *p; int i; #ifndef DFTABLES yield = (unsigned char*)(pcre_malloc)(tables_length); #else yield = (unsigned char*)malloc(tables_length); #endif if (yield == NULL) return NULL; p = yield; /* First comes the lower casing table */ for (i = 0; i < 256; i++) *p++ = tolower(i); /* Next the case-flipping table */ for (i = 0; i < 256; i++) *p++ = islower(i)? toupper(i) : tolower(i); /* Then the character class tables. Don't try to be clever and save effort on exclusive ones - in some locales things may be different.)) p[cbit_word + i/8] |= 1 << (i&7); if (i == '_') p[cbit_word + i/8] |= 1 << (i&7); if (isspace(i)) p[cbit_space + i/8] |= 1 << (i&7); if (isxdigit(i))p[cbit_xdigit + i/8] |= 1 << (i&7); if (isgraph(i)) p[cbit_graph + i/8] |= 1 << (i&7); if (isprint(i)) p[cbit_print + i/8] |= 1 << (i&7); if (ispunct(i)) p[cbit_punct + i/8] |= 1 << (i&7); if (iscntrl(i)) p[cbit_cntrl + i/8] |= 1 << (i&7); } p += cbit_length; /* Finally, the character type table. */ | http://opensource.apple.com/source/pcre/pcre-6/pcre/pcre_maketables.c | CC-MAIN-2013-20 | refinedweb | 254 | 56.29 |
Parallel Programming in C# 4.0 using Visual Studio 2010
March 24, 2011
The .NET Framework in Visual Studio 2010 has been enhanced further, and the Visual Studio IDE itself has been overhauled a bit. Well, I'm not going to give you a list of ALL the features – they have been blogged about already around the world. Better Google it or Bing it with “VS2010+Features”
However, a few notable features that caught my eye are “Parallel Programming”, “F# – Functional Programming”, “Velocity – Distributed Caching”, “Azure Tools” and, most important of all, the evolving Team System.
But I first wanted to dirty my hands with Parallel Computing, because if you are a computer science student – well, you would be more excited about this than others.
Though I cannot explain all the nitty-gritty of parallel programming, I will try to explain it in LAYMAN terms.
Well, during the Stone Age [!] – most of the computers in the world had only ONE processor, except those big beasty servers which were always locked up in rooms with high security (well, usually *nix or Solaris servers) – these beasty servers used to run most of the corporations. These servers had multiple processors, and it took huge effort to write software for them and manage them.
Welcome to the modern world – every household computer and every laptop being sold these days has at least two or more processors.
Now – that has posed us a BIG question: hardware has evolved, but has our software evolved to execute on multiple processors? – The answer is NO. At least not in the mainstream programming world. Let's say, for example, what would happen
- If we execute a simple FOR Loop
- That would call a service (that takes a longer time)
- … and execute sequentially for N Times
On a single processor this is acceptable and we might use threads to increase the efficiency.
Is this still acceptable on multiple processors? The answer is no. Fine, but how do we get efficiency without the hurdles of running and managing too many threads? Shouldn't there be an easier way out for this?
Alrighty, without much ado, let me show you how easy(!) this is, with a little insight on what happens behind the scenes. Let's churn out some quick code here based on the same questions we have. Let us say there is a real long process (well, it could be about counting the stars in the universe, huh) and let us say you want to do this N times.
In our quest to count all the stars in the universe, let's first create a data structure for the star and add it to the universe, and let us use the good ol' mother of all loops, the “FOR” loop, and see how inefficient this loop has become these modern days!!
“The Sequential execution took almost 30 seconds in my Dual Core Computer.”
And here is the Parallel Computing version of the same method. Yes, the for loop has been replaced with Parallel.For, a new entry in the System.Threading.Tasks namespace.
How much simpler can this get?
VOILA! The Parallel execution took Just 3 Seconds in my Dual Core Computer.
Well, That’s a significant performance improvement without Hardware Scale-out or Scale-up, all we are doing is using the existing hardware resource efficiently. So much to a FOR Loop J, Huh. 30 Seconds of execution have become 3 seconds instantly. Look closer to the screenshot – the stars are not counted sequentially, instead it allocates the task to the available CPU in parallel.
Because the loop is run in parallel, each iteration is scheduled and run individually on whatever core is available. This means that the list is not necessarily processed in order, which can drastically impact your code. You should design your code so that each iteration of the loop is completely independent from the others. Any single iteration should not rely on another in order to complete correctly.
Let us catch up more on the insights soon on next part of the same series…
source : | https://alamzyah.wordpress.com/2011/03/24/parallel-programming-in-c-4-0-using-visual-studio-2010/ | CC-MAIN-2019-04 | refinedweb | 697 | 71.04 |
The importance of extremes
Tuesday 18 December 2012 21:01
When exploring unfamiliar ideas, the best approach is often to take them to the extreme. For instance, suppose you're trying to follow the principle "tell, don't ask". I've often found it tricky to know where to draw the line, but as an exercise, try writing your code without a single getter or setter. This may seem ludicrous, but by throwing pragmatism completely out the window, you're forced to move outside your comfort zone. While some of the code might be awful, some of it might present ideas in a new way.
As an example, suppose I have two coordinates which represent the top-left and bottom-right corners of a rectangle, and I want to iterate through every integer coordinate in that rectangle. My first thought might be:
def find_coordinates_in_rectangle(top_left, bottom_right):
    for x in range(top_left.x - 1, bottom_right.x + 2):
        for y in range(top_left.y - 1, bottom_right.y + 2):
            yield Coordinate(x, y)
Normally, I might be perfectly happy with this code (although there is a bit of duplication!) But if we've forbidden getters or setters, then we can't retrieve the x and y values from each coordinate. Instead, we can write something like:
def find_coordinates_in_rectangle(top_left, bottom_right):
    return top_left.all_coordinates_in_rectangle_to(bottom_right)
The method name needs a bit more thought, but the important difference is that we've moved some of the knowledge of our coordinate system into the actual coordinate class. Whether or not this turns out to be a good idea, it's food for thought that we might not have come across without such a severe constraint as "no getters or setters".
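To make that concrete, here is one sketch of what the Coordinate class itself might look like — the underscore fields and the off-by-one expansion are assumptions carried over from the first snippet, not code from the original post:

```python
class Coordinate:
    def __init__(self, x, y):
        # No getters or setters: the components stay private details.
        self._x = x
        self._y = y

    def all_coordinates_in_rectangle_to(self, bottom_right):
        # The knowledge of how to walk the rectangle now lives with the
        # coordinates themselves; same-class access to bottom_right's
        # fields is the one concession, much like a typical equals().
        for x in range(self._x - 1, bottom_right._x + 2):
            for y in range(self._y - 1, bottom_right._y + 2):
                yield Coordinate(x, y)
```

Whether this is better is debatable — which is exactly the point of the exercise: the constraint forces the question of where that knowledge belongs.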
Topics: Software design | http://mike.zwobble.org/2012/12/the-important-of-extremes/ | CC-MAIN-2018-17 | refinedweb | 286 | 63.19 |
The bug tracker for setuptools 0.7 or higher is on BitBucket.
Created on 2011-02-08.13:32:34 by yegor256, last changed 2011-03-23.20:42:50 by pje.
Version with this fix is now available at
Thanks.
yes, right
So, you tried making that change and it actually works, then?
Yes, this is exactly what I was looking for!
Try replacing the _download_svn routine in the setuptools.package_index module with this code, and let me know if it does what you want:
def _download_svn(self, url, filename):
    url = url.split('#',1)[0]   # remove any fragment for svn's sake
    creds = ''
    if url.lower().startswith('svn:') and '@' in url:
        scheme, netloc, path, p, q, f = urlparse.urlparse(url)
        if not netloc and path.startswith('//') and '/' in path[2:]:
            netloc, path = path[2:].split('/',1)
            auth, host = urllib.splituser(netloc)
            if auth:
                if ':' in auth:
                    user, pw = auth.split(':',1)
                    creds = " --username=%s --password=%s" % (user, pw)
                else:
                    creds = " --username="+auth
                netloc = host
                url = urlparse.urlunparse((scheme, netloc, url, p, q, f))
    self.info("Doing subversion checkout from %s to %s", url, filename)
    os.system("svn checkout%s -q %s %s" % (creds, url, filename))
    return filename
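As a standalone illustration of that credential-splitting step — written against Python 3's urllib.parse with a made-up function name, not the actual setuptools code:

```python
from urllib.parse import urlparse, urlunparse

def split_svn_credentials(url):
    """Split user:password out of an svn:// URL; return (clean_url, creds)."""
    creds = ''
    if url.lower().startswith('svn:') and '@' in url:
        scheme, netloc, path, p, q, f = urlparse(url)
        auth, _, host = netloc.rpartition('@')
        if auth:
            if ':' in auth:
                user, pw = auth.split(':', 1)
                creds = " --username=%s --password=%s" % (user, pw)
            else:
                creds = " --username=" + auth
            # Rebuild the URL without the user:password part.
            url = urlunparse((scheme, host, path, p, q, f))
    return url, creds

url, creds = split_svn_credentials(
    "svn://user:secret@svn.example.com/repo/trunk/my-egg")
print("svn checkout%s -q %s" % (creds, url))
# svn checkout --username=user --password=secret -q svn://svn.example.com/repo/trunk/my-egg
```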
Sure, sorry for miscommunication, my fault. The command line looks like this:
svn checkout svn://svn.example.com/repo/trunk/my-egg --username=user --password=secret
I'm not clear on what you're saying. The credentials are already in the URL, are they not? So easy_install doesn't need any extra options.
The question I've been asking is, how do you pass these credentials to the "svn" command line? Can you give me an example of an "svn checkout" command line that does what you want?
How about adding additional options for easy_install? Something like:
easy_install --svn-user=user --svn-password=secret svn://svn.example.com/repo/trunk/my-egg
Looks good for me, what do you think?
That's what I'm asking you. If "svn" does not handle passwords in an SVN URL, there is no way that easy_install can change that. If you know a way to make svn handle it, then easy_install could perhaps be changed.
If there is no such way, however, then easy_install cannot do this, and you will have to enter the password when prompted. (Another alternative is to serve your Subversion repository using HTTP or SSH instead of the svn:// protocol.)
Can you please give an example? How should it work?
If you want this to change, you'll need to supply a command line that you can type to make svn NOT prompt for the password. Easy_install relies on the "svn" command to do the actual checkout.
It prompts for the password. Everything works fine when it is http:// protocol, instead of svn://
Subversion 1.6.
If you do "svn co
svn://user:secret@svn.example.com/repo/trunk/my-egg", does that work,
or does it prompt you for the password?
My guess is that svn itself is the problem here, because easy_install
does not remove password information from the URL, it just passes the
URL as-is to the "svn" command.://` | http://bugs.python.org/setuptools/issue121 | CC-MAIN-2015-18 | refinedweb | 533 | 77.64 |
Or if you want to automate what Ian's written in the blog, you can do something like this in PowerShell.
$computer = "LocalHost"
$namespace = "root\ccm\StateMsg"
Get-WmiObject -class CCM_StateMsg -computername $computer -namespace $namespace | Where-Object {$_.TopicType -eq "500" –and $_.StateID -eq "0"}
I have 2000+ clients with this status; should I do this on each and every client? Is there a better way to move these clients from status unknown to something else?
Hello, we have a server where it won't update. I have pushed the SCCM client to the server a couple of times and the ccmsetup log does not move. So I opened up the statemessage.log and I see the following error: Cstatemessage::updatemsg – failed to open statemsg namespace; state message (state id: 2) with topictype 1300 and topicid 2 has been recorded for the system. So I started up wbemtest and used "rootccmStateMsg". I get the following error:
number : 0x8004100e
facility ; WMI
Description:invalid namespace
Do I need to rebuild WMI?
The namespace should have \ in there: root\ccm\StateMsg (as per the WMI query box in the screenshots)
Regards
John
@Goce Dimitroski Yes, remove the SCCM client, then in PowerShell use gwmi -query "SELECT * FROM __Namespace WHERE Name='CCM'" -Namespace "root" | Remove-WmiObject # to remove the WMI CCM namespace, and finally reboot the system. Then install the SCCM client and allow the system to process everything for at least a few hours. Keep an eye on C:\Windows\ccmsetup\Logs\client.msi.log during the install.
resetty, savetty, getsyx, setsyx, ripoffline, curs_set, napms - low-level curses routines
SYNOPSIS
#include <curses.h>
int resetty(void);
int savetty(void);
void getsyx(int y, int x);
void setsyx(int y, int x);
int ripoffline(int line, int (*init)(WINDOW *, int));
int curs_set(int visibility);
int napms(int ms);
DESCRIPTION
The getsyx routine returns the current coordinates of the virtual screen cursor in y and x. If leaveok is currently TRUE, then -1,-1 is returned. The setsyx routine sets the virtual screen cursor to y, x.
The ripoffline routine provides access to the same facility that slk_init [see slk(3NCURSES)] uses to reduce the size of the screen. ripoffline must be called before initscr or newterm is called. If line is positive, a line is removed from the top of stdscr; if negative, a line is removed from the bottom.
The curs_set routine sets the cursor state to invisible, normal, or very visible for visibility equal to 0, 1 or 2 respectively. The napms routine is used to sleep for ms milliseconds.
PORTABILITY
The functions setsyx and getsyx are not described in the XSI Curses
standard, Issue 4. All other functions are as described in XSI Curses.
The SVr4 documentation describes setsyx and getsyx as having return
type int. This is misleading, as they are macros with no documented semantics for the return value.
SEE ALSO
ncurses(3NCURSES), initscr(3NCURSES), outopts(3NCURSES), refresh(3NCURSES), scr_dump(3NCURSES), slk(3NCURSES)
In my current project I have need for quite a bit of logging. I've never used log4net before, so I thought it was about time. Log4net has been around for ages, is very well documented in the log4net documentation, and is explained very nicely in e.g. Jim Christopher's log4net tutorials. What I plan to show you here are my findings in setting up my logging solution. What I wanted was a couple of different log files for different severity levels, and here's my story of how I did it. I'm sure you can do more clever tricks than what I've done here; please contact me or share your knowledge in the comments if you have tips or thoughts on improvements.
Summary
I used…
- FileAppender+MinimalLock to be able to read the log file while the system is running
- three Appenders one each for minimum LogLevel: debug, info and warning
- one logger per class – gave me a readable “tag” on each log item, but filtering problems with nhibernate
- a LoggerMatchFilter for nhibernate with AcceptOnMatch=false to reject messages
Getting started
To get any logging at all, you need to create a logger and configure it. The code for doing this and writing something to the log looks like this:
log4net.Config.XmlConfigurator.Configure();
var log = log4net.LogManager.GetLogger(this.GetType());
log.Debug("debug message");
(Yes, it’s not a typo – the namespace is lowercase, I know it’s hard, by try to ignore that…)
Those few lines of course require configuration; the app/web.config is a good place for that, so let's go there now. The config needs three parts: a configSections entry, a log4net appender and a log4net root directive. See for yourself:
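A sketch of what those three parts might look like — the appender name, file path, size and pattern here are made up, so check the log4net documentation for the exact options in your version:

```xml
<configuration>
  <configSections>
    <section name="log4net"
             type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net>
    <appender name="DebugFile" type="log4net.Appender.RollingFileAppender">
      <file value="Logs\debug.log" />
      <appendToFile value="true" />
      <rollingStyle value="Composite" />
      <datePattern value=".yyyyMMdd" />
      <maximumFileSize value="5MB" />
      <staticLogFileName value="true" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date{HH:mm:ss,fff} [%thread] %-5level %logger - %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="DebugFile" />
    </root>
  </log4net>
</configuration>
```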
Appender
The appender entry represents a consumer of the logging information. Here I use a RollingFileAppender which rotates the log file periodically, as specified by the datePattern. MaximumFileSize is also honored and breaking that limit also rolls the log file over.
The logging in my project is done from a WCF service with HTTP endpoints. I have testing endpoints (e.g. /log/debug) that just show the latest log file – very handy. To be able to read from the log file while the service is running, the lockingModel has to be set to FileAppender+MinimalLock, which tells log4net to lock the log file as little as possible, instead of all the time, which is the default. This will create a performance hit for the logging, but that's ok for my setup. Specifying a staticLogFileName causes log4net to always log to a specific file, and then rename older log files according to date etc. This allows my hand-coded log reader to always know where the latest log info is.
Root
The root section is what connects a log consumer (the appender) with a log producer (the ILog in the code). The root section can be replaced by specifying a specific logger (
Filtering (and getting rid of nhibernate output)
As you probably know, logs can either say to little, just about right, or WAY too much – that’s where the power of filtering comes in. LogLevel is the most natural and direct thing to filter on. In my service, I have three different log files all set to different levels; one for everything, one for at least Info and one for at least Warning. This is done by having three different appender entries and three appender rows in the root entry. This is where I started bumping into problems, read on…
One logger per class
I followed the advice from Jim’s log4net recommended practices to create one logger per class – not to use one logger for the entire application. The theory behind this is to have a tag on each log entry allowing you to have a log that “reads like a novel”. I like the idea and it’s very nice to always know what class issued the statement. Here’s an example of log output:
10:24:57,830 [10] INFO Jayway.ProjectX.Service - ping called 10:24:57,833 [10] WARN Jayway.ProjectX.Proxy - ping called 10:28:03,084 [10] INFO Jayway.ProjectX.RequestCreator - creating request 10:28:03,095 [10] INFO Jayway.ProjectX.Proxy - sending request 10:28:03,097 [10] WARN Jayway.ProjectX.Proxy - could not reach host 10:28:03,907 [10] INFO Jayway.Common.Cache - getting items from cache 10:28:03,910 [10] INFO Jayway.ProjectX.Proxy - returning cache items 10:28:28,816 [10] INFO Jayway.ProjectX.Service - returning 100 items
I made this happen by using a logFactory that I inject into each class, instead of a log. (I’m not using an IoC container, was overkill for this project). The web service class looks like this:
private readonly ILogFactory _logFactory; private readonly ILog _log; public WebService() { _logFactory = new LogFactoryLog4Net(); _log = _logFactory.Create(this); }
whilst one of the classes begins with this:
private readonly ILog _log; public Proxy(ILogFactory logFactory) { _log = logFactory.Create(this); }
Using this technique, it would be very easy to temporarily add an appender for just a part of your system – giving you awesome control of the logging output.
nhibernate
NHibernate also uses log4net – and since I don’t have one logger but one per class, I have to use the “root” logger directive. The root logger funnel everything sent to the log4net system. This caused me to get all the nhibernate debug info in my logs. This was not what I wanted.
Inverted filtering
To get the kind of filtering output I wanted I had to use two kinds of filters; the LoggerMatchFilter and the LevelRangeFilter. Filtering in general in log4net works so that if you hit a filter that matches, it breaks the filtering chain and returns. This took me a while to figure out, I thought it just acted as a filter that could be chained, so that I could specify more and more filters to get more and more fine-grained output. But, back to the code:
As you can see, I use the LoggerMatchFilter inverted. So, when it hits an item that matches “NHibernate” it will immediately reject it. If it does not match, it will continue down the chain and check the level. I figured this out by reading Matthew Daugherty’s log4net tutorial, highly recommended.
This Post Has 9 Comments
Here’s a nice snippet you can use in Visual Studio for creating static logging instances in your classes:
Just install the snippet and then type “log4net” and press tab twice.
I would recommend against a filter setup. It is much more efficient to have a logger element for nHibernate and configure it to Warn level.
With the filter approach all off nHibernate’s logging will be created and then rejected by your filter (bad for performance).
With a logger config only the statements with warn or above will be generated by nHibernate.
@Konstantin thank you for the tip! I haven’t really grasped the entire log4net system so this kind of input is perfect. But I need a “root” logger to get output from all my loggers right? With “
sorry, the other reply got clipped (I wrote it on my cell phone).
Nissim levy
14 Mar 2012
-
Andreas Hammar
14 Mar 2012
-
Ryan
15 Oct 2013
-
Andreas Hammar
16 Oct 2013
-
Ashan Ratnayake
12 Apr 2016
the last part of the comment:
I need a root logger right? With a.
Hi Nissim,
I’m sorry but I’m not that deep into log4net to be able to answer your question.
My best bet would be to ask stackoverflow. This question seems to be pretty close to what you’re trying to do:
Good luck!
The RollingFileAppender code does not write to the file.
Probably a flush thing with a newer version, check the docs.
thanks for sharing your knowledge :) | https://blog.jayway.com/2011/06/13/a-nice-basic-log4net-setup/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+jayway%2Fposts+%28Jayway+Team+Blog+-+Posts%29 | CC-MAIN-2021-25 | refinedweb | 1,311 | 62.38 |
table of contents
NAME¶libzeep - A C++ library for XML parsing, XPath, SOAP servers and web apps
SYNOPSIS¶#include <zeep/server.hpp>
class my_server : public zeep::server
{ public: my_server(const char* addr, short port); void sum(int a, int b, int& c) { c = a + b; }
};
my_server::my_server(const char* addr, short port)
: zeep::server("http//...", addr, port)
{ const char kSumParameterNames[] = { "a", "b", "c" }; register_action("sum", this, &my_server::sum, kSumParameterNames);
}
...
int main()
{ my_server server("0.0.0.0", 10333); boost::thread t(boost::bind(&my_server::run, &server)); // and wait for a signal to stop using e.g. sigwait(3) ...
}
NOTE¶See HTML pages for more up-to-date documentation. E.g. at
DESCRIPTION¶Using libzeep you can create a SOAP server by deriving from zeep::server. In the constructor of your server, you call the base class constructor with three arguments: ns, a namespace for your SOAP server, address, the address to listen to, usually "0.0.0.0" to listen to all available addresses. And port, the port number to bind to.
SOAP actions are simply members of the server object and are registered using the register_action member function of the zeep::server base class. After initializing the server object, the run member is called and the server then starts listening to the address and port specified.
The resulting web service application will process incoming request. There are three kinds of requests, the server can return an automatically generated WSDL, it can process standard SOAP message send in SOAP envelopes and it can handle REST style requests which are mapped to corresponding SOAP messages internally.
The signature of the registered actions are used to generate all the code needed to serialize and deserialize SOAP envelopes and to create a corresponding WSDL file. The signature can be as simple as the example above but can also be as complex as in this one:
void myAction(
const std::vector<MyStructIn>& input,
MyStructOut& output);
In order to make this work, you have to notify the library of the mapping of your structure type to a name using the macro SOAP_XML_SET_STRUCT_NAME like this:
SOAP_XML_SET_STRUCT_NAME(MyStructIn);
SOAP_XML_SET_STRUCT_NAME(MyStructOut);
Next to this, you have to provide a way to serialize and deserialize your structure. For this, libzeep uses the same mechanism as the Boost::serialize library, which means you have to add a templated member function called serialize to your structure. The result will look like this:
struct MyStructIn { string myField1; int myField2;
template<class Archive>
void serialize(Archive& ar, const unsigned int version)
{
ar & BOOST_SERIALIZATION_NVP(myField1)
& BOOST_SERIALIZATION_NVP(myField2);
}
};
Similarly you can use enum's in an action signature or as structure member variables. Again we need to tell the library the type name for the enum and the possible enum values. We do this using the SOAP_XML_ADD_ENUM macro, like this:
enum MyEnum { "one", "two" }; SOAP_XML_ADD_ENUM(myEnum, one); SOAP_XML_ADD_ENUM(myEnum, two);
As shown above, you can also use std::vector containers in the signature of actions. Support for other STL containers is not implemented yet.
If the address used by clients of your server is different from the address of your local machine (which can happen if you're behind a reverse proxy e.g.) you can specify the location using the set_location member function of zeep::server. The specified address will then be used in the WSDL file. | https://manpages.debian.org/experimental/libzeep-dev/libzeep.3.en.html | CC-MAIN-2021-31 | refinedweb | 553 | 50.16 |
NAME
sigwait - select a set of signals
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <signal.h> int sigwait(const sigset_t * restrict set, int * restrict sig);
DESCRIPTION
The sigwait() system call selects a set of signals, specified by set. If none of the selected signals are pending, sigwait() waits until one or more of the selected signals has been generated. Then sigwait() atomically clears one of the selected signals from the set of pending signals for the process and sets the location pointed to by sig to the signal number that was cleared. The signals specified by set should be blocked at the time of the call to sigwait(). If more than one thread is using sigwait() to wait for the same signal, no more than one of these threads will”). | http://manpages.ubuntu.com/manpages/jaunty/man2/sigwait.2freebsd.html | CC-MAIN-2015-27 | refinedweb | 131 | 66.57 |
HTTP status codes are how the web returns errors from the server to the web browser. If there is an error from your upstream website, the error is simply forwarded by the Moovweb servers to the browser. If there is an error from the Moovweb servers (either in the Moovweb Cloud or in your Moovweb project) Moovweb generates a 53x HTTP status code. When users encounter HTTP errors most often the error comes directly from your upstream host. If the HTTP status code appears in the table below, then the error came from the Moovweb Cloud. The Moovweb error can be in your project code or it can come from Moovweb servers. Refer to the table below for more details on the type and source of the error.
When running a Moovweb project locally, errors in code are shown to the developer — through the logs, in the browser window, and in the command line.
On the cloud, exceptions are handled slightly differently. For 534 errors, the page is perfect proxied. For non-534 errors, an error message is sent to the user. The error message will have a token that can be used for easy access to the stack trace of the error.
The error message and stack is logged in the console. The
pageType of the request, if set in the
env namespace, is also logged. | https://pwa.moovweb.com/v6.9.1/guides/http_status_codes | CC-MAIN-2019-22 | refinedweb | 227 | 80.51 |
You can subscribe to this list here.
Showing
11
results of 11
Sorry you had this problem. We're talking about ways to handle it.
In the future, it might save you time debugging this sort of thing to
look at the python code that PSP generates. You'll find it in the
Cache/PSP directory.
Jay
On Mon, 2001-12-24 at 04:12, Frank Barknecht wrote:
> Hi,
>
> the following problem has cost me hours: Is it a bug, or did I
> misinterpret the PSP documentation?
>
> In PSPages (0.6.1-beta), this doesn't work and gives an Indentation
> Error:
>
> <% for i in range(5): %>
> <% if i > 2: %>
> Hi, i'm a big number <%= i %><br>
> <% end %>
> <% else: %>
> Hi, i'm rather small <%= i %><br>
> <% end %>
> <% end %>
>
> but if I put the "else" right after the "end" it does work:
>
> <% for i in range(5): %>
> <% if i > 2: %>
> Hi, i'm a big number <%= i %><br>
> <% end %><% else: %>
> Hi, i'm rather small <%= i %><br>
> <% end %>
> <% end %>
>
> I think, the first version looks much better and more natural to me,
> doesn't it?
>
> Ciao,
> --
> __ __
> Frank Barknecht ____ ______ ____ __ trip\ \ / /wire ______
> / __// __ /__/ __// // __ \ \/ / __ \\ ___\
> / / / ____/ / / / // ____// /\ \\ ___\\____ \
> /_/ /_____/ /_/ /_//_____// / \ \\_____\\_____\
> /_/ \_\
>
> _______________________________________________
> Webware-discuss mailing list
> Webware-discuss@...
>
On Wed, 26 Dec 2001 00:30:40 -0500, Edmund Lian wrote:
?
Tip: If you want to have very simple MK/MySQL support of=
transactions you can
do like this (at least it works for me :-).
1) Subclass MySQLObjectStore and override saveChanges:
class MyMySQLObjectStore(MySQLObjectStore):
def saveChanges(self):
self.executeSQL('BEGIN')
try:
MySQLObjectStore.saveChanges(self)
except Exception, e:
self.executeSQL('ROLLBACK')
raise e
else:
self.executeSQL('COMMIT')
2) Make sure that your MySQL server supports transactions, i.e=
install InnoDB.
You don't need 4.0 to do this, a recent 3.23 is OK. The prebuilt=
Windows
binaries already have InnoDB installed but on Linux you may have=
to install
it separately. Read more about this on.
3) Edit GeneratedSQL/Create.sql and add TYPE=3DInnoDB after every=
create table:
create table MyClass (
myClassId int not null primary key=
auto_increment,
) TYPE=3DInnoDB;
The last step must be repeated every time you generate your SQL=
files from
the object model :-(
But if we ask Chuck kindly he might fix this in the MK generate=
stuff :-).
It would be perfect if one could set an option in=
Settings.config, like
'MySQLTableType':'InnoDB'.
/Stefan
Richard Gordon wrote:
>.<<
What I thought, but I was trying very hard to avoid a flamefest on the
list! Oh well, you had the balls and I didn't! :-)
BTW, PostgreSQL may not scale to the degree that Oracle does, but it really
performs better than Oracle in situations other than the Ultimate Ellison
Dream, which is the 80-90% of the realworld. I'm sure you know this link
already, but here's a benchmark (yes, there are lies and then there are
benchmarks!):
...Edmund.
At 5:32 PM -0800 12/27/01, Chuck Esterbrook wrote:
>MySQL 4.0 supports transactons.
It might. Or it might not. The reports I've seen have been too mixed
to consider MySQL for serious work any time soon. And in any case, I
gather that it's layered on top of the Berkeley DB for this purpose,
so why not just use that anyway? I would not seriously consider the
use of MySQL for anything that involved real time inventories, etc.
> I'm not sure if they have row level
>locks yet or not. Also, if MySQL needs more work, so what?.
Richard Gordon
--------------------
Gordon Design
Web Design/Database Development).
Curious about Chuck's comment that MK helps with generating
forms, object views, etc. Is this closely tied to MK, or do
you just find the process of developing from an object
model instead of data model gives you this? (not sure how i
mean that..)
Enjoy,
Luke
=====
------------------
Reference Counting Garbage Collection:
Look out philosophy majors, things really DO
cease to exist when no one is looking at them!
------------------
__________________________________________________
Do You Yahoo!?
Send your FREE holiday greetings online!
Chuck wrote:
.
On Thursday 27 December 2001 11:19 am, Mike Orr wrote:
> > Friday 21 December 2001 08:35 am, Ng Pheng Siong wrote:
> I know of the following problems:
>
> 1. Application.py's line 731, "assert isinstance(factory,
> ServletFactory)", fails for PSPExamples (but not those that came
> before):
That usually indicates the infamous "duplicate module problem" whereby
Python has managed to load the same module twice through different
paths. It's a subtle bug that WebKit once suffered from. I'm still
under the impression that Python should fix this by tracking the
absolute paths of modules, but I don't know the internals well enough
for a patch and my suggestion was not well received on c.l.p.
> $ python HttpAppServer.py
> [...]
What directory is your HttpAppServer.py located in when you launch it?
If in WebKit/ you might consult Launch.py for the solution.
> 2. PushServlet's output is buffered.
>
> 3. Shutdown works with IE (on Windows) but not with Netscape (Windows
> or FreeBSD). I don't have another browser to test.
If you have FreeBSD you can probably get lynx, Galeon, opera, etc.
Strange that it doesn't work.
-Chuck
On Thursday 27 December 2001 11:19 am, Mike Orr wrote:
> On Thu, Dec 27, 2001 at 10:13:07AM -0800, Chuck Esterbrook wrote:
> > Maybe that wouldn't be too hard since you could put this in your
> > SitePage.
>
> Hmm, it would have to be someplace where it could be overridden, for
> the servlets that do use ExtraPathInfo.
Right. Perhaps awake() invokes self.examineExtraPath() info which is
implemented as "pass" for those servlets that don't like it. (Or you
could go the other direction.) You could even use a mix-in although it
wouldn't save you that much typing.
> > > ????????2) don't worry, be happy. ?But any relative links on such
> > > a page will contain that path junk and so will loop back to that
> > > page. ?(Echoes of the Twilight Zone's "Judgement Day" episode...)
> >
> > Hey, that's always good advice. ;-)
> >
> >
> > | http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200112&viewday=28 | CC-MAIN-2014-41 | refinedweb | 1,030 | 74.49 |
Milind,
Am I missing something here? This was supposed to be a discussion and am hoping thats why
you started the thread. I don't see anywhere any conspiracy theory being considered or being
talked about. Vinod asked some questions, if you can't or do not want to respond I suggest
you skip emailing or ignore rather than making false assumptions and accusations. I hope the
intent here is to contribute code and stays that way.
thanks
mahadev
On Oct 6, 2013, at 5:58 PM, Milind Bhandarkar <mbhandarkar@gopivotal.com> wrote:
> Vinod,
>
> I have received a few emails about concerns that this effort somehow
> conflicts with federated namenodes. Most of these emails are from folks
> who are directly or remotely associated with Hortonworks.
>
> Three weeks ago, I sent emails about this effort to a few Hadoop
> committers who are primarily focused on HDFS, whose email address I had.
> While 2 out of those three responded to me, the third person associated
> with Hortonworks, did not.
>
> Is Hortonworks concerned that this proposal conflicts with their
> development on federated namenode ? I have explicitly stated that it does
> not, and is orthogonal to federation. But I would like to know if there
> are some false assumptions being made about the intent of this
> development, and would like to quash any conspiracy theories right now,
> before they assume a life of their own.
>
> Thanks,
>
> Milind
>
>
> -----Original Message-----
> From: Vinod Kumar Vavilapalli [mailto:vinodkv@hortonworks.com]
> Sent: Sunday, October 06, 2013 12:21 PM
> To: hdfs-dev@hadoop.apache.org
> Subject: Re: [Proposal] Pluggable Namespace
>
> In order to make federation happen, the block pool management was already
> separated. Isn't that the same as this effortt?
>
> Thanks,
> +Vinod
>
> On Oct 6, 2013, at 9:35 AM, Milind Bhandarkar wrote:
>
>> Federation is orthogonal with Pluggable Namespaces. That is, one can
>> use Federation if needed, even while a distributed K-V store is used
>> on the backend.
>>
>> Limitations of Federated namenode for scaling namespace are
>> well-documented in several places, including the Giraffa presentation.
>>
>> HBase is only one of the several namespace implementations possible.
>> Thus, if HBase-based namespace implementation does not fit your
>> performance needs, you have a choice of using something else.
>>
>> - milind
>>
>> -----Original Message-----
>> From: Azuryy Yu [mailto:azuryyyu@gmail.com]
>> Sent: Saturday, October 05, 2013 6:41 PM
>> To: hdfs-dev@hadoop.apache.org
>> Subject: Re: [Proposal] Pluggable Namespace
>>
>> Hi Milind,
>>
>> HDFS federation can solve the NN bottle neck and memory limit problem.
>>
>> AbstractNameSystem design sounds good. but distributed meta storage
>> using HBase should bring performance degration.
>> On Oct 4, 2013 3:18 AM, "Milind Bhandarkar"
>> <mbhandarkar@gopivotal.com>
>> wrote:
>>
>>> Hi All,
>>>
>>> Exec Summary: For the last couple of months, we, at Pivotal, along
>>> with a couple of folks in the community have been working on making
>>> Namespace implementation in the namenode pluggable. We have
>>> demonstrated that it can be done without major surgery on the
>>> namenode, and does not have noticeable performance impact. We would
>>> like to contribute it back to Apache if there is sufficient interest.
>>> Please let us know if you are interested, and we will create a Jira
>>> and
>> update the patch for in-progress work.
>>>
>>>
>>> Rationale:
>>>
>>> In a Hadoop cluster, Namenode roughly has following main
>> responsibilities.
>>> . Catering to RPC calls from clients.
>>> . Managing the HDFS namespace tree.
>>> . Managing block report, heartbeat and other communication from data
>> nodes.
>>>
>>> For Hadoop clusters having large number of files and large number of
>>> nodes, name node gets bottlenecked. Mainly for two reasons . All the
>>> information is kept in name node's main memory.
>>> . Namenode has to cater to all the request from clients / data nodes.
>>> . And also perform some operations for backup and check pointing node.
>>>
>>> A possible solution is to add more main memory but there are certain
>>> issues with this approach . Namnenode being Java application, garbage
>>> collection cycles execute periodically to reclaim unreferenced heap
>>> space. When the heap space grows very large, despite of GC policy
>>> chosen, application stalls during the GC activity. This creates a
>>> bunch of issues since DNs and clients may perceive this stall as NN
>>> crash.
>>> . There will always be a practical limit on how much physical memory
>>> a single machine can accommodate.
>>>
>>> Proposed Solution:
>>>
>>> Out of the three responsibilities listed above, we can refactor
>>> namespace management from the namenode codebase in such a way that
>>> there is provision to implement and plug other name systems other
>>> than existing in-process memory-based name system. Particularly a
>>> name system backed by a distributed key-value store will
>>> significantly reduce namenode memory requirement.To achieve this, a
>>> new generic interface will be introduced [Let's call it
>>> AbstractNameSystem] which defines set of operations using which we
>>> perform the namespace management. Namenode code that used to
>>> manipulate some java objects maintained in namenode's heap will now
> operate on this interface.
>>> There will be provision for others to extend this interface and plug
>> their own NameSystem implementation.
>>>
>>> To get started, we have implemented the same memory-based namespace
>>> implementation in a remote process, outside of the namenode JVM. In
>>> addition, work is undergoing to implement the namesystem using HBase.
>>>
>>> Details of Changes:
>>>
>>> Created new class called AbstractNamesystem, existing FSNamesystem is
>>> a subclass of this class. Some code from FSNamesystem has been moved
>>> to its parent. Created a Factory class to create object of NS
>>> management class.Factory refers to newly added config properties to
>>> support pluggable name space management class. Added unit tests for
>>> Factory. Replaced constructors with factory calls, this is because
>>> the namesystem instances should now be created based on configuration.
>>> Added new config properties to support pluggable name space
>>> management class. This property will decide which Namesystem class
>>> will be instantiated by the factory. This change is also reflected in
>>> some DFS related webapps [JSP files] where namesystem instance is
>>> used to obtain
>> DFS health and other stats.
>>>
>>> These changes aim to make the namesystem pluggable without changing
>>> high level interfaces, this is particularly tricky since memory-based
>>> name system functionality is currently baked into these interfaces,
>>> and ultimate goal is to make the high level interface free from
>>> memory-based name system.
>>>
>>> Consideration for Upgrade and Rollback:
>>>
>>> Current memory based implementation already has code to read from and
>>> write to fsimage , we will have to make them publicly accessible
>>> which will enable us to upgrade an existing cluster from FSNamespace
>>> to newly added name system in future version.
>>>
>>> a. Upgrades: By making use of existing Loader class for reading
>>> fsimage we can write some code load this image into the future name
>>> system implementation.
>>>
>>> b. Rollback: Are even simpler, we can preserve the old fsimage and
>>> start the cluster with that image by configuring the cluster to use
>>> current file system based name system.
>>>
>>> Future work
>>>
>>> Current HDFS design is such that FSNameSystem is baked into even high
>>> level interfaces, this is a major hurdle in cleanly implementing
>>> pluggable name systems. We aim to propose a change in such interfaces
>>> into which FSNameSystem is tightly coupled.
>>>
>>> - Milind
>>>
>>>
>>> ---
>>> Milind Bhandarkar
>>> Chief Scientist
>>> Pivotal
>>>
>
>
> --
>. | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201310.mbox/%3CAA0AF4CC-A64A-46A8-9B0B-F1299E7793FA@hortonworks.com%3E | CC-MAIN-2018-09 | refinedweb | 1,170 | 54.83 |
in reply to
Why no warnings for 1 in void context?
And just when you think you've got that figured out, check out these:
$ perl -wce '1; "ds"; "di"; "ig"; 0'
-e syntax OK
$
[download]
-- Randal L. Schwartz, Perl hacker
Be sure to read my standard disclaimer if this is a reply.]%?
Speed was not my concern, and I certainly wouldn't
quibble over a 10% speed issue. (The user generally
won't even notice a speed difference until it's at
least 50%.)
The reason I am looking forward to dumping all the
legacy Perl4 malarke is because of its impact on
coding, especially in terms of unexpected behavior
and bugginess.
Consider, for instance, an issue we recently
ran into with SVG::Metadata, as follows: this
module had been using XML::Twig to parse the metadata
from SVG files. (SVG is an XML format.) Bryce had
set it up to use Twig's simplify method to produce
a hash-reference structure, which the code in
Metadata.pm's parse routine was navigating in more
or less the usual way, e.g.,
$metadata->{'rdf'}->{'Work'}
and so forth.
However, as we started dealing with SVG produced
by different SVG editors, we found that we were
running into namespace issues; some SVG might
list those elements as RDF:rdf and cc:Work, for
example. So we put in code, similar to the
above, but with extra logic like
$elt->{'rdf'} || $elt->{'rdf:RDF'} ||$elt->{'RDF'}
to check for those. Oops. Do you see
the problem? Neither did we, at first.
What on earth is a pseudohash? Turns out, it's
some kind of inane legacy thing from the days of
yore before Perl supported references, and as a
result, you aren't allowed to test for a hash
key containing a colon, if it might not exist.
Serious bugs result. We ended up dumping the
simplify()'ed hash-reference structure entirely
in favor of XML::Twig objects and their methods,
like
$elt->first_descendant('rdf:RDF')
Granted, I'm more comfortable with the methods
anyway, now that I have taken the trouble to
download about half of XML::Twig's rather lengthy
POD into my brain, and it does allow us to dispense
with checking for optional intermediate wrapper
elements, and the resulting code is probably less
buggy too; nevertheless, I spent hours ironing
out this issue. I seriously doubt I'm the only
one ever to bang into it. Who would have thought
that a hash key (which is, after all, just a string)
would have issues with containing a colon? Boo, hiss.
There are other legacy Perl4 things besides the two
we've discussed in this thread. They're all going
away in Perl6, and I'm glad.
Hell yes!
Definitely not
I guess so
I guess not
Results (48 votes),
past polls | http://www.perlmonks.org/?node_id=475482 | CC-MAIN-2014-52 | refinedweb | 474 | 69.82 |
Things used in this project
Story
The aim of this project was to improve on my previous candy dispenser by making a new one that is more interactive and aesthetically pleasing.
Video:
A Little Background.
Designing the Machine
Up to this point, I have had quite a bit of experience with my favorite CAD program- Fusion 360. It has everything a maker needs in its workflow to go from an idea to a finished part in CAM. I knew I would have to build something sleeker and more modern, along with using fewer parts. I started out with making a simple box, and finding out how I should arrange a button and LCD to allow for user interaction.
Next was finding the right board to use for running it. I decided upon using a Raspberry Pi Zero W due to its small size and wireless capabilities.
I also chose to use a stepper motor to drive a paddle which would dispense wrapped candy from a funnel. Next, I thought to myself: “How do I add an interesting way to interact with this device?” I had already used Google Assistant, so why not use Twitter? The latest tweet shows up on the LCD, along with whom it’s from. Now that I have a design, it’s time to start building it!
Manufacturing the Parts
The first step in creating a part is making sure it will fit. My design fits in a 10.5 inch (26.67 cm) cube, which is the maximum size my router can handle. To generate the G-code I used the CAM environment built into Fusion 360. I created a setup for each part, setting my stock to 266.7 mm by 266.7 mm and 20 mm thick. I made the stock thicker than my actual ¼ inch (6.35 mm) material to ensure the bit would cut all the way through. After generating a setup, I configured the path for cutting out the part: I chose a 2D contour and set the spindle speed to 18,000 RPM because I was using the DeWalt DWP611 router. There are more settings as well, but I won't go into those here. For more information, go to the resources page here:
To view and download the parts for CNC routing, go here:
I cut out the pieces and sanded them down thoroughly, ensuring I wouldn't get any splinters and that the paint would stick. Next, I coated the outer pieces with two coats of sprayable shellac, then sanded once more to even out the surfaces. Finally, I gave each piece three coats of lacquer spray paint. I painted the box part white and the funnel black.
I also 3D printed a paddle for the stepper motor and a button that slots into the front plate.
You can find those designs here:
Software and Hardware
Because I had chosen to use the Raspberry Pi Zero W in my project, I had to set it up first. To accomplish that, I wrote the latest Raspbian Jessie with PIXEL onto a 16 GB SD card. Then I booted up the Raspberry Pi and waited for the green light to go solid (meaning all files had been written). I shut down the board and removed the SD card, inserting it into a computer running Linux (so I could see all of the files on the ext4 filesystem). To enable SSH I made a blank file (no extension) and named it "ssh", and I also created a new text file called "wpa_supplicant.conf". In it I put:
network={
    ssid="SSID"
    psk="PASSWORD"
    key_mgmt=WPA-PSK
}
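For reference, recent Raspbian releases expect a fuller wpa_supplicant.conf with a control-interface header and country code; something like the following (the country code here is a placeholder, so set it to your own):

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="SSID"
    psk="PASSWORD"
    key_mgmt=WPA-PSK
}
```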
Now, reboot the Raspberry Pi Zero W and SSH into it. Enter:
sudo raspi-config
and enable VNC and UART. Exit and reboot it again. Now, SSH in again and enter
vncserver
then
sudo startx
Exit out of that session and open VNC Viewer on your PC. You are now completely set up and ready to run some code!
Start by downloading the files I included, which include the "main.py" file. Run
sudo apt-get update
and then
sudo apt-get install build-essential python-dev python-smbus python-pip git
Make sure RPi.GPIO is installed with
sudo python2 -m pip install RPi.GPIO
Next install the LCD library by running:
cd ~
git clone
cd Adafruit_Python_CharLCD
sudo python2 setup.py install
Then, create a new app on Twitter's developer site. Put in the name, description, and a phony website. Next, go to the "Keys and Access Tokens" section and generate the OAuth access tokens. Paste those into the code in the order of: Consumer Key, Consumer Secret, Access Token, and finally Access Token Secret.
Now that the software side is done, it is time to wire everything together. Start by connecting the LCD to the Pi.
Pi (BCM Pin Mapping) | LCD 27 | RS 22 | EN 25 | D4 24 | D5 23 | D6 18 | D7 4 | Backlight (not used) Potentiometer Wiper (middle pin) | V0 (contrast) +5v | VDD, Backlight +, Potentiometer + GND | VSS, R/W, Backlight -, Potentiometer -
Connect a pushbutton (tact switch) to Pi pin 5 and GND, along with a common cathode RGB LED. Red: 16, Green: 20, Blue: 21. Also attach an Arduino Nano’s GND to the Pi’s GND, along with a jumper going from Pi UART Tx (pin 14) to the Nano’s Rx.
For the stepper motor, connect coil pair A to l293d outputs 1 and 2. Connect pair B to outputs 3 and 4. Attach inputs 1,2,3,4 to the Nano’s pins 8,9,10,11, along with connecting their GNDs. Get a 12v power supply (I used a 12v adapter with a DC barrel jack) and connect it to both the l293d and VIn, along with its GND going to the Nano and the driver. That’s it! Now the wiring is now complete.
Putting It All Together
Go ahead and assemble the parts together. I used screws and wooden blocks to secure the parts together, and it enables for the machine to be taken apart easily.
Now, you can tweet at it: “@(your Twitter app handle) (a phrase with ‘color’ and ‘red’, ‘green’ or ‘blue’)” to change the color of the LED, or just tweet anything!
Push the button to rotate the inside paddle to dispense candy. Have fun with your new Raspberry Pi Powered Candy Dispenser.
Custom parts and enclosures
Schematics
Code
Arduino Nano codeArduino
#include <Stepper.h> const int stepsPerRevolution = 200; Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11); void setup() { myStepper.setSpeed(30); Serial.begin(115200); } void loop() { if(Serial.available()>0){ char data = Serial.read(); if (data == 'c'){ Serial.println("Giving candy!"); myStepper.step(50); } } }
Raspberry Pi CodePython
#import libraries from twython import Twython from time import sleep import math import time import serial import Adafruit_CharLCD as LCD import RPi.GPIO as GPIO #setup pin numbers lcd_rs = 27 lcd_en = 22 lcd_d4 = 25 lcd_d5 = 24 lcd_d6 = 23 lcd_d7 = 18 lcd_backlight = 4 #LCD dimensions lcd_columns = 20 lcd_rows = 4 #make an LCD object with our pin variables lcd = LCD.Adafruit_CharLCD(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6, lcd_d7, lcd_columns, lcd_rows, lcd_backlight) #RGB LED pin numbers redP = 16 greenP = 20 blueP = 21 buttonP = 5 #Tact switch pin number ser = serial.Serial(port='/dev/ttyS0',baudrate=115200,timeout=1) #create a serial object for UART communication pin_list = [redP,greenP,blueP] #put RGB LED pins in an array (simpler to control) GPIO.setmode(GPIO.BCM) #BCM pin numbering for GPIO GPIO.setwarnings(False) #Don't give a warning on startup about GPIO for pin in pin_list: #Make every pin an output GPIO.setup(pin, GPIO.OUT) GPIO.setup(buttonP, GPIO.IN, pull_up_down=GPIO.PUD_UP) #make button pin detect a FALLING edge (pull up resistor) prev_message = "" #make a previous message variable to detect duplicate tweets def reset_lcd(): #make a function that will clear the LCD lcd.clear() lcd.home() #create out Twitter app variables APP_KEY = "KEY" APP_SECRET = "SECRET" OAUTH_TOKEN = "TOKEN" OAUTH_TOKEN_SECRET = "TOKEN_SECRET" #make a twitter object with out variables twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET) #turn off all LED colors def all_off(): for pin in pin_list: GPIO.output(pin,GPIO.LOW) all_off() #turn off the LED def candy_dispense(channel): #This will send the command to the Arduino Nano to dispense candy reset_lcd() lcd.message("Dispensing candy\nBe sure to say\n'Trick-or-treat'!") ser.write('c') sleep(5) reset_lcd() GPIO.add_event_detect(buttonP, GPIO.FALLING, callback=candy_dispense,bouncetime=300) #add an interrupt that will call candy_dispense while 1: #infinite loop tweet = twitter.get_mentions_timeline()[0] #get the 
latest tweet in which you are mentioned reset_lcd() #clear the LCD print "Latest tweet: "+tweet['text']+"\n\n" #print latest wteet to console text_len = len(tweet['text']) #get the length of the tweet #make the text fit in a 20x4 space if text_len < 20: lcd.message(tweet['text']) elif text_len > 20 and text_len < 41: lcd.message(tweet['text'][0:20]+'\n'+tweet['text'][20:40]) elif text_len > 20 and text_len < 61: lcd.message(tweet['text'][0:20]+'\n'+tweet['text'][20:40]+'\n'+tweet['text'][40:60]) elif text_len > 20 and text_len < 81: lcd.message(tweet['text'][0:20]+'\n'+tweet['text'][20:40]+'\n'+tweet['text'][40:60] \ + '\n'+ tweet['text'][60:80]) if prev_message != tweet['text']: #check if "color" and which color were mentioned in tweet (will change the LED color) if "color" in tweet['text'].lower(): if "red" in tweet['text'].lower(): all_off() GPIO.output(redP,GPIO.HIGH) elif "green" in tweet['text'].lower(): all_off() GPIO.output(greenP,GPIO.HIGH) elif "blue" in tweet['text'].lower(): all_off() GPIO.output(blueP,GPIO.HIGH) if "off" in tweet['text'].lower(): all_off() #if the word "candy" appears in the tweet, give out candy if "candy" in tweet['text'].lower(): reset_lcd() lcd.message("Dispensing candy\nBe sure to say\n'Trick-or-treat'!") ser.write('c') sleep(2) sleep(10) #pause for 10 seconds (You can only update the tweet every 20 seconds due to Twitter query limits) reset_lcd() #clear the LCD #say who it's from and when it will update again lcd.message("From user: \n@"+tweet['user']['screen_name']+"\n"+ "This will update in\n20 seconds.") print "From user: \n@"+tweet['user']['screen_name']+"\nThis will update every 20 seconds\n--------------------------------------\n" sleep(10) #pause for 10 more seconds prev_message = tweet['text'] #make the previous message the current one (to check for a duplicate tweet)
Credits
Arduino “having11” Guy
Replications
Did you replicate this project? Share it!I made one
Love this project? Think it could be improved? Tell us what you think! | https://www.hackster.io/gatoninja236/raspberry-pi-powered-candy-dispenser-fd018f | CC-MAIN-2018-09 | refinedweb | 1,740 | 65.01 |
I’m used to debugging issues with logs or metrics when they are presented to me on a lovely dashboard with an intuitive UI. However, if for some reason the dashboard isn’t being populated or if a particular service’s logs are unavailable, debugging gets trickier. Now these instances are usually few and far between, but they do happen and being familiar with tools to debug what’s happening to a process on a host is pretty valuable during these times.
When I’m debugging something that the logs or metrics aren’t surfacing, I ssh into hosts. Of course, this isn’t scalable or elegant or any of the myriad things the internet has told us, but for ad-hoc analysis, this works surprisingly well for me.
Just like, you know, with print statements and debugging.
Let me make it very clear right now that I’m not an SRE or an Operations engineer. I’m primarily a developer who also happens to deploy the code I write and debug it when things go wrong. As often as not, when I’m on a host I’ve never been before, the hardest thing for me is finding things. Like for instance, what port is a process listening on? Or more importantly, what file descriptor is a particular daemon logging to? And even when I do manage to find answers to these questions by dint of a mix of
ps,
pstree,
ls and lots and lots of wishful
grepping, many a time the “answers” I get surface zero information or get me just plain incorrect data.
If this were a talk by Raymond Hettinger, the core CPython developer, this would be the moment when the audience would be expected to say there must be a better way.
And there is.
What’s become my go-to tool for finding things is a pretty nifty tool called
lsof.
lsof (pronounced
el-soff, though some tend to be partial towards
liss-off or just
el-es-o-eff) is an incredibly useful command that lists all open files.
lsofis great for finding things because in Unix every thing is a file.
lsof is an astonishingly versatile debugging tool that can quite easily replace
ps,
netstat etc. in one’s workflow.
Options … an embarrassment of riches
A veteran SRE who has been SREing for decades before the term “SRE” was even coined once told me — “ I stopped learning options for lsof once I had what I needed. Learn the most important ones and that’s all you’ll mostly ever need.”
lsof comes with an extensive list of options.
NAMElsof - list open filesSYNOPSISl]
The man page would be the best reference if you’re interested in what each option does. The ones I most commonly use are the following:
-u— This lists all files opened by a specific user. The following example lists the number of files held open by the user
cindy
cindy@ubuntu:~$ lsof -u cindy | wc -l248
In general, when an option is preceeded by a
^ , it implies a negation. So if we want to know the number of files on this host that’s opened by all other users except
cindy :
cindy@ubuntu:~$ lsof -u^cindy | wc -l38193
2.
-U — This option selects all Unix Domain Socket files.
cindy@ubuntu:~$ lsof -U | head -5COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEinit 1 root 7u unix 0xffff88086a171f80 0t0 24598 @/com/ubuntu/upstartinit 1 root 9u unix 0xffff88046a22b480 0t0 22701 socketinit 1 root 10u unix 0xffff88086a351180 0t0 39003 @/com/ubuntu/upstartinit 1 root 11u unix 0xffff880469006580 0t0 16510 @/com/ubuntu/upstart
3.
-c — This lists all files held open by processes executing the command that begins with the characters of
c. For example, if you want to see the first 15 files held open by all Python processes running on a given host:
cindy@ubuntu:~$ lsof -cpython | head -15COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEpython2.7 16905 root cwd DIR 9,1 4096 271589387 /home/cindy/sourceboxpython2.7 16905 root rtd DIR 9,1 4096 2048 /python2.7 16905 root txt REG 9,1 3345416 268757001 /usr/bin/python2.7python2.7 16905 root mem REG 9,1 11152 1610852447 /usr/lib/python2.7/lib-dynload/resource.x86_64-linux-gnu.sopython2.7 16905 root mem REG 9,1 101240 1610899495 /lib/x86_64-linux-gnu/libresolv-2.19.sopython2.7 16905 root mem REG 9,1 22952 1610899509 /lib/x86_64-linux-gnu/libnss_dns-2.19.sopython2.7 16905 root mem REG 9,1 47712 1610899515 /lib/x86_64-linux-gnu/libnss_files-2.19.sopython2.7 16905 root mem REG 9,1 33448 1610852462 /usr/lib/python2.7/lib-dynload/_multiprocessing.x86_64-linux-gnu.sopython2.7 16905 root mem REG 9,1 54064 1610852477 /usr/lib/python2.7/lib-dynload/_json.x86_64-linux-gnu.sopython2.7 16905 root mem REG 9,1 18936 1610619044 /lib/x86_64-linux-gnu/libuuid.so.1.3.0python2.7 16905 root mem REG 9,1 30944 1207967802 /usr/lib/x86_64-linux-gnu/libffi.so.6.0.1python2.7 16905 root mem REG 9,1 136232 1610852472 /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.sopython2.7 16905 root mem REG 9,1 77752 1610852454 /usr/lib/python2.7/lib-dynload/parser.x86_64-linux-gnu.sopython2.7 16905 root mem REG 9,1 387256 1610620979 /lib/x86_64-linux-gnu/libssl.so.1.0.0
More interestingly, if you have a bunch of Python2.7 and Python 3.6 processing running on a host, you can find the list of files held open by the non-Python2.7 processes:
cindy@ubuntu:~$ lsof -cpython -c^python2.7 | head -10COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEpython 20017 root cwd DIR 9,1 4096 2048 /python 20017 root rtd DIR 9,1 4096 2048 /python 20017 root txt REG 9,1 3345416 268757001 /usr/bin/python2.7python 20017 root mem REG 9,1 11152 1610852447 /usr/lib/python2.7/lib-dynload/resource.x86_64-linux-gnu.sopython 20017 root mem REG 9,1 6256 805552236 /usr/lib/python2.7/dist-packages/_psutil_posix.x86_64-linux-gnu.sopython 20017 root mem REG 9,1 14768 805552237 /usr/lib/python2.7/dist-packages/_psutil_linux.x86_64-linux-gnu.sopython 20017 root mem REG 9,1 10592 805451779 /usr/lib/python2.7/dist-packages/Crypto/Util/strxor.x86_64-linux-gnu.sopython 20017 root mem REG 9,1 11176 1744859170 /usr/lib/python2.7/dist-packages/Crypto/Cipher/_ARC4.x86_64-linux-gnu.sopython 20017 root mem REG 9,1 23560 1744859162 /usr/lib/python2.7/dist-packages/Crypto/Cipher/_Blowfish.x86_64-linux-gnu.so
4.
+d— This helps you search for all open instances of any directory and its top level files and directories.
cindy@ubuntu:~$ lsof +d /usr/bin | head -4COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEcircusd 1351 root txt REG 9,1 3345416 268757001 /usr/bin/python2.7docker 1363 root txt REG 9,1 19605520 270753792 /usr/bin/dockerrunsvdir 1597 root txt REG 9,1 17144 272310314 /usr/bin/runsvdir
5.
-d — By far the option I most commonly use, next only to
-p. This option specifies a list of comma separated file descriptors to include/exclude. From the docs:participating in AND option selection.When there are exclusion and inclusion members in the set,lsof reports them as errors and exits with a non-zero return code.
6.
-p — I can’t recall a time I’ve used lsof without ever using this option, which lists all files held open by a given pid.
On Ubuntu, to find out all files held open by, say, pid 1
Whereas on my MacBook Air:
7.
-P — This option inhibits the conversion of port numbers to port names for network files. It is also useful when port name lookup is not working properly.
This can be used in combination with another option —
-n , which inhibits the conversion of network numbers to host names for network files. It is also useful when host name lookup is not working properly.
Inhibiting both the aforementioned conversion can sometimes make lsof run faster.
8.
-i — This option selects the listing of files any of whose Internet address matches the address specified in
i. If no address is specified, this option selects the listing of all Internet and network files.
With lsof, one can, for instance, look at the TCP connection your Slack or Dropbox client has open. For fun, try seeing how many connections your Chrome tabs (each tab is a standalone process) has open.
lsof -i -a -u $USER | grep Slack
With lsof, one can also look at all the TCP sockets opened by your local Dropbox client.
lsof also allows one to look at UDP connections open with
lsof -iUDP
lsof -i 6 will get you the list of IPv6 connections open.
9.
-t — This options suppresses all other information except the process ID’s — and I often use this when I want to pipe the pids to some other function, mostly
kill-9
cindy@ubuntu:~$ lsof -t /var/log/dummy_svc.log1235
2171
2188
2189
16758
16761
16762
Combining Options
Generally, lsof will OR the results of more than one option is used. Specifying the
-a option will give you a logical AND of the results.
Of course, there are several exceptions to this rule and again, the man page is your friend here, but the TL: DR is:
Normally list options that are specifically stated are ORed - i.e., specifying the -ioption without an address and the -ufoo option''.
A warstory … of sorts.
OK, so I’m stretching the truth here by calling this a “warstory”, but still a time when lsof came in handy.
A couple of weeks ago, I had to stand up a a single instance of a new service in a test environment. The test service in question wasn’t hooked up to the production monitoring infrastructure. I was trying to debug why a process that was freshly launched wasn’t registering itself with Consul and was therefore undiscoverable by another service. Now I don’t know about you, but if something isn’t working as expected, I look at the logs of the service I’m trying to debug, and in most cases the logs point to the root cause right away.
This service in question was being run under circus, a socket and process manager. Logs for processes run under circus are stored in a specific location on the host — let’s call it
/var/log/circusd. Newer services on the host run under a different process manager called s6 which logs to a different namespace. Then there’s the logs generated by socklog/svlogd which again live somewhere else. In short, there’s no dearth of logs and the problem was just to find to what file descriptor my crashing process was logging to.
Since I knew the process I was trying to debug was running under circus, tailing
/var/log/circusd/whatever_tab_completion_suggested would allow me to look at the stdout and stderr streams for this process. Except, tailing the logs showed absolutely nothing. Quickly it became evident I was looking at the wrong log file and sure enough, upon closer inspection, there were two files under
/var/log/circusd one called
stage-svcname-stderr.logand the other called
staging-svcname.stderr.log and tab completion was picking the wrong file.
One way to see which file was actually being used by the process in question to log to is to run
lsof -l filename which displays all the processes that have an open file descriptor to it. It turned out no running process was holding on to the log file I was tailing — which meant it was safe for deletion. Tailing the other immediately showed me why the process was crashing (and circus was restarting it after the crash— leading to a crash loop).
Conclusion
The more I use it, the more it replaces a bunch of other tools and surfaces more actionable information. A far more interesting post would be one on how lsof works internally — but that post is a WIP right now. | https://copyconstruct.medium.com/lsof-f2b224eee7b5 | CC-MAIN-2021-10 | refinedweb | 2,024 | 62.68 |
Obtaining Thread State
Every thread moves through several states from its creation to its termination. The possible states of a thread are: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING and TERMINATED. Immediately after the creation of a thread, it will be in the NEW state. After the start( ) method of the Thread class is executed, it will move to the RUNNABLE state. When the thread completes its execution, it will move to the TERMINATED state.

If a thread is instructed to wait, it moves to the WAITING state (or TIMED_WAITING, if it waits with a timeout such as sleep( )); a thread waiting to acquire a monitor lock is in the BLOCKED state. When the waiting is over, the thread once again moves to the RUNNABLE state. You can obtain the current state of a thread by calling the getState( ) method defined by Thread. It returns a value of type Thread.State that indicates the state of the thread at the time at which the call was made. State is an enumeration defined by Thread. Given a Thread instance, you can use getState( ) to obtain the state of that thread.
Program
Program Source
class NewThread extends Thread {

    public void run() {
        for(int i = 10; i <= 50; i += 10) {
            System.out.println(getName() + " : " + i);
        }
    }
}

public class Javaapp {

    public static void main(String[] args) {

        NewThread th1 = new NewThread();
        System.out.println("Thread th1 State : " + th1.getState());

        th1.start();
        System.out.println("Thread th1 State : " + th1.getState());

        try {
            th1.join();
        } catch(InterruptedException ie) {
            System.out.println("Main Thread Interrupted");
        }

        System.out.println("Thread th1 State : " + th1.getState());
    }
}
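The program above observes the NEW, RUNNABLE and TERMINATED states. The remaining states can also be observed with getState( ). As a small additional sketch (not part of the original lesson; the class name ThreadStateDemo and the timings are chosen here for illustration), the following program starts one thread that sleeps and one that waits on a monitor, then reads their states after giving both a moment to block:

```java
public class ThreadStateDemo {

    static final Object lock = new Object();

    // Starts two threads, lets them reach their blocking calls,
    // and returns the states observed at that moment.
    static Thread.State[] observe() throws InterruptedException {
        Thread sleeper = new Thread() {
            public void run() {
                try { Thread.sleep(500); }          // TIMED_WAITING while sleeping
                catch (InterruptedException ie) { }
            }
        };
        Thread waiter = new Thread() {
            public void run() {
                synchronized (lock) {
                    try { lock.wait(); }            // WAITING until notified
                    catch (InterruptedException ie) { }
                }
            }
        };
        sleeper.start();
        waiter.start();
        Thread.sleep(100);                          // give both threads time to block
        Thread.State[] states = { sleeper.getState(), waiter.getState() };
        synchronized (lock) { lock.notify(); }      // release the waiter
        sleeper.join();
        waiter.join();
        return states;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.State[] s = observe();
        System.out.println("sleeper state : " + s[0]); // TIMED_WAITING
        System.out.println("waiter state  : " + s[1]); // WAITING
    }
}
```

The 100 ms pause in main is only there to make it very likely that both threads have already entered their blocking calls before their states are read; thread states are a snapshot and can change immediately after getState( ) returns.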
Software Engineer
Welcome! Happy to see you in the first part of my Vuejs Amsterdam Conference 2022 summary series, in which I share a summary of all the talks with you. You can read my JSWorld Conference 2022 Summary 2022 series (in four parts) here, where I summarized all the talks of the first day.
:
State of the Vuenion
Evan You - Creator at Vue.js
The talk was an overview of the Eco System and all the new things that have been happening. On February 7th, 2022 Vue 3 has become the default version alongside the new Brand new Vuejs.org Website.
Evan explains the adoption of Vue3 by the Community, Eco System updates with the Release of Nuxt3 RC since April 21st, Vuetify 3 Beta released on May 19th, and VitePress 1.0 Alpha and the work in progress on Vite 3.0.
State of the Vuenion 2022 - Evan You
Vue 3 adoption
Currently, Vue 3 has around 800k weekly npm downloads (measured by npm downloads of @vue/runtime-core), which is +70% since the default version launch, and if we look back one year from now, that's 4x in the past year, and it counts for more than 25% of all Vue downloads and this number will probably get much higher by the end of the year.
Ecosystem updates
Nuxt 3
Nuxt 3 is now in RC. A lot of work went into it and the core will also be working actively with Nuxt 3 to stabilize some of the final pieces such as suspense hopefully in 3.3.
Vuetify 3
To a lot of people, Nuxt and Vuetify are two of the major pieces that block them from upgrading from Vue 2 to Vue 3. But now, Vuetify 3 hit beta on May 19th too and that’s good news.
VitePress
VitePress has just released 1.0 alpha and has been used in the new Vue documentation.
They focused more on the lower-level details, but right now they are working on completely overhauling the VitePress default theme. Now it comes with dark mode and has a consistent design language with the official view documentation but with a distinction so that you know this is a VitePress site instead of the actual view documentation.
There are some small breaking changes but we are really excited because VitePress has really been proven to be a pleasure to work with when we're working on the official docs and it's super flexible and super powerful as well. We are still debating whether we should officially make it the to replace VuePress, just call the VuePress 3 or maybe it should remain a separate project but the idea is if you're looking for a Vue 3 powered static site generator VitePress will be the official recommendation.
Volar
If you're using v3 you probably know Volar which is the new recommended VSCode extension.
Starting march Vue has been sponsoring Johnson Chu who is the author of Volar, to work full time on improving it.
Johnson has been cranking out releases and bug fixes and working on refactoring the internal code base to make it more efficient and even cover more features
Vite 3.0 (WIP)
Another important piece of the new ecosystem is Vite and the team is working on version 3.
As you may know, node 12 went end of life, and that was the initial motivator for Vite 3, which drops the support for node 12.
This is not a big rewrite but it comes with a number of relatively minor breaking changes.
We always actively work with all the ecosystem partners who are building higher-level tools on top of Vite to ensure these changes comes with a reasonable amount of changes needed to upgrade.
If you’re using just the basic features of Vite you most likely won’t be affected much, also if you’re using a higher-level tool like Nuxt 3 or other frameworks on top of Vite, this upgrade path will probably be more or less transparent for you because the higher-level framework will absorb the underlying changes from Vite.
But there is still a chance of shipping some potentially breaking changes in return for big benefits:
Moving Vite itself to full ESM
SSR build defaults to ESM output
Both of these are also part of our overall efforts to make Vite an important factor in pushing the whole JS ecosystem towards pure ESM. So both of these will hopefully make the whole ecosystem transition process to ESM a bit faster.
Non-blocking dependency discovery + optimization
One of the reasons Vite can start up so fast despite having huge dependencies is that it scans your code base to look for dependencies and then pre-optimizes them.
But the initial implementation comes with two limitations:
The scan phase can be costly if your codebase is big.
Sometimes the scan phase will fail to discover dependencies because some of the code may introduce the dependency after it's been transformed by an actual plugin, so it has to wait until the app loads.
In 2.9 they introduced an improvement so that it no longer late discover dependencies and it can be non-blocking so it optimizes as it discovers them.
In 3.0 it is going to make this whole process seamless: No longer need the scan phase, no more late discovered dependencies.
Vite will discover dependencies as it serves your modules and it'll automatically wait for everything to be finalized to do one single optimization run.
Esbuild-powered dep optimization for both development server and production builds, More consistent behavior between dev/prod
For packages that are authored in commonJS, Vite used to use esbuild to process dependencies during development and rollup commonJS plugin to build the application for production, which creates inconsistency between development and production.
In 3.0 the goal is to eliminate that by using esbuild for both phases to ensure the same outcome, especially for commonJS, and it is expected to land within the next month.
Vue Core
During April and May they spent approximately a whole month on v3 core bug squash, which resulted in massive patch releases (3.2.24~26), ~70 PRs merges, and ~140 closed issues.
All of this is a preparation for other work that’s paving the road for 3.3 because we want to make sure that we get a good sense of the current outstanding bugs and make sure that we have a stable foundation to build upon for the next generation of new features.
SFC Playground
SFC Playground has been the recommended way to provide bug reproductions for v3, but there are two categories of bugs that are hard to reproduce in the SFC Playground:
Behavior inconsistency for
<script setup>between production and development mode:
We’ve seen quite a few bugs in this category in the past and most of them cannot be reproduced in the SFC Playground because the SFC Playground was production by default. So we added a toggle so that you can toggle between the prod/dev mode so you can actually show us the behavior inconsistency in the case that it happens.
So, SFC Playground now supports switching between prod/dev mode for
<script setup>compilation.
SSR hydration mismatch bugs:
In the past usually, you would have to create a whole repository with the full SSR setup just to show them a simple bug happening. But now SFC Playground supports SSR reproductions. That means the whole - Full compile → render → hydrate pipeline running in the browser - pipeline happens entirely in the browser and all you need to do is just toggle a button.
Vue 2.7
Many who are stuck on Vue 2 have been asking about it and it's been delayed for various reasons. But they finally started working on it and made great progress.
Scope
Porting Composition API to become a Built-in for Vue 2 instead of using @vue/composition-api plugin.
You can directly do the same
import { ref } from ‘vue’as you would do in Vue 3 and these implementations are also more tightly integrated with Vue 2 reactivity system, so they are more efficient than the plugin version.
<script setup>
For many people script setup has been essential to use composition API, because it just makes your life so much easier.
If we are porting composition API to Vue 2 then it would make sense to also push setup because it’s such an essential part of the DX.
Improved TypeScript support
They will not change the current shipped types of Vue 2, especially for Vue extend because they want to retain complete type compatibility for existing Vue 2 projects. Instead, they are going to have separate Vue 3 types also shipped in the types but you get them when you use the new
defineComponentAPI that's available in Vue 3.
defineComponentwill also allow you to define components but with the types, that are directly ported back from Vue 3, which makes it easier for you to upgrade to Vue 3 as well and it also makes it easier for Volar to support both Vue 2 and Vue 3 at the same time.
Preparation Work
Vue 2 codebase is moved to TypeScript!
This is a huge shout-out to Pikax (Carlos) who spearheaded to work, made the huge pr and then also to David Walsh who helped with some cleanup, so i picked up from their work and managed to convert the whole codebase to typescript.
It is also now a
pnpm monorepo, with a modernized test setup with Vitest.
Tests actually run much faster than Vue 3 now, so i need to really port the Vue 3 test to Vitest as well!
Vitest Is a new and fast unit test runner internally powered by Vite and has a fully Jest compatible interface.
Jest requires everything to have its own Jest version of it which is a painful process to configure with Vue-based setups, but if your project is running on Vite because Vitest is also powered by Vite, it’s going to be a smooth transition because Vitest can directly pick up your Vite configurations and you don’t need to configure it separately.
Current state
Composition API is already fully ported with actually named exports and full type support, but with similar restrictions from the
@vue/composition-api plugin.
Vue 2.7 is currently in the alpha stage and 2.7.0-alpha.4 was released on npm under v2-alpha distribution tag, and they are focused on compatibility and stability testing.
Next Steps
The next step is to port
<script setup> support .
There is a lot of logic — e.g.
@vue/component-compiler-utils - that is split into separate packages and should move back into the Vue 2 repo similar to how v3 does it which makes it easier to sync changes across the places and keep everything consistent.
The new single file compiler logic will be exposed as
vue/compiler-sfc the same way as Vue 3. This also means when you upgrade to 2.7 and opt in to the new setup, it will be fully compatible with the existing setups. You can then just remove
vue-template-compiler as a peer dependency and then upgrade to
vue-loader 16+ or just directly use
@vitejs/plugin-vue because the single file component compiler will have the exact same interface as Vue 3.
Our goal is so that once you upgrade to 2.7, you’ll be able to just leverage the latest versions of Vue loader or the Vite plugin Vue for both Vue 2 and Vue 3 so you no longer need to use a separate Vite plugin Vue 2 or you’re stuck on the old version of Vue loader.
It will reach Beta after
<script setup> is ported and the estimated release is the end of June 2022.
Implications
2.7 will be the final minor release of Vue 2.x and there will be no new features added to Vue 2.
We will still fix bugs obviously, we'll make sure view 2.7 has a smooth upgrade path.
Vue 2 will have 18 months of LTS starting from the release of 2.7 stable. That means Vue will be End Of Life by the end of the year 2023.
We may consider offering extended support on a case-by-case basis so this will likely be a paid service. So if you’re interested, you can register interest at link.vuejs.org/xlts. So we do want to make sure that our main bandwidth is invested in Vue 3 and into the future but also understand that some of Vue 2 users may have reasons to have to stay on Vue 2 for longer than expected so we want to make sure there is a good solution for users in that scenario.
Vue 3.3
Major planned features
- Stabilize Suspense
- Stabilize Reactivity Transform
- SSR Improvements
- Lazy / Conditional hydration for async components
- Improved SSR mismatch warnings
- Support imported types in
<script setup>type-based-macros
Work on this version starts after the 2.7 stable release and hopefully get it ready by Q3 2022.
Experimental New Compilation Strategy
This is a truly experimental early exploration phase of the feature! So it even may not land!
The majority of users use Vue through the template and most of us in our day-to-day work interact with Vue through the single-file component format and this is a source format.
Vue as a framework has the opportunity to compile the file into vanilla JavaScript and CSS, and this step allows Vue to be a super compiler-oriented framework.
This new compilation strategy is inspired by solid.js. Instead of compiling templates into Virtual DOM render functions, it compiles them into imperative DOM initialization and reactive binding setup code.
Imagine this component:
// <script setup> import { ref } from 'vue' const count = ref(0)
<template> <div> <button :{{ count }}</button> </div> </template>
This button contains several bindings.
The output code will look something like this:
import { ref, effect } from 'vue'
import { template, on, setClass, setAttr, setText } from 'vue/vapor'

const t0 = template(`<div><button>`)

export default () => {
  const count = ref(0)
  let div = t0()
  let button = div.firstChild
  let _button_class, _button_id, _button_text
  effect(() => {
    setClass(button, _button_class, (_button_class = { red: count.value % 2 }))
    setAttr(button, 'id', _button_id, (_button_id = `foo-${count.value}`))
    setText(button, _button_text, count.value)
  })
  on(button, 'click', () => count.value++)
  return div
}
The general idea: instead of generating a virtual DOM representation of the tree first and then walking it to initialize the actual DOM, we analyze the static HTML structure of the template at compile time and stringify it into plain HTML without the bindings. We then instantiate it by cloning a template node, which is faster than creating all of these nodes individually and stitching them together.
Next we generate imperative code that directly locates the nodes that need bindings, and code that sets up effects performing the reactive updates for the class, attribute, and text bindings.
Some of the benefits:
- Better v-for performance even without v-memo
- Significantly lower memory usage
- Significantly lighter base runtime size (if fully opt-in)
- Potentially lighter component abstraction cost
Adoption Strategy:
- Opt-in at the component level
- Embed in the existing Vue 3 virtual DOM-based app
- Fully compatible with existing libraries
- Opt-in at the app level
  - Entirely drops the Virtual DOM runtime
  - Cannot use Virtual DOM-based components
  - Suitable for extremely performance-sensitive use cases
This doesn't really change the development aspect of it. The template syntax and the way you write your components will remain exactly the same. This is purely just how the compiler handles the output.
End of The First Talk
I hope you enjoyed this part and it can be as valuable to you as it was to me.
Over the next few days, I’ll share the rest of the talks with you. Stay tuned…
Hi! :)

g++ filename.cpp

will make an a.out file.

To run the a.out file type: ./a.out

and the file will run.

You can make a permanent file by typing:

g++ -o permanentfile filenametocompile.cpp

>> this will create an executable permanentfile that won't be overwritten every time you
compile a different program.
Thanks. This is what I have now; test.cpp contains the written code.
#include <iostream>
#include <cstdlib>
using namespace std;

int main() {
    // compile test.cpp with warnings enabled into an executable named "test"
    system("g++ -Wall -o test C:\\Users\\%USERNAME%\\Documents\\test.cpp");
    // run the compiled executable
    system("test");
}
How can I test that program with examples? Sorry for being a "noob", but this is a completely new field for me; I've never done anything like this before. :)
I never set out to write a ”(you can) learn Scalaz in X days” series. Day 1 was written on August 31, 2012 while Scalaz 7 was at milestone 7. Then day 2 was written the next day, and so on. It’s a web log of ”(me) learning Scalaz.” As such, it’s terse and minimal. On some days, I spent more time reading the book and trying code than writing the post.
Before we dive into the details, today I’m going to try a prequel to ease you in. Feel free to skip this part and come back later.
There have been several Scalaz intros, but the best I’ve seen is Scalaz talk by Nick Partridge given at Melbourne Scala Users Group on March 22, 2010:
Scalaz talk is up - Lots of code showing how/why the library exists— Nick Partridge (@nkpart) March 28, 2010
I’m going to borrow some material from it.
Scalaz consists of three parts:

1. New datatypes (Validation, NonEmptyList, etc.)
2. Extensions to standard classes (OptionOps, ListOps, etc.)
3. Implementation of every single general function you need (ad-hoc polymorphism, traits + implicits)
Nick demonstrates an example of ad-hoc polymorphism by gradually making the sum function more general, starting from a simple function that adds up a list of Ints:
scala> def sum(xs: List[Int]): Int = xs.foldLeft(0) { _ + _ } sum: (xs: List[Int])Int scala> sum(List(1, 2, 3, 4)) res3: Int = 10
If we try to generalize a little bit, I’m going to pull out a thing called Monoid. … It’s a type for which there exists a function mappend, which produces another type in the same set, and also a function that produces a zero.
scala> object IntMonoid { def mappend(a: Int, b: Int): Int = a + b def mzero: Int = 0 } defined module IntMonoid
If we pull that in, it sort of generalizes what’s going on here:
scala> def sum(xs: List[Int]): Int = xs.foldLeft(IntMonoid.mzero)(IntMonoid.mappend) sum: (xs: List[Int])Int scala> sum(List(1, 2, 3, 4)) res5: Int = 10
Now we’ll abstract on the type of the Monoid, so we can define Monoid for any type A. So now IntMonoid is a monoid on Int:
scala> trait Monoid[A] { def mappend(a1: A, a2: A): A def mzero: A } defined trait Monoid scala> object IntMonoid extends Monoid[Int] { def mappend(a: Int, b: Int): Int = a + b def mzero: Int = 0 } defined module IntMonoid
What we can do now is have sum take a List of Int and a monoid on Int to sum it:
scala> def sum(xs: List[Int], m: Monoid[Int]): Int = xs.foldLeft(m.mzero)(m.mappend) sum: (xs: List[Int], m: Monoid[Int])Int scala> sum(List(1, 2, 3, 4), IntMonoid) res7: Int = 10
We are not using anything specific to Int here, so we can replace all Int with a general type:
scala> def sum[A](xs: List[A], m: Monoid[A]): A = xs.foldLeft(m.mzero)(m.mappend) sum: [A](xs: List[A], m: Monoid[A])A scala> sum(List(1, 2, 3, 4), IntMonoid) res8: Int = 10
The final change we have to make is to turn the Monoid into an implicit parameter, so we don’t have to specify it each time.
scala> def sum[A](xs: List[A])(implicit m: Monoid[A]): A = xs.foldLeft(m.mzero)(m.mappend) sum: [A](xs: List[A])(implicit m: Monoid[A])A scala> implicit val intMonoid = IntMonoid intMonoid: IntMonoid.type = IntMonoid$@3387dfac scala> sum(List(1, 2, 3, 4)) res9: Int = 10
Nick didn’t do this, but the implicit parameter is often written as a context bound:
scala> def sum[A: Monoid](xs: List[A]): A = { val m = implicitly[Monoid[A]] xs.foldLeft(m.mzero)(m.mappend) } sum: [A](xs: List[A])(implicit evidence$1: Monoid[A])A scala> sum(List(1, 2, 3, 4)) res10: Int = 10
Our sum function is pretty general now, appending any monoid in a list. We can test that by writing another Monoid for String. I’m also going to package these up in an object called Monoid. The reason for that is Scala’s implicit resolution rules: when it needs an implicit parameter of some type, it’ll look for anything in scope, including the companion object of the type that you’re looking for.
scala> :paste
// Entering paste mode (ctrl-D to finish)

trait Monoid[A] {
  def mappend(a1: A, a2: A): A
  def mzero: A
}

object Monoid {
  implicit val IntMonoid: Monoid[Int] = new Monoid[Int] {
    def mappend(a: Int, b: Int): Int = a + b
    def mzero: Int = 0
  }
  implicit val StringMonoid: Monoid[String] = new Monoid[String] {
    def mappend(a: String, b: String): String = a + b
    def mzero: String = ""
  }
}

def sum[A: Monoid](xs: List[A]): A = {
  val m = implicitly[Monoid[A]]
  xs.foldLeft(m.mzero)(m.mappend)
}

// Exiting paste mode, now interpreting.

defined trait Monoid
defined module Monoid
sum: [A](xs: List[A])(implicit evidence$1: Monoid[A])A

scala> sum(List("a", "b", "c"))
res12: String = abc
You can still provide a different monoid directly to the function. We could provide an instance of monoid for Int using multiplication.
scala> val multiMonoid: Monoid[Int] = new Monoid[Int] { def mappend(a: Int, b: Int): Int = a * b def mzero: Int = 1 } multiMonoid: Monoid[Int] = $anon$1@48655fb6 scala> sum(List(1, 2, 3, 4))(multiMonoid) res14: Int = 24
What we wanted was a function that generalized on List. … So we want to generalize on the foldLeft operation.
scala> object FoldLeftList { def foldLeft[A, B](xs: List[A], b: B, f: (B, A) => B) = xs.foldLeft(b)(f) } defined module FoldLeftList scala> def sum[A: Monoid](xs: List[A]): A = { val m = implicitly[Monoid[A]] FoldLeftList.foldLeft(xs, m.mzero, m.mappend) } sum: [A](xs: List[A])(implicit evidence$1: Monoid[A])A scala> sum(List(1, 2, 3, 4)) res15: Int = 10 scala> sum(List("a", "b", "c")) res16: String = abc scala> sum(List(1, 2, 3, 4))(multiMonoid) res17: Int = 24
Now we can apply the same abstraction to pull out a FoldLeft typeclass.
scala> :paste
// Entering paste mode (ctrl-D to finish)

trait FoldLeft[F[_]] {
  def foldLeft[A, B](xs: F[A], b: B, f: (B, A) => B): B
}

object FoldLeft {
  implicit val FoldLeftList: FoldLeft[List] = new FoldLeft[List] {
    def foldLeft[A, B](xs: List[A], b: B, f: (B, A) => B) = xs.foldLeft(b)(f)
  }
}

def sum[M[_]: FoldLeft, A: Monoid](xs: M[A]): A = {
  val m = implicitly[Monoid[A]]
  val fl = implicitly[FoldLeft[M]]
  fl.foldLeft(xs, m.mzero, m.mappend)
}

// Exiting paste mode, now interpreting.

warning: there were 2 feature warnings; re-run with -feature for details
defined trait FoldLeft
defined module FoldLeft
sum: [M[_], A](xs: M[A])(implicit evidence$1: FoldLeft[M], implicit evidence$2: Monoid[A])A

scala> sum(List(1, 2, 3, 4))
res20: Int = 10

scala> sum(List("a", "b", "c"))
res21: String = abc
Both Int and List are now pulled out of sum.

In the above example, the traits Monoid and FoldLeft correspond to Haskell’s typeclasses. Scalaz provides many typeclasses.
All this is broken down into just the pieces you need. So, it’s a bit like ultimate ducktyping because you define in your function definition that this is what you need and nothing more.
If we were to write a function that sums two values using the Monoid, we need to call it like this:
scala> def plus[A: Monoid](a: A, b: A): A = implicitly[Monoid[A]].mappend(a, b) plus: [A](a: A, b: A)(implicit evidence$1: Monoid[A])A scala> plus(3, 4) res25: Int = 7
We would like to provide an operator. But we don’t want to enrich just one type; we want to enrich all types that have an instance of Monoid. Let me do this in Scalaz 7 style.
scala> trait MonoidOp[A] { val F: Monoid[A] val value: A def |+|(a2: A) = F.mappend(value, a2) } defined trait MonoidOp scala> implicit def toMonoidOp[A: Monoid](a: A): MonoidOp[A] = new MonoidOp[A] { val F = implicitly[Monoid[A]] val value = a } toMonoidOp: [A](a: A)(implicit evidence$1: Monoid[A])MonoidOp[A] scala> 3 |+| 4 res26: Int = 7 scala> "a" |+| "b" res28: String = ab
We were able to inject |+| into both Int and String with just one definition.

Using the same technique, Scalaz also provides method injections for standard library types like Option and Boolean:
scala> 1.some | 2 res0: Int = 1 scala> Some(1).getOrElse(2) res1: Int = 1 scala> (1 > 10)? 1 | 2 res3: Int = 2 scala> if (1 > 10) 1 else 2 res4: Int = 2
I hope you got some feel for where Scalaz is coming from.

Here’s a build.sbt to test Scalaz 7.1.0:

scalaVersion := "2.10.3"

libraryDependencies += "org.scalaz" %% "scalaz-core" % "7.1.0"

initialCommands in console := "import scalaz._, Scalaz._"
All you have to do now is open the REPL using sbt 0.13.0:
$ sbt console
...
[info] downloading ...
import scalaz._
import Scalaz._
Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_51).
Type in expressions to have them evaluated.
Type :help for more information.

scala>
There are also API docs generated for Scalaz 7.1.0.

Eiríkr Åsheim pointed out to me:
@eed3si9n hey, was reading your scalaz tutorials. you should encourage people to use =/= and not /== since the latter has bad precedence.— Eiríkr Åsheim (@d6) September 6, 2012.
LYAHFGG:

Read is sort of the opposite typeclass of Show. The read function takes a string and returns a type which is a member of Read.

I could not find a Scalaz equivalent for this typeclass.
LYAHFGG:

Enum members are sequentially ordered types; they can be enumerated. The main advantage of the Enum typeclass is that we can use its types in list ranges. They also have defined successors and predecessors.

Scalaz’s Enum provides min and max values as Option[T].

Num is a numeric typeclass. Its members have the property of being able to act like numbers.

I did not find a Scalaz equivalent for Num, Floating, and Integral.
I am now going to skip over to Chapter 8 Making Our Own Types and Typeclasses (Chapter 7 if you have the book) since the chapters in between are mostly about Haskell syntax.
Yesterday we reviewed a few basic typeclasses from Scalaz like Equal, using Learn You a Haskell for Great Good as the guide. We also created our own CanTruthy typeclass.
Note that the operation is only applied to the last value in the Tuple (see the scalaz group discussion).

trait Applicative[F[_]] extends Apply[F] { self =>
  def point[A](a: => A): F[A]

  /** alias for `point` */
  def pure[A](a: => A): F[A] = point(a)
  ...
}
So Applicative extends another typeclass, Apply, and introduces point and its alias pure.

Scalaz likes the name point instead of pure, and it seems like it’s basically a constructor that takes a value A and returns F[A]. It doesn’t introduce an operator, but it introduces the point method (and its symbolic alias η) to all data types.
As expected.
We can use <*>:
scala> 9.some <*> {(_: Int) + 3}.some res57: Option[Int] = Some(12) scala> 3.some <*> { 9.some <*> {(_: Int) + (_: Int)}.curried.some } res58: Option[Int] = Some(12)
Another thing I found in 7.0.0-M3 is a new notation that extracts values from containers and applies them to a single function:

scala> ^(3.some, 5.some) {_ + _}
res59: Option[Int] = Some(8)

scala> ^(3.some, none[Int]) {_ + _}
res60: Option[Int] = None
This can be done in Scalaz, but not easily.
scala> streamZipApplicative.ap(Tags.Zip(Stream(1, 2))) (Tags.Zip(Stream({(_: Int) + 3}, {(_: Int) * 2}))) res32: scala.collection.immutable.Stream[Int] with Object{type Tag = scalaz.Tags.Zip} = Stream(4, ?) scala> res32.toList res33: List[Int] = List(4, 4)
We’ll see more examples of tagged types tomorrow.
Yesterday we started with Functor, which adds the map operator, and ended with a polymorphic sequenceA function that uses Pointed[F].point and the Applicative ^(f1, f2) {_ :: _} syntax.

There was no :kind command in Scala 2.10, so I wrote one: kind.scala. With George Leontiev (@folone), who sent in scala/scala#2340, and others’ help, the :kind command is now part of Scala 2.11. Let’s try using it:
scala> :k Int
scala.Int's kind is A

scala> :k -v Int
scala.Int's kind is A
*
This is a proper type.

scala> :k -v Option
scala.Option's kind is F[+A]
* -(+)-> *
This is a type constructor: a 1st-order-kinded type.

scala> :k -v Either
scala.util.Either's kind is F[+A1,+A2]
* -(+)-> * -(+)-> *
This is a type constructor: a 1st-order-kinded type.

scala> :k -v Equal
scalaz.Equal's kind is F[A]
* -> *
This is a type constructor: a 1st-order-kinded type.

scala> :k -v Functor
scalaz.Functor's kind is X[F[A]]
(* -> *) -> *
This is a type constructor that takes type constructor(s): a higher-kinded type.

Proper types have the kind *. Using Scala’s type variable notation this could be written as A.

First-order-kinded types have the kind * -> *. Using Scala’s type variable notation they could be written as F[+A] and F[+A1,+A2].

Higher-kinded types have the kind (* -> *) -> *. Using Scala’s type variable notation this could be written as X[F[A]].
In the case of Scalaz 7.1, Equal and friends have the kind F[A] while Functor and all its derivatives have the kind X[F[A]].
Scala encodes (or complects) the notion of typeclass using type constructors, and the terminology tends to get jumbled up. For example, the data structure List forms a functor, in the sense that an instance Functor[List] can be derived for List. Since there should be only one instance for List, we can say that List is a functor. See the following discussion for more on “is-a”:
In FP, "is-a" means "an instance can be derived from." @jimduey #CPL14 It's a provable relationship, not reliant on LSP.— Jessica Kerr (@jessitron) February 25, 2014
Since List is F[+A], it’s easy to remember that F relates to a functor. Except, the typeclass definition Functor needs to wrap F[A] around, so its kind is X[F[A]]. To add to the confusion, the fact that Scala can treat type constructors as first-class variables was novel enough that the compiler calls first-order kinded types “higher-kinded types”:
scala> trait Test { type F[_] } <console>:14: warning: higher-kinded type should be enabled by making the implicit value scala.language.higherKinds visible. This can be achieved by adding the import clause 'import scala.language.higherKinds' or by setting the compiler option -language:higherKinds. See the Scala docs for value scala.language.higherKinds for a discussion why the feature should be explicitly enabled. type F[_] ^
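To make the “is-a means an instance can be derived from” point concrete, here is a minimal plain-Scala sketch (no Scalaz; all names are illustrative, not Scalaz’s real definitions):

```scala
// "List is a functor" just means an instance Functor[List] can be derived.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// the one canonical instance for List
implicit val listFunctor: Functor[List] = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa map f
}

// any F with a derivable Functor instance can be mapped over
def mapAny[F[_], A, B](fa: F[A])(f: A => B)(implicit F: Functor[F]): F[B] =
  F.map(fa)(f)
```

Here mapAny(List(1, 2, 3))(_ + 1) resolves Functor[List] implicitly and yields List(2, 3, 4).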
You normally don’t have pointed out to me that it should be
Equal[A].
@eed3si9n love the scalaz cheat sheet start, but using the type param F usually means Functor, what about A instead?— Adam Rosien (@arosien) September 1, 2012
Now it makes sense why! = ""
LYAHFGG:

So now that there are two equally valid ways for numbers (addition and multiplication) to be monoids, which way do we choose? Well, we don’t have to.

This is where Scalaz 7.1’s tagged types come in.
Yesterday we reviewed kinds and types, explored tagged types, and started looking at Semigroup and Monoid as a way of abstracting binary operations over various types. Scalaz also ships scalaz-scalacheck-binding, which can check typeclass laws against arbitrary values. Here’s how to check from the REPL:

$ sbt console
...
Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_45).
Type in expressions to have them evaluated.
Type :help for more information.

scala>
Here’s how you test if List meets the functor laws:
scala> functor.laws[List].check
+ functor.identity: OK, passed 100 tests.
+ functor.associative: OK, passed 100 tests.

Tags work with foldMap too:

scala> List(true, false, true) foldMap {Tags.Disjunction.apply}
res56: scalaz.@@[Boolean,scalaz.Tags.Disjunction] = true
This surely beats writing Tags.Disjunction(true) for each of them and connecting them with |+|.
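For intuition, the two functor laws that the check above verifies can be spot-checked by hand. This is a rough plain-Scala sketch of what ScalaCheck exercises over arbitrary values (the helper names are mine, not Scalaz’s):

```scala
// identity law: mapping identity changes nothing
def identityLaw[A](xs: List[A]): Boolean =
  (xs map identity) == xs

// composite (composition) law: mapping f then g equals mapping (g compose f)
def compositeLaw[A, B, C](xs: List[A], f: A => B, g: B => C): Boolean =
  (xs map f map g) == (xs map (g compose f))
```

ScalaCheck simply runs checks like these against 100 randomly generated inputs.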
We will pick it up from here later. I’ll be out on a business trip, so things might slow down.
On day 4 we reviewed typeclass laws like the Functor laws and used ScalaCheck to validate them on arbitrary examples of a typeclass. We also looked at three different ways of using Option as a Monoid, and looked at Foldable, which can foldMap etc.

Let’s see if monadic chaining simulates the pole balancing better:
scala> Monad[Option].point(Pole(0, 0)) >>= {_.landLeft(1)} >>= {_.landRight(4)} >>= {_.landLeft(-1)} >>= {_.landRight(-2)} res23: Option[Pole] = None
It works.

Plus, PlusEmpty, and ApplicativePlus require their instances to implement plus and empty, but at the type constructor (F[_]) level.

Plus introduces the <+> operator to append two containers:
scala> List(1, 2, 3) <+> List(4, 5, 6) res43: List[Int] = List(1, 2, 3, 4, 5, 6)
MonadPlus introduces the filter operation.
scala> (1 |-> 50) filter { x => x.shows contains '7' } res46: List[Int] = List(7, 17, 27, 37, 47)
WriterOps:

final class WriterOps[A](self: A) {
  def set[W](w: W): Writer[W, A] = WriterT.writer(w -> self)
  def tell: Writer[A, Unit] = WriterT.tell(self)
}

We can use MonadTell (in Scalaz 7.0 it was MonadWriter) to help us out:
scala> MonadTell[Writer, String] res62: scalaz.MonadTell[scalaz.Writer,String] = scalaz.WriterTInstances$$anon$1@6b8501fa scala> MonadTell[Writer, String].point(3).run res64: (String, Int) = ("",3))
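As a rough sketch of what Writer is doing, here is a hypothetical plain-Scala version with the log type fixed to Vector[String] (Scalaz’s real Writer abstracts the log over any Monoid; the names here are illustrative):

```scala
// a computed value paired with an accumulating log
case class Writer[A](log: Vector[String], value: A) {
  def map[B](f: A => B): Writer[B] = Writer(log, f(value))
  def flatMap[B](f: A => Writer[B]): Writer[B] = {
    val next = f(value)
    Writer(log ++ next.log, next.value) // logs are appended, values chained
  }
}

// record a message alongside the value
def logNumber(x: Int): Writer[Int] = Writer(Vector(s"Got number: $x"), x)

val result = for {
  a <- logNumber(3)
  b <- logNumber(5)
} yield a * b
```

result carries both the product 15 and the two log entries, without any mutable state.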
Like the book, let’s write a microbenchmark to compare the performance.
On day 6 we reviewed the for syntax, and checked out the Writer monad and the reader monad, which is basically using functions as monads.
On day 7 we reviewed the Applicative Builder, and looked at the State monad, the \/ monad, and Validation. Let’s continue on.
Learn You a Haskell for Great Good says:
In this section, we’re going to explore a few functions that either operate on monadic values or return monadic values as their results (or both!). Such functions are usually referred to as monadic functions.
In Scalaz, Monad extends Applicative, so there’s no question that all monads are functors. This means we can use the map or <*> operators.
LYAHFGG:
It turns out that any nested monadic value can be flattened and that this is actually a property unique to monads. For this, the join function exists.
In Scalaz, join (and its symbolic alias μ) is a method introduced by Bind:
trait BindOps[F[_],A] extends Ops[F[A]] {
  ...
  def join[B](implicit ev: A <~< F[B]): F[B] = F.bind(self)(ev(_))
  def μ[B](implicit ev: A <~< F[B]): F[B] = F.bind(self)(ev(_))
  ...
}
Let’s try it out:
scala> (Some(9.some): Option[Option[Int]]).join res9: Option[Int] = Some(9) scala> (Some(none): Option[Option[Int]]).join res10: Option[Int] = None scala> List(List(1, 2, 3), List(4, 5, 6)).join res12: List[Int] = List(1, 2, 3, 4, 5, 6) scala> 9.right[String].right[String].join res15: scalaz.Unapply[scalaz.Bind,scalaz.\/[String,scalaz.\/[String,Int]]]{type M[X] = scalaz.\/[String,X]; type A = scalaz.\/[String,Int]}#M[Int] = \/-(9) scala> "boom".left[Int].right[String].join res16: scalaz.Unapply[scalaz.Bind,scalaz.\/[String,scalaz.\/[String,Int]]]{type M[X] = scalaz.\/[String,X]; type A = scalaz.\/[String,Int]}#M[Int] = -\/(boom)
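Under the hood, join is just flatMap(identity). A plain-Scala sketch specialized to Option and List (the helper names are mine, not Scalaz’s):

```scala
// flatten one layer of Option nesting
def join[A](ffa: Option[Option[A]]): Option[A] = ffa flatMap identity

// for List it is the built-in flatten
def joinList[A](xss: List[List[A]]): List[A] = xss.flatten
```

This matches the REPL results above: an inner None collapses the whole value to None, and nested lists concatenate.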
LYAHFGG:
The filterM function from Control.Monad does just what we want! … The predicate returns a monadic value whose result is a Bool.
In Scalaz, filterM is implemented in several places.
trait ListOps[A] extends Ops[List[A]] {
  ...
  final def filterM[M[_] : Monad](p: A => M[Boolean]): M[List[A]] = l.filterM(self)(p)
  ...
}
scala> List(1, 2, 3) filterM { x => List(true, false) } res19: List[List[Int]] = List(List(1, 2, 3), List(1, 2), List(1, 3), List(1), List(2, 3), List(2), List(3), List()) scala> Vector(1, 2, 3) filterM { x => Vector(true, false) } res20: scala.collection.immutable.Vector[Vector[Int]] = Vector(Vector(1, 2, 3), Vector(1, 2), Vector(1, 3), Vector(1), Vector(2, 3), Vector(2), Vector(3), Vector())
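A plain-Scala sketch of filterM specialized to the List monad shows why a predicate returning List(true, false) produces the powerset. This is an illustrative reimplementation, not Scalaz’s actual code:

```scala
// monadic filter: the predicate returns a List of Booleans, and every
// combination of keep/drop decisions is explored
def filterM[A](xs: List[A])(p: A => List[Boolean]): List[List[A]] =
  xs.foldRight(List(List.empty[A])) { (x, acc) =>
    for {
      b    <- p(x)  // each possible decision for x
      rest <- acc   // each already-built tail
    } yield if (b) x :: rest else rest
  }
```

With p = _ => List(true, false), every element is independently kept or dropped, yielding all 2^n sublists in the same order as the REPL output above.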
LYAHFGG:
The monadic counterpart to foldl is foldM.
In Scalaz, this is implemented in Foldable as foldLeftM. There’s also foldRightM.
scala> def binSmalls(acc: Int, x: Int): Option[Int] = { if (x > 9) (none: Option[Int]) else (acc + x).some } binSmalls: (acc: Int, x: Int)Option[Int] scala> List(2, 8, 3, 1).foldLeftM(0) {binSmalls} res25: Option[Int] = Some(14) scala> List(2, 11, 3, 1).foldLeftM(0) {binSmalls} res26: Option[Int] = None
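Here is a hypothetical plain-Scala foldLeftM specialized to Option, enough to reproduce the binSmalls example without Scalaz (the signature is illustrative, not Scalaz’s):

```scala
// monadic left fold: each step may fail, and a single None short-circuits the rest
def foldLeftM[A, B](xs: List[A])(z: B)(f: (B, A) => Option[B]): Option[B] =
  xs.foldLeft(Option(z)) { (acc, x) => acc.flatMap(b => f(b, x)) }

// fails on any number greater than 9, otherwise accumulates the sum
def binSmalls(acc: Int, x: Int): Option[Int] =
  if (x > 9) None else Some(acc + x)
```

Once binSmalls returns None for 11, every subsequent flatMap keeps the None, which is exactly the short-circuiting seen in the REPL session above.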
LYAHFGG:
When we were solving the problem of implementing a RPN calculator, we noted that it worked fine as long as the input that it got made sense.
I did not cover that chapter, but the code is here so let’s translate it into Scala:
scala> def foldingFunction(list: List[Double], next: String): List[Double] = (list, next) match { case (x :: y :: ys, "*") => (y * x) :: ys case (x :: y :: ys, "+") => (y + x) :: ys case (x :: y :: ys, "-") => (y - x) :: ys case (xs, numString) => numString.toInt :: xs } foldingFunction: (list: List[Double], next: String)List[Double] scala> def solveRPN(s: String): Double = (s.split(' ').toList.foldLeft(Nil: List[Double]) {foldingFunction}).head solveRPN: (s: String)Double scala> solveRPN("10 4 3 + 2 * -") res27: Double = -4.0
Looks like it’s working. The next step is to change the folding function to handle errors gracefully. Scalaz adds parseInt to String, which returns Validation[NumberFormatException, Int]. We can call toOption on a validation to turn it into Option[Int] like the book:
scala> "1".parseInt.toOption res31: Option[Int] = Some(1) scala> "foo".parseInt.toOption res32: Option[Int] = None
Here’s the updated folding function:
scala> def foldingFunction(list: List[Double], next: String): Option[List[Double]] = (list, next) match { case (x :: y :: ys, "*") => ((y * x) :: ys).point[Option] case (x :: y :: ys, "+") => ((y + x) :: ys).point[Option] case (x :: y :: ys, "-") => ((y - x) :: ys).point[Option] case (xs, numString) => numString.parseInt.toOption map {_ :: xs} } foldingFunction: (list: List[Double], next: String)Option[List[Double]] scala> foldingFunction(List(3, 2), "*") res33: Option[List[Double]] = Some(List(6.0)) scala> foldingFunction(Nil, "*") res34: Option[List[Double]] = None scala> foldingFunction(Nil, "wawa") res35: Option[List[Double]] = None
Here’s the updated solveRPN:
scala> def solveRPN(s: String): Option[Double] = for { List(x) <- s.split(' ').toList.foldLeftM(Nil: List[Double]) {foldingFunction} } yield x solveRPN: (s: String)Option[Double] scala> solveRPN("1 2 * 4 +") res36: Option[Double] = Some(6.0) scala> solveRPN("1 2 * 4") res37: Option[Double] = None scala> solveRPN("1 8 garbage") res38: Option[Double] = None
LYAHFGG:
When we were learning about the monad laws, we said that the <=< function is just like composition, only instead of working for normal functions like a -> b, it works for monadic functions like a -> m b.
Looks like I missed this one too.
In Scalaz there’s a special wrapper for functions of type A => M[B] called Kleisli:
sealed trait Kleisli[M[+_], -A, +B] { self =>
  def run(a: A): M[B]
  ...
  /** alias for `andThen` */
  def >=>[C](k: Kleisli[M, B, C])(implicit b: Bind[M]): Kleisli[M, A, C] =
    kleisli((a: A) => b.bind(this(a))(k(_)))
  def andThen[C](k: Kleisli[M, B, C])(implicit b: Bind[M]): Kleisli[M, A, C] =
    this >=> k
  /** alias for `compose` */
  def <=<[C](k: Kleisli[M, C, A])(implicit b: Bind[M]): Kleisli[M, C, B] =
    k >=> this
  def compose[C](k: Kleisli[M, C, A])(implicit b: Bind[M]): Kleisli[M, C, B] =
    k >=> this
  ...
}

object Kleisli extends KleisliFunctions with KleisliInstances {
  def apply[M[+_], A, B](f: A => M[B]): Kleisli[M, A, B] = kleisli(f)
}
We can use the Kleisli object to construct one:
scala> val f = Kleisli { (x: Int) => (x + 1).some } f: scalaz.Kleisli[Option,Int,Int] = scalaz.KleisliFunctions$$anon$18@7da2734e scala> val g = Kleisli { (x: Int) => (x * 100).some } g: scalaz.Kleisli[Option,Int,Int] = scalaz.KleisliFunctions$$anon$18@49e07991
We can then compose the functions using <=<, which runs the rhs first, like f compose g:
scala> 4.some >>= (f <=< g) res59: Option[Int] = Some(401)
There’s also >=>, which runs the lhs first, like f andThen g:
scala> 4.some >>= (f >=> g) res60: Option[Int] = Some(500)
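The composition rules can be sketched without Scalaz by specializing the monad to Option (KleisliOpt is an illustrative name; Scalaz’s real Kleisli abstracts over any M with a Bind instance):

```scala
// a function A => Option[B] with Kleisli composition operators
final case class KleisliOpt[A, B](run: A => Option[B]) {
  // andThen: run this, then feed the result into k
  def >=>[C](k: KleisliOpt[B, C]): KleisliOpt[A, C] =
    KleisliOpt(a => run(a) flatMap k.run)
  // compose: run k first, then this
  def <=<[C](k: KleisliOpt[C, A]): KleisliOpt[C, B] = k >=> this
}

val f = KleisliOpt((x: Int) => Some(x + 1))
val g = KleisliOpt((x: Int) => Some(x * 100))
```

(f <=< g).run(4) multiplies first and then increments, giving Some(401), while (f >=> g).run(4) increments first, giving Some(500), matching the Scalaz session above.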
As a bonus, Scalaz defines Reader as a special case of Kleisli as follows:
type ReaderT[F[+_], E, A] = Kleisli[F, E, A]
type Reader[E, A] = ReaderT[Id, E, A]

object Reader {
  def apply[E, A](f: E => A): Reader[E, A] = Kleisli[Id, E, A](f)
}
We can rewrite the reader example from day 6 as follows:
scala> val addStuff: Reader[Int, Int] = for { a <- Reader { (_: Int) * 2 } b <- Reader { (_: Int) + 10 } } yield a + b addStuff: scalaz.Reader[Int,Int] = scalaz.KleisliFunctions$$anon$18@343bd3ae scala> addStuff(3) res76: scalaz.Id.Id[Int] = 19
The fact that we are using function as a monad becomes somewhat clearer here.
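A hypothetical plain-Scala Reader makes the same point without any Scalaz machinery (the class here is illustrative, not Scalaz’s definition):

```scala
// Reader is just a wrapped function E => A; flatMap threads the same
// environment E into every step
case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] =
    Reader(e => f(run(e)).run(e))
}

val addStuff: Reader[Int, Int] = for {
  a <- Reader((_: Int) * 2)
  b <- Reader((_: Int) + 10)
} yield a + b
```

addStuff.run(3) applies both functions to the shared environment 3 and returns 19, just like the Scalaz version.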
LYAHFGG:
In this section, we’re going to look at an example of how a type gets made, identified as a monad, and then given the appropriate Monad instance. … What if we wanted to model a non-deterministic value like [3,5,9], but we wanted to express that 3 has a 50% chance of happening and 5 and 9 both have a 25% chance of happening?
Since Scala doesn’t have a built-in rational, let’s just use Double. Here’s the case class:
scala> :paste // Entering paste mode (ctrl-D to finish) case class Prob[A](list: List[(A, Double)]) trait ProbInstances { implicit def probShow[A]: Show[Prob[A]] = Show.showA } case object Prob extends ProbInstances // Exiting paste mode, now interpreting. defined class Prob defined trait ProbInstances defined module Prob
Is this a functor? Well, the list is a functor, so this should probably be a functor as well, because we just added some stuff to the list.
scala> :paste
// Entering paste mode (ctrl-D to finish)

case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  implicit val probInstance = new Functor[Prob] {
    def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.showA
}

case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

scala> Prob((3, 0.5) :: (5, 0.25) :: (9, 0.25) :: Nil) map {-_}
res77: Prob[Int] = Prob(List((-3,0.5), (-5,0.25), (-9,0.25)))
Just like the book, we are going to implement flatten first.

case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  def flatten[B](xs: Prob[Prob[B]]): Prob[B] = {
    def multall(innerxs: Prob[B], p: Double) =
      innerxs.list map { case (x, r) => (x, p * r) }
    Prob((xs.list map { case (innerxs, p) => multall(innerxs, p) }).flatten)
  }

  implicit val probInstance = new Functor[Prob] {
    def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.showA
}

case object Prob extends ProbInstances
This should be enough prep work for monad:
scala> :paste
// Entering paste mode (ctrl-D to finish)

case class Prob[A](list: List[(A, Double)])

trait ProbInstances {
  def flatten[B](xs: Prob[Prob[B]]): Prob[B] = {
    def multall(innerxs: Prob[B], p: Double) =
      innerxs.list map { case (x, r) => (x, p * r) }
    Prob((xs.list map { case (innerxs, p) => multall(innerxs, p) }).flatten)
  }

  implicit val probInstance = new Functor[Prob] with Monad[Prob] {
    def point[A](a: => A): Prob[A] = Prob((a, 1.0) :: Nil)
    def bind[A, B](fa: Prob[A])(f: A => Prob[B]): Prob[B] = flatten(map(fa)(f))
    override def map[A, B](fa: Prob[A])(f: A => B): Prob[B] =
      Prob(fa.list map { case (x, p) => (f(x), p) })
  }
  implicit def probShow[A]: Show[Prob[A]] = Show.showA
}

case object Prob extends ProbInstances

// Exiting paste mode, now interpreting.

defined class Prob
defined trait ProbInstances
defined module Prob
The book says it satisfies the monad laws. Let’s implement the Coin example:
scala> :paste
// Entering paste mode (ctrl-D to finish)

sealed trait Coin
case object Heads extends Coin
case object Tails extends Coin

implicit val coinEqual: Equal[Coin] = Equal.equalA

def coin: Prob[Coin] = Prob(Heads -> 0.5 :: Tails -> 0.5 :: Nil)
def loadedCoin: Prob[Coin] = Prob(Heads -> 0.1 :: Tails -> 0.9 :: Nil)

def flipThree: Prob[Boolean] = for {
  a <- coin
  b <- coin
  c <- loadedCoin
} yield { List(a, b, c) all {_ === Tails} }

// Exiting paste mode, now interpreting.

defined trait Coin
defined module Heads
defined module Tails
coin: Prob[Coin]
loadedCoin: Prob[Coin]
flipThree: Prob[Boolean]

scala> flipThree
res81: Prob[Boolean] = Prob(List((false,0.025), (false,0.225), (false,0.025), (false,0.225), (false,0.025), (false,0.225), (false,0.025), (true,0.225)))
So the probability of having all three coins land on Tails, even with a loaded coin, is pretty low.
We will continue from here later.
On day 8 we reviewed the monadic functions join, filterM, and foldLeftM, implemented a safe RPN calculator, looked at Kleisli to compose monadic functions, and implemented our own monad, Prob.
Anyway, let’s see some of the typeclasses that we didn’t have the opportunity to cover.
Scalaz 7.0 contains several typeclasses that are now deemed lawless by the Scalaz project: Length, Index, and Each. Some discussions can be found in #278 What to do about lawless classes? and (presumably) Bug in IndexedSeq Index typeclass. The three will be deprecated in 7.1 and removed in 7.2.
For running side effects along a data structure, there’s Each:
trait Each[F[_]] { self =>
  def each[A](fa: F[A])(f: A => Unit)
}
This introduces the foreach method:
sealed abstract class EachOps[F[_],A] extends Ops[F[A]] {
  final def foreach(f: A => Unit): Unit = F.each(self)(f)
}
Some of the functionality above can be emulated using Foldable, but as @nuttycom suggested, that would force O(n) time even when the underlying data structure implements constant-time length and index. At that point, we’d be better off rolling our own Length if it’s actually useful to abstract over length.
If inconsistent implementations of these typeclasses were somehow compromising type safety, I’d understand removing them from the library, but Length and Index sound like a legitimate abstraction of randomly accessible containers like Vector.
There actually was another set of typeclasses that was axed earlier: Pointed and Copointed. There were more interesting arguments on them, which can be found in Pointed/Copointed and Why not Pointed?:
Pointedhas no useful laws and almost all applications people point to for it are actually abuses of ad hoc relationships it happens to have for the instances it does offer.
This actually is an interesting line of argument that I can understand. In other words, if any container can qualify as Pointed, the code using it either is not very useful or is likely making specific assumptions about the instance.
@eed3si9n "axiomatic" would be better.— Miles Sabin (@milessabin) December 29, 2013
@eed3si9n Foldable too (unless it also has a Functor, but then nothing past parametricity) - but Reducer has laws!— Brian McKenna (@puffnfresh) December 29, 2013
Here’s an example of stacking ReaderT, the monad transformer version of Reader, on top of the Option monad:
scala> :paste
// Entering paste mode (ctrl-D to finish)

type ReaderTOption[A, B] = ReaderT[Option, A, B]

object ReaderTOption extends KleisliInstances with KleisliFunctions {
  def apply[A, B](f: A => Option[B]): ReaderTOption[A, B] = kleisli(f)
}
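To build intuition for what the stacked type does, here is a hypothetical plain-Scala analogue of ReaderT over Option; the configure helper and all names are illustrative, not Scalaz’s:

```scala
// a computation that reads an environment E and may fail with None
case class ReaderTOption[E, A](run: E => Option[A]) {
  def map[B](f: A => B): ReaderTOption[E, B] =
    ReaderTOption(e => run(e) map f)
  def flatMap[B](f: A => ReaderTOption[E, B]): ReaderTOption[E, B] =
    ReaderTOption(e => run(e) flatMap { a => f(a).run(e) })
}

// look up a key from a config map; a missing key short-circuits to None
def configure(key: String): ReaderTOption[Map[String, String], String] =
  ReaderTOption(config => config.get(key))

val host = for {
  h <- configure("host")
  p <- configure("port")
} yield s"$h:$p"
```

Both the environment threading (Reader) and the failure handling (Option) compose automatically inside the one for comprehension.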
There’s a useful method that Traverse introduces called sequence. The name comes from Haskell’s sequence function, so let’s Hoogle it:
sequence :: Monad m => [m a] -> m [a]

Evaluate each action in the sequence from left to right, and collect the results.

There’s also TreeOps[A], injecting two methods:
package scalaz
package syntax

trait TreeOps[A] extends Ops[A] {
  def node(subForest: Tree[A]*): Tree[A] = ...
  def leaf: Tree[A] = ...
}

trait ToTreeOps {
  implicit def ToTreeOps[A](a: A) = new TreeOps[A] { def self = a }
}
So these are injected methods to create a Tree.
scala> 1.node(2.leaf) res7: scalaz.Tree[Int] = <tree>
The same goes for WriterOps[A], ValidationOps[A], ReducerOps[A], etc.
Or, as I like to call it, dim sum style, where they bring in a cart load of Chinese dishes and you pick what you want.
If for whatever reason you do not wish to import the entire
Scalaz._, you can pick and choose.
Yesterday we looked at what
import scalaz._ and
Scalaz._ bring into the scope, and also talked about a la carte style import. Knowing how instances and syntax are organized prepares us for the next step, which is to hack on Scalaz.
Before we start hacking on a project, it’s probably a good idea to join its Google Group.
$ git clone -b series/7.1.x git://github.com/scalaz/scalaz.git scalaz
The above should clone
series/7.1.x branch into
./scalaz directory. Next I edited the
.git/config as follows:
[core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true ignorecase = true [remote "upstream"] fetch = +refs/heads/*:refs/remotes/origin/* url = git://github.com/scalaz/scalaz.git [branch "series/7.1.x"] remote = upstream merge = refs/heads/series/7.1.x
This way,
scalaz/scalaz is referenced using the name
upstream instead of origin. To track the changes, run:
$ git pull --rebase Current branch series/7.1.x is up to date.
Next, launch sbt 0.13.5, set scala version to 2.11.1, switch to
core project, and compile:
$ sbt
scalaz> ++ 2.11.1
Setting version to 2.11.1
[info] Set current project to scalaz (in build file:/Users/eed3si9n/work/scalaz/)
scalaz> project core
[info] Set current project to scalaz-core (in build file:/Users/eed3si9n/work/scalaz/)
scalaz-core> compile

To try things out in the REPL:

scalaz-core> console
[info] Starting scala interpreter...
Welcome to Scala version 2.11.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_33).
Type in expressions to have them evaluated.
Type :help for more information.

scala> [Ctrl + D to exit]
$ git co scalaz-seven Switched to branch 'scalaz-seven' $ git branch snapshot $ git co snapshot $ git merge topic/vectorinstance
We can use this branch as a sandbox to play around with Scalaz.
I am committing this as “roll back <*> as infix of ap” and pushing it out.
$ git push fork topic/applyops ... * [new branch] topic/applyops -> topic/applyops
Send a pull request with some comments. Let’s apply this to our
snapshot branch:
$ git co snapshot $ git merge topic/applyops
So now it has both of the changes we created.
Here’s
Category[=>:[_, _]]:
trait Category[=>:[_, _]] extends Compose[=>:] { self => /** The left and right identity over `compose`. */ def id[A]: A =>: A }
Here’s
sequenceU:
scala> failedTree.sequenceU res3: scalaz.Validation[String,scalaz.Tree[Int]] = Failure(boom)
Boom.
We’ll cover some other topic later.
Yesterday we looked at
Arrow as a way of abstracting function-like things and
Unapply as a way of providing typeclass meta-instances. We also continued on with the applicative experiment by implementing
XProduct that supports parallel compositions.
Pure functions don’t imply that they are computationally cheap.
What is functional programming? Rúnar Óli defines it as:
programming with functions.
What’s a function?
f: A => B relates every value of type
A to exactly one value of type
B.
Yesterday we looked at
Memo for caching computation results, and
ST as a way of encapsulating mutation. Today we’ll continue into IO.
On day 17 we looked at the IO monad as a way of abstracting side effects, and Iteratees as a way of handling streams. And the series ended.
I’ve updated this post quite a bit based on the guidance by Rúnar. See source in github for older revisions.
So we represent it as a syntax tree where subsequent commands are leaves of prior commands:
data Toy b next = Output b next | Bell next | Done
Here’s
Toy translated into Scala as is:
scala> :paste // Entering paste mode (ctrl-D to finish) sealed trait Toy[+A, +Next] case class Output[A, Next](a: A, next: Next) extends Toy[A, Next] case class Bell[Next](next: Next) extends Toy[Nothing, Next] case class Done() extends Toy[Nothing, Nothing] // Exiting paste mode, now interpreting. scala> Output('A', Done()) res0: Output[Char,Done] = Output(A,Done()) scala> Bell(Output('A', Done())) res1: Bell[Output[Char,Done]] = Bell(Output(A,Done()))
WFMM’s DSL takes the type of output data as one of the type parameters, so it’s able to handle any output types. As demonstrated above as
Toy, Scala can do this too. But doing so unnecessarily complicates the demonstration of
Free because of Scala’s handling of partially applied types. So we’ll first hardcode the data type to
Char as follows:

scala> :paste
// Entering paste mode (ctrl-D to finish)

sealed trait CharToy[+Next]
object CharToy {
  case class CharOutput[Next](a: Char, next: Next) extends CharToy[Next]
  case class CharBell[Next](next: Next) extends CharToy[Next]
  case class CharDone() extends CharToy[Nothing]

  def output[Next](a: Char, next: Next): CharToy[Next] = CharOutput(a, next)
  def bell[Next](next: Next): CharToy[Next] = CharBell(next)
  def done: CharToy[Nothing] = CharDone()
}

// Exiting paste mode, now interpreting.

scala> import CharToy._
import CharToy._

scala> output('A', done)
res0: CharToy[CharToy[Nothing]] = CharOutput(A,CharDone())

scala> bell(output('A', done))
res1: CharToy[CharToy[CharToy[Nothing]]] = CharBell(CharOutput(A,CharDone()))
I’ve added helper functions lowercase
output,
bell, and
done to unify the types to
CharToy.
WFMM:
but unfortunately this doesn’t work because every time I want to add a command, it changes the type.
Let’s define
Fix:
scala> :paste // Entering paste mode (ctrl-D to finish) case class Fix[F[_]](f: F[Fix[F]]) object Fix { def fix(toy: CharToy[Fix[CharToy]]) = Fix[CharToy](toy) } // Exiting paste mode, now interpreting. scala> import Fix._ import Fix._ scala> fix(output('A', fix(done))) res4: Fix[CharToy] = Fix(CharOutput(A,Fix(CharDone()))) scala> fix(bell(fix(output('A', fix(done))))) res5: Fix[CharToy] = Fix(CharBell(Fix(CharOutput(A,Fix(CharDone())))))
Again,
fix is provided so that the type inference works.
We are also going to try to implement
FixE, which adds exception to this. Since
throw and
catch are reserved, I am renaming them to
throwy and
catchy:
scala> :paste // Entering paste mode (ctrl-D to finish) sealed trait FixE[F[_], E] object FixE { case class Fix[F[_], E](f: F[FixE[F, E]]) extends FixE[F, E] case class Throwy[F[_], E](e: E) extends FixE[F, E] def fix[E](toy: CharToy[FixE[CharToy, E]]): FixE[CharToy, E] = Fix[CharToy, E](toy) def throwy[F[_], E](e: E): FixE[F, E] = Throwy(e) def catchy[F[_]: Functor, E1, E2](ex: => FixE[F, E1]) (f: E1 => FixE[F, E2]): FixE[F, E2] = ex match { case Fix(x) => Fix[F, E2](Functor[F].map(x) {catchy(_)(f)}) case Throwy(e) => f(e) } } // Exiting paste mode, now interpreting.
We can only use this if Toy b is a functor, so we muddle around until we find something that type-checks (and satisfies the Functor laws).
Let’s define
Functor for
CharToy:
scala> implicit val charToyFunctor: Functor[CharToy] = new Functor[CharToy] {
         def map[A, B](fa: CharToy[A])(f: A => B): CharToy[B] = fa match {
           case o: CharOutput[A] => CharOutput(o.a, f(o.next))
           case b: CharBell[A]   => CharBell(f(b.next))
           case CharDone()       => CharDone()
         }
       }
charToyFunctor: scalaz.Functor[CharToy] = $anon$1@7bc135fe
Here’s the sample usage:
scala> :paste // Entering paste mode (ctrl-D to finish) import FixE._ case class IncompleteException() def subroutine = fix[IncompleteException]( output('A', throwy[CharToy, IncompleteException](IncompleteException()))) def program = catchy[CharToy, IncompleteException, Nothing](subroutine) { _ => fix[Nothing](bell(fix[Nothing](done))) }
The fact that we need to supply type parameters everywhere is a bit unfortunate.
return was our
Throw, and
(>>=) was our
catch. Let’s re-implement the
CharToy commands based on
Free:

scala> :paste
// Entering paste mode (ctrl-D to finish)

sealed trait CharToy[+Next]
object CharToy {
  case class CharOutput[Next](a: Char, next: Next) extends CharToy[Next]
  case class CharBell[Next](next: Next) extends CharToy[Next]
  case class CharDone() extends CharToy[Nothing]

  implicit val charToyFunctor: Functor[CharToy] = new Functor[CharToy] {
    def map[A, B](fa: CharToy[A])(f: A => B): CharToy[B] = fa match {
      case o: CharOutput[A] => CharOutput(o.a, f(o.next))
      case b: CharBell[A]   => CharBell(f(b.next))
      case CharDone()       => CharDone()
    }
  }
  def output(a: Char): Free[CharToy, Unit] =
    Free.Suspend(CharOutput(a, Free.Return[CharToy, Unit](())))
  def bell: Free[CharToy, Unit] =
    Free.Suspend(CharBell(Free.Return[CharToy, Unit](())))
  def done: Free[CharToy, Unit] =
    Free.Suspend(CharDone())
}

// Exiting paste mode, now interpreting.

defined trait CharToy
defined module CharToy
I’ll be damned if that’s not a common pattern we can abstract.
Let’s add the
liftF refactoring. We also need a
return equivalent, which we’ll call
pointed:

scala> :paste
// Entering paste mode (ctrl-D to finish)

sealed trait CharToy[+Next]
object CharToy {
  case class CharOutput[Next](a: Char, next: Next) extends CharToy[Next]
  case class CharBell[Next](next: Next) extends CharToy[Next]
  case class CharDone() extends CharToy[Nothing]

  implicit val charToyFunctor: Functor[CharToy] = new Functor[CharToy] {
    def map[A, B](fa: CharToy[A])(f: A => B): CharToy[B] = fa match {
      case o: CharOutput[A] => CharOutput(o.a, f(o.next))
      case b: CharBell[A]   => CharBell(f(b.next))
      case CharDone()       => CharDone()
    }
  }
  private def liftF[F[+_]: Functor, R](command: F[R]): Free[F, R] =
    Free.Suspend[F, R](Functor[F].map(command) { Free.Return[F, R](_) })
  def output(a: Char): Free[CharToy, Unit] = liftF[CharToy, Unit](CharOutput(a, ()))
  def bell: Free[CharToy, Unit] = liftF[CharToy, Unit](CharBell(()))
  def done: Free[CharToy, Unit] = liftF[CharToy, Unit](CharDone())
  def pointed[A](a: A) = Free.Return[CharToy, A](a)
}

// Exiting paste mode, now interpreting.
Here’s the command sequence:
scala> import CharToy._ import CharToy._ scala> val subroutine = output('A') subroutine: scalaz.Free[CharToy,Unit] = Suspend(CharOutput(A,Return(()))) scala> val program = for { _ <- subroutine _ <- bell _ <- done } yield () program: scalaz.Free[CharToy,Unit] = Gosub(<function0>,<function1>)
This is where things get magical. We now have
do notation for something that hasn’t even been interpreted yet: it’s pure data.
Next we’d like to define
showProgram to prove that what we have is just data. WFMM defines
showProgram using simple pattern matching, but it doesn’t quite work that way for our implementation of Free. Instead we can use resume:

scala> def showProgram[R: Show](p: Free[CharToy, R]): String =
         p.resume.fold({
           case CharOutput(a, next) =>
             "output " + Show[Char].shows(a) + "\n" + showProgram(next)
           case CharBell(next) =>
             "bell " + "\n" + showProgram(next)
           case CharDone() =>
             "done\n"
         },
         { r: R => "return " + Show[R].shows(r) + "\n" })
showProgram: [R](p: scalaz.Free[CharToy,R])(implicit evidence$1: scalaz.Show[R])String

scala> showProgram(program)
res12: String =
"output A
bell
done
"
Here’s the pretty printer:
scala> def pretty[R: Show](p: Free[CharToy, R]) = print(showProgram(p)) pretty: [R](p: scalaz.Free[CharToy,R])(implicit evidence$1: scalaz.Show[R])Unit scala> pretty(output('A')) output A return ()
Now is the moment of truth. Does this monad generated using
Free satisfy monad laws?
scala> pretty(output('A')) output A return () scala> pretty(pointed('A') >>= output) output A return () scala> pretty(output('A') >>= pointed) output A return () scala> pretty((output('A') >> done) >> output('C')) output A done scala> pretty(output('A') >> (done >> output('C'))) output A done
Looking good. Also notice the “abort” semantics of
done.
It’s no secret that some of the fundamentals of Scalaz and Haskell, like Monoid and Functor, come from category theory. Let’s try studying category theory and see if we can use the knowledge to further our understanding of Scalaz.
CM:
One very useful sort of set is a ‘singleton’ set, a set with exactly one element. Fix one of these, say
{me}, and call this set ’1‘.
Definition: A point of a set X is an arrow 1 => X. … (If A is some familiar set, an arrow from A to X is called an ’A-element’ of X; thus ’1-elements’ are points.) Since a point is an arrow, we can compose it with another arrow, and get a point again.
If I understand what’s going on, it seems like CM is redefining the concept of the element as a special case of arrow. Another name for singleton is unit set, and in Scala it is
(): Unit. So it’s analogous to saying that values are sugar for
Unit => X.
scala> val johnPoint: Unit => Person = { case () => John } johnPoint: Unit => Person = <function1> scala> favoriteBreakfast compose johnPoint res1: Unit => Breakfast = <function1> scala> res1(()) res2: Breakfast = Eggs
First-class functions in programming languages that support functional programming treat functions as values, which allows higher-order functions. Category theory unifies in the other direction by treating values as functions.
Sessions 2 and 3 contain a nice review of Article I, so you should read them if you own the book.
One part of the sessions that I thought was interesting was about the equality of arrows. Many of the discussions in category theory revolve around the equality of arrows, but how do we test whether an arrow f is equal to g?
Two maps are equal when they have the same three ingredients:
Because of 1, we can test for equality of arrows of sets f: A => B and g: A => B using this test:
If for each point a: 1 => A, f ∘ a = g ∘ a, then f = g.
This reminds me of scalacheck. Let’s try implementing a check for
f: Person => Breakfast:
scala> :paste // Entering paste mode (ctrl-D to finish) sealed trait Person {} case object John extends Person {} case object Mary extends Person {} case object Sam extends Person {} sealed trait Breakfast {} case object Eggs extends Breakfast {} case object Oatmeal extends Breakfast {} case object Toast extends Breakfast {} case object Coffee extends Breakfast {} val favoriteBreakfast: Person => Breakfast = { case John => Eggs case Mary => Coffee case Sam => Coffee } val favoritePerson: Person => Person = { case John => Mary case Mary => John case Sam => Mary } val favoritePersonsBreakfast = favoriteBreakfast compose favoritePerson // Exiting paste mode, now interpreting. scala> import org.scalacheck.{Prop, Arbitrary, Gen} import org.scalacheck.{Prop, Arbitrary, Gen} scala> def arrowEqualsProp(f: Person => Breakfast, g: Person => Breakfast) (implicit ev1: Equal[Breakfast], ev2: Arbitrary[Person]): Prop = Prop.forAll { a: Person => f(a) === g(a) } arrowEqualsProp: (f: Person => Breakfast, g: Person => Breakfast) (implicit ev1: scalaz.Equal[Breakfast], implicit ev2: org.scalacheck.Arbitrary[Person])org.scalacheck.Prop scala> implicit val arbPerson: Arbitrary[Person] = Arbitrary { Gen.oneOf(John, Mary, Sam) } arbPerson: org.scalacheck.Arbitrary[Person] = org.scalacheck.Arbitrary$$anon$2@41ec9951 scala> implicit val breakfastEqual: Equal[Breakfast] = Equal.equalA[Breakfast] breakfastEqual: scalaz.Equal[Breakfast] = scalaz.Equal$$anon$4@783babde scala> arrowEqualsProp(favoriteBreakfast, favoritePersonsBreakfast) res0: org.scalacheck.Prop = Prop scala> res0.check ! Falsified after 1 passed tests. > ARG_0: John scala> arrowEqualsProp(favoriteBreakfast, favoriteBreakfast) res2: org.scalacheck.Prop = Prop scala> res2.check + OK, passed 100 tests.
We can generalize
arrowEqualsProp a bit:
scala> def arrowEqualsProp[A, B](f: A => B, g: A => B) (implicit ev1: Equal[B], ev2: Arbitrary[A]): Prop = Prop.forAll { a: A => f(a) === g(a) } arrowEqualsProp: [A, B](f: A => B, g: A => B) (implicit ev1: scalaz.Equal[B], implicit ev2: org.scalacheck.Arbitrary[A])org.scalacheck.Prop scala> arrowEqualsProp(favoriteBreakfast, favoriteBreakfast) res4: org.scalacheck.Prop = Prop scala> res4.check + OK, passed 100 tests.
CM:
Definitions: An arrow f: A => B is called an isomorphism, or invertible arrow, if there is a map g: B => A, for which g ∘ f = 1A and f ∘ g = 1B. An arrow g related to f by satisfying these equations is called an inverse for f. Two objects A and B are said to be isomorphic if there is at least one isomorphism f: A => B.
In Scalaz you can express this using the traits defined in
Isomorphism.
sealed abstract class Isomorphisms extends IsomorphismsLow0{ /**Isomorphism for arrows of kind * -> * -> * */ trait Iso[Arr[_, _], A, B] { self => def to: Arr[A, B] def from: Arr[B, A] } /**Set isomorphism */ type IsoSet[A, B] = Iso[Function1, A, B] /**Alias for IsoSet */ type <=>[A, B] = IsoSet[A, B] } object Isomorphism extends Isomorphisms
It also contains isomorphism for higher kinds, but
IsoSet would do for now.
scala> :paste // Entering paste mode (ctrl-D to finish) sealed trait Family {} case object Mother extends Family {} case object Father extends Family {} case object Child extends Family {} sealed trait Relic {} case object Feather extends Relic {} case object Stone extends Relic {} case object Flower extends Relic {} import Isomorphism.<=> val isoFamilyRelic = new (Family <=> Relic) { val to: Family => Relic = { case Mother => Feather case Father => Stone case Child => Flower } val from: Relic => Family = { case Feather => Mother case Stone => Father case Flower => Child } } isoFamilyRelic: scalaz.Isomorphism.<=>[Family,Relic]{val to: Family => Relic; val from: Relic => Family} = 1@12e3914c
It’s encouraging to see support for isomorphisms in Scalaz. Hopefully we are going in the right direction.
Notation: If f: A => B has an inverse, then the (one and only) inverse for f is denoted by the symbol f-1 (read ’f-inverse’ or ‘the inverse of f‘.)
We can check if the above
isoFamilyRelic satisfies the definition using
arrowEqualsProp.
scala> :paste // Entering paste mode (ctrl-D to finish) implicit val familyEqual = Equal.equalA[Family] implicit val relicEqual = Equal.equalA[Relic] implicit val arbFamily: Arbitrary[Family] = Arbitrary { Gen.oneOf(Mother, Father, Child) } implicit val arbRelic: Arbitrary[Relic] = Arbitrary { Gen.oneOf(Feather, Stone, Flower) } // Exiting paste mode, now interpreting. scala> arrowEqualsProp(isoFamilyRelic.from compose isoFamilyRelic.to, identity[Family] _) res22: org.scalacheck.Prop = Prop scala> res22.check + OK, passed 100 tests. scala> arrowEqualsProp(isoFamilyRelic.to compose isoFamilyRelic.from, identity[Relic] _) res24: org.scalacheck.Prop = Prop scala> res24.check + OK, passed 100 tests.
CM:
1. The ‘determination’ (or ‘extension’) problem
Given f and h as shown, what are all g, if any, for which h = g ∘ f?
2. The ‘choice’ (or ‘lifting’) problem
Given g and h as shown, what are all f, if any, for which h = g ∘ f?
These two notions are analogous to division problem.
Definitions: If f: A => B:
- a retraction for f is an arrow r: B => A for which r ∘ f = 1A
- a section for f is an arrow s: B => A for which f ∘ s = 1B
Here’s the external diagram for retraction problem:
and one for section problem:
If an arrow f: A => B satisfies the property ‘for any y: T => B there exists an x: T => A such that f ∘ x = y‘, it is often said to be ‘surjective for arrows from T.’
I came up with my own example to think about what surjective means in set theory:
Suppose John and friends are on their way to India, and they are given two choices for their lunch in the flight: chicken wrap or spicy chick peas. Surjective means that given a meal, you can find at least one person who chose the meal. In other words, all elements in codomain are covered.
Now recall that we can generalize the concept of elements by introducing singleton explicitly.
Compare this to the category theory’s definition of surjective: for any y: T => B there exists an x: T => A such that f ∘ x = y. For any arrow going from 1 to B (lunch), there is an arrow going from 1 to A (person) such that f ∘ x = y. In other words, f is surjective for arrows from 1.
Let’s look at this using an external diagram.
This is essentially the same diagram as the choice problem.
Definitions: An arrow f satisfying the property ‘for any pair of arrows x1: T => A and x2: T => A, if f ∘ x1 = f ∘ x2 then x1 = x2‘, it is said to be injective for arrows from T.
If f is injective for arrows from T for every T, one says that f is injective, or is a monomorphism.
Here’s what injective means in terms of sets:
All elements in the codomain are mapped to at most once. We can imagine a third object T, which maps to John, Mary, and Sam. Any of the compositions would still land on a unique meal. Here’s the external diagram:
Definition: An arrow f with this cancellation property ‘if t1 ∘ f = t2 ∘ f then t1 = t2’ for every T is called an epimorphism.
Apparently, this is a generalized form of surjective, but the book doesn’t go into detail, so I’ll skip over it.
Definition: An endomorphism e is called idempotent if e ∘ e = e.
An arrow which is both an endomorphism and at the same time an isomorphism is usually called by one word: automorphism.
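As a quick plain-Scala illustration (my own example, not from the book): abs on Int is an idempotent endomorphism, and negation is an automorphism since it is its own inverse.

```scala
// abs is an endomorphism Int => Int with e ∘ e = e (idempotent).
val e: Int => Int = _.abs

// negation is an endomorphism that is also an isomorphism,
// i.e. an automorphism: it is its own inverse.
val neg: Int => Int = x => -x
```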
I think we’ve covered enough ground. Breaking categories apart into internal diagrams really helps getting the hang of it.
On day 19 we started looking at basic concepts in category theory using Lawvere and Schanuel’s Conceptual Mathematics: A First Introduction to Categories. The book is a good introduction to the notion of a category, since it spends a lot of pages explaining the basic concepts using concrete examples. That very aspect gets a bit annoying when you want to move on to more advanced concepts, since the book goes winding around.
Today I’m switching to Steve Awodey’s Category Theory. This is also a book written for non-mathematicians, but it goes at a faster pace, with more emphasis placed on thinking in abstract terms.
A particular definition or theorem is called abstract when it relies only on category-theoretic notions, rather than on some additional information about the objects and arrows. The advantage of an abstract notion is that it applies in any category immediately.
Definition 1.3 In any category C, an arrow f: A => B is called an isomorphism, if there is an arrow g: B => A in C such that:
g ∘ f = 1A and f ∘ g = 1B.
Awodey calls the above definition abstract, since it makes use only of category-theoretic notions.
Extending this to Scalaz, learning the nature of an abstract typeclass has the advantage that it applies to all concrete data structures that support it.
Before we go abstract, we’re going to look at some more concrete categories. This is actually a good thing, since we only saw one category yesterday.
The category of sets and total functions is denoted by Sets, written in bold.
The category of all finite sets and total functions between them is called Setsfin. This is the category we have been looking at so far.
Awodey:
Another kind of example one often sees in mathematics is categories of structured sets, that is, sets with some further “structure” and functions that “preserve it,” where these notions are determined in some independent way.
A partially ordered set or poset is a set A equipped with a binary relation a ≤A b such that the following conditions hold for all a, b, c ∈ A:
- reflexivity: a ≤A a
- transitivity: if a ≤A b and b ≤A c, then a ≤A c
- antisymmetry: if a ≤A b and b ≤A a, then a = b
An arrow from a poset A to a poset B is a function m: A => B that is monotone, in the sense that, for all a, a’ ∈ A,
- a ≤A a’ implies m(a) ≤B m(a’).
As long as the functions are monotone, the objects will continue to be in the category, so the “structure” is preserved. The category of posets and monotone functions is denoted as Pos. Awodey likes posets, so it’s important we understand them.
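We can sanity-check monotonicity in plain Scala by brute force over a finite carrier (a sketch; the helper name isMonotone is mine):

```scala
// Check that m is an arrow in Pos restricted to a finite carrier:
// for all a, a2 in the carrier, a ≤A a2 implies m(a) ≤B m(a2).
def isMonotone[A, B](carrier: Seq[A])(leqA: (A, A) => Boolean,
                                      leqB: (B, B) => Boolean)(m: A => B): Boolean =
  carrier.forall { a =>
    carrier.forall { a2 => !leqA(a, a2) || leqB(m(a), m(a2)) }
  }
```

Doubling is monotone on (Int, ≤), for instance, while negation is not.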
Definition 1.2. A functor
F: C => D
between categories C and D is a mapping of objects to objects and arrows to arrows, in such a way that:
- F(f: A => B) = F(f): F(A) => F(B)
- F(1A) = 1F(A)
- F(g ∘ f) = F(g) ∘ F(f)
That is, F preserves domains and codomains, identity arrows, and composition.
Now we are talking. Functor is an arrow between two categories. Here’s the external diagram:
The fact that the positions of F(A), F(B), and F(C) are distorted is intentional. That’s what F is doing, slightly distorting the picture, but still preserving the composition.
This category of categories and functors is denoted as Cat.
A monoid (sometimes called a semigroup with unit) is a set M equipped with a binary operation ·: M × M => M and a distinguished “unit” element u ∈ M such that for all x, y, z ∈ M,
- x · (y · z) = (x · y) · z
- u · x = x = x · u
Equivalently, a monoid is a category with just one object. The arrows of the category are the elements of the monoid. In particular, the identity arrow is the unit element u. Composition of arrows is the binary operation m · n for the monoid.
The concept of monoid translates well into Scalaz. You can check out About those Monoids from day 3.
trait Monoid[A] extends Semigroup[A] { self => //// /** The identity element for `append`. */ def zero: A ... } trait Semigroup[A] { self => def append(a1: A, a2: => A): A ... }
Here is addition of
Int and
0:
scala> 10 |+| Monoid[Int].zero res26: Int = 10
and multiplication of
Int and
1:
scala> Tags.Multiplication(10) |+| Monoid[Int @@ Tags.Multiplication].zero res27: scalaz.@@[Int,scalaz.Tags.Multiplication] = 10
The idea that these monoids are categories with one object and that elements are arrows used to sound so alien to me, but now it’s understandable since we were exposed to singleton.
The category of monoids and functions that preserve the monoid structure is denoted by Mon. These structure-preserving arrows are called homomorphisms.
In detail, a homomorphism from a monoid M to a monoid N is a function h: M => N such that for all m, n ∈ M,
- h(m ·M n) = h(m) ·N h(n)
- h(uM) = uN
Since a monoid is a category, a monoid homomorphism is a special case of a functor.
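A concrete instance (my example, not from the book): String length is a monoid homomorphism from (String, +, "") to (Int, +, 0).

```scala
// length preserves the monoid structure:
//   h(s + t) == h(s) + h(t)   (preserves the binary operation)
//   h("")    == 0             (preserves the unit)
val h: String => Int = _.length
```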
Definition 1.4 A group G is a monoid with an inverse g-1 for every element g. Thus, G is a category with one object, in which every arrow is an isomorphism.
The category of groups and group homomorphisms is denoted as Groups.
Scalaz used to have groups, but they were removed about a year ago in #279, which says it’s removing duplication with Spire.
Let’s look at something abstract. When a definition relies only on category-theoretic notions (objects and arrows), it often reduces down to a form “given a diagram abc, there exists a unique x that makes another diagram xyz commute.” Commutative in this case means that all the arrows compose correctly. Such definitions are called universal properties or universal mapping properties (UMPs).
Some of the notions have a counterpart in set theory, but it’s more powerful because of its abstract nature. Consider making the empty set and the one-element sets in Sets abstract.
Definition 2.9. In any category C, an object
- 0 is initial if for any object C there is a unique morphism
0 => C
- 1 is terminal if for any object C there is a unique morphism
C => 1
As a general rule, the uniqueness requirements of universal mapping properties are required only up to isomorphisms. Another way of looking at it is that if objects A and B are isomorphic to each other, they are “equal in some sense.” To signify this, we write A ≅ B.
Proposition 2.10 Initial (terminal) objects are unique up to isomorphism.
Proof. In fact, if C and C’ are both initial (terminal) in the same category, then there’s a unique isomorphism C => C’. Indeed, suppose that 0 and 0’ are both initial objects in some category C; the following diagram then makes it clear that 0 and 0’ are uniquely isomorphic:
Given that isomorphism is defined by g ∘ f = 1A and f ∘ g = 1B, this looks good.
In Sets, the empty set is initial and any singleton set {x} is terminal.
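In Scala terms this is a rough analogy (my own helper names): Nothing plays the initial object and Unit a terminal object.

```scala
// For any type C there is exactly one total function C => Unit...
def toTerminal[C]: C => Unit = _ => ()

// ...and a unique (vacuous) function Nothing => C from the
// uninhabited type; it can never actually be applied.
def fromInitial[C]: Nothing => C = n => n
```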
So apparently there’s a concept called an empty function that maps from an empty set to any set.
In a poset, an object is plainly initial iff it is the least element, and terminal iff it is the greatest element.
This kind of makes sense, since in a poset we need to preserve the structure using ≤.
There are many other examples, but the interesting part is that seemingly unrelated concepts share the same structure.
Let us begin by considering products of sets. Given sets A and B, the cartesian product of A and B is the set of ordered pairs
A × B = {(a, b)| a ∈ A, b ∈ B}
There are two coordinate projections:
with:
This notion of product relates to scala.Product, which is the base trait for all tuples and case classes.
For any element c ∈ A × B, we have c = (fst ∘ c, snd ∘ c)
Using the same trick as yesterday, we can introduce the singleton explicitly:
The (external) diagram captures what we stated in the above. If we replace 1-elements by generalized elements, we get the categorical definition.
Because this is universal, this applies to any category.
UMP also suggests that all products of A and B are unique up to isomorphism.
Proposition 2.17 Products are unique up to isomorphism.
Suppose we have P and Q that are products of objects A and B.
Since all products are isomorphic, we can just denote one as A × B, and the arrow u: X => A × B is denoted as ⟨x1, x2⟩.
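In Sets, and in Scala with Tuple2, the projections and the pairing arrow ⟨x1, x2⟩ look like this (a sketch with my own helper names):

```scala
// Coordinate projections for A × B:
def fst[A, B](p: (A, B)): A = p._1
def snd[A, B](p: (A, B)): B = p._2

// The unique arrow ⟨x1, x2⟩: X => A × B determined by x1 and x2:
def pair[X, A, B](x1: X => A, x2: X => B): X => (A, B) =
  x => (x1(x), x2(x))
```

By construction fst ∘ ⟨x1, x2⟩ = x1 and snd ∘ ⟨x1, x2⟩ = x2, which is exactly what the UMP requires.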
We saw that in Sets, cartesian product is a product.
Let P be a poset and consider a product of elements p, q ∈ P. We must have projections
- p × q ≤ p
- p × q ≤ q
and if for any element x, x ≤ p, and x ≤ q
then we need
- x ≤ p × q
In this case, p × q becomes the greatest lower bound.
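For instance, in the poset (Int, ≤) the product of two elements is simply their minimum (a one-line sketch):

```scala
// In the poset (Int, <=), the product p × q is the greatest
// lower bound of p and q, which is min.
def meet(p: Int, q: Int): Int = p min q
```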
Before we get into duality, we need to cover the concept of generating a category out of an existing one. Note that we are no longer talking about objects, but a category, which includes objects and arrows.
The opposite (or “dual”) category Cop of a category C has the same objects as C, and an arrow f: C => D in Cop is an arrow f: D => C in C. That is, Cop is just C with all of the arrows formally turned around.
If we take the concept further, we can come up with “dual statement” Σ* by substituting any sentence Σ in the category theory by replacing the following:
Since there’s nothing semantically important about which side is f or g, the dual statement also holds true as long as Σ only relies on category theory. In other words, any proof that holds for one concept also holds for its dual. This is called the duality principle.
Another way of looking at it is that if Σ holds in all C, it should also hold in Cop, and so Σ* should hold in (Cop)op, which is C.
Let’s look at the definitions of initial and terminal again:
Definition 2.9. In any category C, an object
- 0 is initial if for any object C there is a unique morphism
0 => C
- 1 is terminal if for any object C there is a unique morphism
C => 1
They are dual to each other, so the initials in C are terminals in Cop.
Recall proof for “the initial objects are unique up to isomorphism.”
If you flip the direction of all arrows in the above diagram, you do get a proof for terminals.
This is pretty cool. Let’s continue from here later.
On day 20 we continued to look into concepts from category theory, but using Awodey as the guide, with more emphasis on thinking in abstract terms. In particular, I was aiming towards the notion of duality, which says that an abstract concept in category theory should hold when you flip the direction of all the arrows.
One of the well known dual concepts is coproduct, which is the dual of product. Prefixing with “co-” is the convention to name duals.
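Concretely, in Sets the coproduct of A and B is their disjoint union, which in Scala is Either[A, B] (a sketch with my own helper names): the injections are Left and Right, and the mediating arrow [f, g] is case analysis.

```scala
// Injections into the coproduct:
def inl[A, B](a: A): Either[A, B] = Left(a)
def inr[A, B](b: B): Either[A, B] = Right(b)

// The unique arrow [f, g]: A + B => X determined by
// f: A => X and g: B => X:
def copair[A, B, X](f: A => X, g: B => X): Either[A, B] => X = {
  case Left(a)  => f(a)
  case Right(b) => g(b)
}
```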
Here’s the definition of products again:
We need to pick up some of the fundamentals that I skipped over.
Definition 1.11. A category C is called small if both the collection C0 of objects of C and the collection C1 of arrows of C are sets. Otherwise, C is called large.
For example, all finite categories are clearly small, as is the category Setsfin of finite sets and functions.
Cat is actually a category of all small categories, so Cat doesn’t contain itself.
Definition 1.12. A category C is called locally small if for all objects X, Y in C, the collection HomC(X, Y) = { f ∈ C1 | f: X => Y } is a set (called a hom-set).
A Hom-set Hom(A, B) is a set of arrows between objects A and B. Hom-sets are useful because we can use them to inspect (look into the elements of) an object using just arrows.
Putting any arrow f: A => B in C into Hom(X, A) would create a function:
Thus, Hom(X, f) = f ∘
_.
By using the singleton trick in Sets, we can exploit A ≅ HomSets(1, A). If we generalize this we can think of Hom(X, A) as a set of generalized elements from X.
We can then create a functor out of this by replacing A with _, giving Hom(X, _): C => Sets.
This functor is called the representable functor, or covariant hom-functor.
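In Scala terms, the action of Hom(X, -) on an arrow f: A => B is post-composition (a sketch; homMap is my own name for it):

```scala
// Hom(X, f) sends x: X => A to f ∘ x: X => B.
def homMap[X, A, B](f: A => B): (X => A) => (X => B) =
  xa => f compose xa
```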
For any object P, a pair of arrows p1: P => A and p2: P => B determine an element (p1, p2) of the set
Hom(P, A) × Hom(P, B).
We see that given x: X => P we can derive x1 and x2 by composing with p1 and p2 respectively. Because compositions are functions in Hom sets, we could express the above as a function too:
ϑX = (Hom(X, p1), Hom(X, p2)): Hom(X, P) => Hom(X, A) × Hom(X, B)
where ϑX(x) = (x1, x2)
That’s a cursive theta, by the way.
Proposition 2.20. A diagram of the form
is a product for A and B iff for every object X, the canonical function ϑX given in (2.1) is an isomorphism,
ϑX: Hom(X, P) ≅ Hom(X, A) × Hom(X, B).
This is pretty interesting because we just replaced a diagram with an isomorphic equation.
I think we now have enough ammunition on our hands to tackle naturality. Let’s skip to the middle of the book, section 7.4.
A natural transformation is a morphism of functors. That is right: for fixed categories C and D, we can regard the functors C => D as the objects of a new category, and the arrows between these objects are what we are going to call natural transformations.
There are some interesting blog posts around natural transformation in Scala:
Mark presents a simple example of why we might want a natural transformation:
We run into problems when we proceed to natural transformations. We are not able to define a function that maps an
Option[T] to
List[T] for every
T, for example. If this is not obvious, try to define
toList so that the following compiles:
val toList = ... val a: List[Int] = toList(Some(3)) assert(List(3) == a) val b: List[Boolean] = toList(Some(true)) assert(List(true) == b)
In order to define a natural transformation M ~> N (here, M = Option, N = List), we have to create an anonymous class because Scala doesn’t have literals for quantified functions.
Scalaz ports this. Let’s see NaturalTransformation:
/** A universally quantified function, usually written as `F ~> G`,
  * for symmetry with `A => B`.
  * ....
  */
trait NaturalTransformation[-F[_], +G[_]] { self =>
  def apply[A](fa: F[A]): G[A]
  ....
}
The aliases are available in the package object for the scalaz namespace:
/** A [[scalaz.NaturalTransformation]][F, G]. */
type ~>[-F[_], +G[_]] = NaturalTransformation[F, G]

/** A [[scalaz.NaturalTransformation]][G, F]. */
type <~[+F[_], -G[_]] = NaturalTransformation[G, F]
Let’s try defining
toList:
scala> val toList = new (Option ~> List) {
         def apply[T](opt: Option[T]): List[T] = opt.toList
       }
toList: scalaz.~>[Option,List] = 1@2fdb237

scala> toList(3.some)
res17: List[Int] = List(3)

scala> toList(true.some)
res18: List[Boolean] = List(true)
If we compare the terms with category theory, in Scalaz the type constructors like List and Option support Functors, which map between two categories.
trait Functor[F[_]] extends InvariantFunctor[F] { self =>
  ////
  /** Lift `f` into `F` and apply to `F[A]`. */
  def map[A, B](fa: F[A])(f: A => B): F[B]
  ...
}
This is a much more constrained representation of a functor compared to the more general C => D, but it’s still a functor if we think of the type constructors as categories.
Since NaturalTransformation (~>) works at the type-constructor (first-order kinded type) level, it is an arrow between the functors (or a family of arrows between the categories).
We’ll continue from here later.
This page is a placeholder for the end, but I’ll be updating this series every now and then.
It was fun learning functional programming through Scalaz, and I hope the learning continues. Oh yea, don’t forget the Scalaz cheat sheet too.
def equal(a1: A, a2: A): Boolean (1 === 2) assert_=== false (2 =/= 1) assert_=== true
def order(x: A, y: A): Ordering 1.0 ?|? 2.0 assert_=== Ordering.LT 1.0 lt 2.0 assert_=== true 1.0 gt 2.0 assert_=== false 1.0 lte 2.0 assert_=== true 1.0 gte 2.0 assert_=== false 1.0 max 2.0 assert_=== 2.0 1.0 min 2.0 assert_=== 1.0
def show(f: A): Cord 1.0.show assert_=== Cord("1.0") 1.0.shows assert_=== "1.0" 1.0.print assert_=== () 1.0.println assert_=== ()
def pred(a: A): A def succ(a: A): A 1.0 |-> 2.0 assert_=== List(1.0, 2.0) 1.0 |--> (2, 5) assert_=== List(1.0, 3.0, 5.0) // |=>/|==>/from/fromStep return EphemeralStream[A] (1.0 |=> 2.0).toList assert_=== List(1.0, 2.0) (1.0 |==> (2, 5)).toList assert_=== List(1.0, 3.0, 5.0) (1.0.from take 2).toList assert_=== List(1.0, 2.0) ((1.0 fromStep 2) take 2).toList assert_=== List(1.0, 3.0) 1.0.pred assert_=== 0.0 1.0.predx assert_=== Some(0.0) 1.0.succ assert_=== 2.0 1.0.succx assert_=== Some(2.0) 1.0 -+- 1 assert_=== 2.0 1.0 --- 1 assert_=== 0.0 Enum[Int].min assert_=== Some(-2147483648) Enum[Int].max assert_=== Some(2147483647)
def append(a1: A, a2: => A): A List(1, 2) |+| List(3) assert_=== List(1, 2, 3) List(1, 2) mappend List(3) assert_=== List(1, 2, 3) 1 |+| 2 assert_=== 3 (Tags.Multiplication(2) |+| Tags.Multiplication(3): Int) assert_=== 6 // Tags.Disjunction (||), Tags.Conjunction (&&) (Tags.Disjunction(true) |+| Tags.Disjunction(false): Boolean) assert_=== true (Tags.Conjunction(true) |+| Tags.Conjunction(false): Boolean) assert_=== false (Ordering.LT: Ordering) |+| (Ordering.GT: Ordering) assert_=== Ordering.LT (none: Option[String]) |+| "andy".some assert_=== "andy".some (Tags.First('a'.some) |+| Tags.First('b'.some): Option[Char]) assert_=== 'a'.some (Tags.Last('a'.some) |+| Tags.Last(none: Option[Char]): Option[Char]) assert_=== 'a'.some
def zero: A mzero[List[Int]] assert_=== Nil
def map[A, B](fa: F[A])(f: A => B): F[B] List(1, 2, 3) map {_ + 1} assert_=== List(2, 3, 4) List(1, 2, 3) ∘ {_ + 1} assert_=== List(2, 3, 4) List(1, 2, 3) >| "x" assert_=== List("x", "x", "x") List(1, 2, 3) as "x" assert_=== List("x", "x", "x") List(1, 2, 3).fpair assert_=== List((1,1), (2,2), (3,3)) List(1, 2, 3).strengthL("x") assert_=== List(("x",1), ("x",2), ("x",3)) List(1, 2, 3).strengthR("x") assert_=== List((1,"x"), (2,"x"), (3,"x")) List(1, 2, 3).void assert_=== List((), (), ()) Functor[List].lift {(_: Int) * 2} (List(1, 2, 3)) assert_=== List(2, 4, 6)
def ap[A,B](fa: => F[A])(f: => F[A => B]): F[B] 1.some <*> {(_: Int) + 2}.some assert_=== Some(3) // except in 7.0.0-M3 1.some <*> { 2.some <*> {(_: Int) + (_: Int)}.curried.some } assert_=== 3.some 1.some <* 2.some assert_=== 1.some 1.some *> 2.some assert_=== 2.some Apply[Option].ap(9.some) {{(_: Int) + 3}.some} assert_=== 12.some Apply[List].lift2 {(_: Int) * (_: Int)} (List(1, 2), List(3, 4)) assert_=== List(3, 4, 6, 8) (3.some |@| 5.some) {_ + _} assert_=== 8.some // ^(3.some, 5.some) {_ + _} assert_=== 8.some
def point[A](a: => A): F[A] 1.point[List] assert_=== List(1) 1.η[List] assert_=== List(1)
(Applicative[Option] product Applicative[List]).point(0) assert_=== (0.some, List(0)) (Applicative[Option] compose Applicative[List]).point(0) assert_=== List(0).some
def bind[A, B](fa: F[A])(f: A => F[B]): F[B] 3.some flatMap { x => (x + 1).some } assert_=== 4.some (3.some >>= { x => (x + 1).some }) assert_=== 4.some 3.some >> 4.some assert_=== 4.some List(List(1, 2), List(3, 4)).join assert_=== List(1, 2, 3, 4)
// no contract function // failed pattern matching produces None (for {(x :: xs) <- "".toList.some} yield x) assert_=== none (for { n <- List(1, 2); ch <- List('a', 'b') } yield (n, ch)) assert_=== List((1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')) (for { a <- (_: Int) * 2; b <- (_: Int) + 10 } yield a + b)(3) assert_=== 19 List(1, 2) filterM { x => List(true, false) } assert_=== List(List(1, 2), List(1), List(2), List())
def plus[A](a: F[A], b: => F[A]): F[A] List(1, 2) <+> List(3, 4) assert_=== List(1, 2, 3, 4)
def empty[A]: F[A] (PlusEmpty[List].empty: List[Int]) assert_=== Nil
// no contract function
// no contract function List(1, 2, 3) filter {_ > 2} assert_=== List(3)
def foldMap[A,B](fa: F[A])(f: A => B)(implicit F: Monoid[B]): B def foldRight[A, B](fa: F[A], z: => B)(f: (A, => B) => B): B List(1, 2, 3).foldRight (0) {_ + _} assert_=== 6 List(1, 2, 3).foldLeft (0) {_ + _} assert_=== 6 (List(1, 2, 3) foldMap {Tags.Multiplication}: Int) assert_=== 6 List(1, 2, 3).foldLeftM(0) { (acc, x) => (acc + x).some } assert_=== 6.some
def traverseImpl[G[_]:Applicative,A,B](fa: F[A])(f: A => G[B]): G[F[B]] List(1, 2, 3) traverse { x => (x > 0) option (x + 1) } assert_=== List(2, 3, 4).some List(1, 2, 3) traverseU {_ + 1} assert_=== 9 List(1.some, 2.some).sequence assert_=== List(1, 2).some 1.success[String].leaf.sequenceU map {_.drawTree} assert_=== "1\n".success[String]
def length[A](fa: F[A]): Int List(1, 2, 3).length assert_=== 3
def index[A](fa: F[A], i: Int): Option[A] List(1, 2, 3) index 2 assert_=== 3.some List(1, 2, 3) index 3 assert_=== none
def id[A]: A =>: A
def compose[A, B, C](f: B =>: C, g: A =>: B): (A =>: C) val f1 = (_:Int) + 1 val f2 = (_:Int) * 100 (f1 >>> f2)(2) assert_=== 300 (f1 <<< f2)(2) assert_=== 201
// no contract function
def arr[A, B](f: A => B): A =>: B def first[A, B, C](f: (A =>: B)): ((A, C) =>: (B, C)) val f1 = (_:Int) + 1 val f2 = (_:Int) * 100 (f1 *** f2)(1, 2) assert_=== (2, 200) (f1 &&& f2)(1) assert_=== (2,100)
Unapply[TC[_[_]], MA]
type M[_] type A def TC: TC[M] def apply(ma: MA): M[A] implicitly[Unapply[Applicative, Int => Int]].TC.point(0).asInstanceOf[Int => Int](10) assert_=== Applicative[({type l[x]=Function1[Int, x]})#l].point(0)(10) List(1, 2, 3) traverseU {(x: Int) => {(_:Int) + x}} apply 1 assert_=== List(2, 3, 4) // traverse won't work
false /\ true assert_=== false // && false \/ true assert_=== true // || (1 < 10) option 1 assert_=== 1.some (1 > 10)? 1 | 2 assert_=== 2 (1 > 10)?? {List(1)} assert_=== Nil
1.some assert_=== Some(1) none[Int] assert_=== (None: Option[Int]) 1.some? 'x' | 'y' assert_=== 'x' 1.some | 2 assert_=== 1 // getOrElse
// no contract function 1 + 2 + 3 |> {_ * 6} 1 visit { case x@(2|3) => List(x * 2) }
sealed trait KiloGram def KiloGram[A](a: A): A @@ KiloGram = Tag[A, KiloGram](a) def f[A](mass: A @@ KiloGram): A @@ KiloGram
val tree = 'A'.node('B'.leaf, 'C'.node('D'.leaf), 'E'.leaf) (tree.loc.getChild(2) >>= {_.getChild(1)} >>= {_.getLabel.some}) assert_=== 'D'.some (tree.loc.getChild(2) map {_.modifyLabel({_ => 'Z'})}).get.toTree.drawTree assert_=== 'A'.node('B'.leaf, 'Z'.node('D'.leaf), 'E'.leaf).drawTree
(Stream(1, 2, 3, 4).toZipper >>= {_.next} >>= {_.focus.some}) assert_=== 2.some (Stream(1, 2, 3, 4).zipperEnd >>= {_.previous} >>= {_.focus.some}) assert_=== 3.some (for { z <- Stream(1, 2, 3, 4).toZipper; n1 <- z.next } yield { n1.modify {_ => 7} }) map { _.toStream.toList } getOrElse Nil assert_=== List(1, 7, 3, 4) unfold(3) { x => (x =/= 0) option (x, x - 1) }.toList assert_=== List(3, 2, 1)
DList.unfoldr(3, { (x: Int) => (x =/= 0) option (x, x - 1) }).toList assert_=== List(3, 2, 1)
val t0 = Turtle(Point(0.0, 0.0), 0.0) val t1 = Turtle(Point(1.0, 0.0), 0.0) val turtlePosition = Lens.lensu[Turtle, Point] ( (a, value) => a.copy(position = value), _.position) val pointX = Lens.lensu[Point, Double] ( (a, value) => a.copy(x = value), _.x) val turtleX = turtlePosition >=> pointX turtleX.get(t0) assert_=== 0.0 turtleX.set(t0, 5.0) assert_=== Turtle(Point(5.0, 0.0), 0.0) turtleX.mod(_ + 1.0, t0) assert_=== t1 t0 |> (turtleX =>= {_ + 1.0}) assert_=== t1 (for { x <- turtleX %= {_ + 1.0} } yield x) exec t0 assert_=== t1 (for { x <- turtleX := 5.0 } yield x) exec t0 assert_=== Turtle(Point(5.0, 0.0), 0.0) (for { x <- turtleX += 1.0 } yield x) exec t0 assert_=== t1
(1.success[String] |@| "boom".failure[Int] |@| "boom".failure[Int]) {_ |+| _ |+| _} assert_=== "boomboom".failure[Int] (1.successNel[String] |@| "boom".failureNel[Int] |@| "boom".failureNel[Int]) {_ |+| _ |+| _} assert_=== NonEmptyList("boom", "boom").failure[Int] "1".parseInt.toOption assert_=== 1.some
(for { x <- 1.set("log1"); _ <- "log2".tell } yield (x)).run assert_=== ("log1log2", 1) import std.vector._ MonadWriter[Writer, Vector[String]].point(1).run assert_=== (Vector(), 1)
1.right[String].isRight assert_=== true 1.right[String].isLeft assert_=== false 1.right[String] | 0 assert_=== 1 // getOrElse ("boom".left ||| 2.right) assert_=== 2.right // orElse ("boom".left[Int] >>= { x => (x + 1).right }) assert_=== "boom".left[Int] (for { e1 <- 1.right; e2 <- "boom".left[Int] } yield (e1 |+| e2)) assert_=== "boom".left[Int]
val k1 = Kleisli { (x: Int) => (x + 1).some } val k2 = Kleisli { (x: Int) => (x * 100).some } (4.some >>= k1 compose k2) assert_=== 401.some (4.some >>= k1 <=< k2) assert_=== 401.some (4.some >>= k1 andThen k2) assert_=== 500.some (4.some >>= k1 >=> k2) assert_=== 500.some
Reader { (_: Int) + 1 }
val memoizedFib: Int => Int = Memo.mutableHashMapMemo { case 0 => 0 case 1 => 1 case n => memoizedFib(n - 2) + memoizedFib(n - 1) }
State[List[Int], Int] { case x :: xs => (xs, x) }.run(1 :: Nil) assert_=== (Nil, 1) (for { xs <- get[List[Int]] _ <- put(xs.tail) } yield xs.head).run(1 :: Nil) assert_=== (Nil, 1)
import scalaz._, Scalaz._, effect._, ST._ type ForallST[A] = Forall[({type l[x] = ST[x, A]})#l] def e1[S]: ST[S, Int] = for { x <- newVar[S](0) _ <- x mod {_ + 1} r <- x.read } yield r runST(new ForallST[Int] { def apply[S] = e1[S] }) assert_=== 1 def e2[S]: ST[S, ImmutableArray[Boolean]] = for { arr <- newArr[S, Boolean](3, true) x <- arr.read(0) _ <- arr.write(0, !x) r <- arr.freeze } yield r runST(new ForallST[ImmutableArray[Boolean]] { def apply[S] = e2[S] })(0) assert_=== false
import scalaz._, Scalaz._, effect._, IO._ val action1 = for { x <- readLn _ <- putStrLn("Hello, " + x + "!") } yield () action1.unsafePerformIO
IterateeT[E, F[_], A]/EnumeratorT[O, I, F[_]]
import scalaz._, Scalaz._, iteratee._, Iteratee._ (length[Int, Id] &= enumerate(Stream(1, 2, 3))).run assert_=== 3 (length[scalaz.effect.IoExceptionOr[Char], IO] &= enumReader[IO](new BufferedReader(new FileReader("./README.md")))).run.unsafePerformIO
Free[S[+_], +A]
import scalaz._, Scalaz._, Free._ type FreeMonoid[A] = Free[({type λ[+α] = (A,α)})#λ, Unit] def cons[A](a: A): FreeMonoid[A] = Suspend[({type λ[+α] = (A,α)})#λ, Unit]((a, Return[({type λ[+α] = (A,α)})#λ, Unit](()))) def toList[A](list: FreeMonoid[A]): List[A] = list.resume.fold( { case (x: A, xs: FreeMonoid[A]) => x :: toList(xs) }, { _ => Nil }) toList(cons(1) >>= {_ => cons(2)}) assert_=== List(1, 2)
import scalaz._, Scalaz._, Free._ def even[A](ns: List[A]): Trampoline[Boolean] = ns match { case Nil => return_(true) case x :: xs => suspend(odd(xs)) } def odd[A](ns: List[A]): Trampoline[Boolean] = ns match { case Nil => return_(false) case x :: xs => suspend(even(xs)) } even(0 |-> 3000).run assert_=== false
import scalaz._ // imports type names import scalaz.Id.Id // imports Id type alias import scalaz.std.option._ // imports instances, converters, and functions related to `Option` import scalaz.std.AllInstances._ // imports instances and converters related to standard types import scalaz.std.AllFunctions._ // imports functions related to standard types import scalaz.syntax.monad._ // injects operators to Monad import scalaz.syntax.all._ // injects operators to all typeclasses and Scalaz data types import scalaz.syntax.std.boolean._ // injects operators to Boolean import scalaz.syntax.std.all._ // injects operators to all standard types import scalaz._, Scalaz._ // all the above
type Function1Int[A] = ({type l[x]=Function1[Int, x]})#l[A] type Function1Int[A] = Function1[Int, A] | http://eed3si9n.com/learning-scalaz/Combined+Pages.html | CC-MAIN-2017-22 | refinedweb | 13,095 | 66.13 |
In this case, we will set up a 1000 km by 1000 km domain with three layers and study the spectral energy transfers that arise in a baroclinically unstable flow. The setup is listed below:
L = 1000.e3     # length scale of box [m]
Ld = 15.e3      # deformation scale [m]
kd = 1./Ld      # deformation wavenumber [m^-1]
Nx = 64         # number of grid points
H1 = 500.       # layer 1 thickness [m]
H2 = 1750.      # layer 2
H3 = 1750.      # layer 3
U1 = 0.05       # layer 1 zonal velocity [m/s]
U2 = 0.025      # layer 2
U3 = 0.00       # layer 3
rho1 = 1025.
rho2 = 1025.275
rho3 = 1025.640
rek = 1.e-7     # linear bottom drag coeff. [s^-1]
f0 = 0.0001236812857687059      # coriolis param [s^-1]
beta = 1.2130692965249345e-11   # planetary vorticity gradient [m^-1 s^-1]
Ti = Ld/(abs(U1))  # estimate of most unstable e-folding time scale [s]
dt = Ti/200.       # time-step [s]
tmax = 300*Ti      # simulation time [s]
Layer one is the thinnest, while the bottom two layers are equally thick. The layers increase in density with depth (as expected), and the zonal velocities decrease with depth. Time stepping parameters are set based on the estimate of the most unstable e-folding time scale, which is set by the Rossby radius of deformation and greatest velocity.
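As a quick sanity check on those choices, the numbers from the listing work out to a roughly 3.5-day e-folding time and a 25-minute time step (plain Python, values copied from the setup):

```python
Ld = 15.e3          # deformation scale [m]
U1 = 0.05           # fastest (layer 1) zonal velocity [m/s]

Ti = Ld / abs(U1)   # most unstable e-folding time scale [s]
dt = Ti / 200.      # time step [s]
tmax = 300 * Ti     # simulation length [s]

assert Ti == 300000.0           # about 3.47 days
assert dt == 1500.0             # 25 minutes
assert tmax / 86400. > 1000     # the run spans over a thousand days
```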
The model is constructed as such:
kwargs = {'nx': Nx, 'nz': 3,
          'U': [U1, U2, U3], 'V': [0., 0., 0.],
          'L': L, 'f': f0, 'beta': beta,
          'H': [H1, H2, H3], 'rho': [rho1, rho2, rho3],
          'rek': rek,
          'dt': dt, 'tmax': tmax, 'twrite': 5000, 'tavestart': Ti*10}
m = pyqg.LayeredModel(**kwargs)
Random initial conditions are assigned to the potential vorticity, and finally the model is run:
sig = 1.e-7
q1 = np.random.randn(m.nx, m.ny)[None,]
q2 = np.random.randn(m.nx, m.ny)[None,]
q3 = np.random.randn(m.nx, m.ny)[None,]
qi = sig * np.vstack([q1, q2, q3])
m.set_q(qi)
m.run()
Here is the picture of the PV distribution in the three layers before running:
And here is after:
When setting up the parameters, we gave the deformation scale as 15 km. pyqg has a method to compute the barotropic and baroclinic deformation radii. The baroclinic radii can be computed from the eigenvalues of the “stretching” matrix:
$$\mathbf{S} \vec{p}_n = -\frac{1}{R_n^2}\vec{p}_n$$
where \(R_n\) is the nth baroclinic deformation radius. In the two-layer case:
$$
\mathbf{S} = \frac{k_d^2}{1 + \delta}\begin{pmatrix}
-1 & 1 \\ \delta & -\delta
\end{pmatrix}
$$
where \(\delta = \frac{H_1}{H_2}\). The bracketed matrix has eigenvalues 0 and \(-(1+\delta)\), so \(\mathbf{S}\) has eigenvalues 0 and \(-k_d^2\); the latter yields the first baroclinic deformation radius \(R_1 = 1/k_d = L_d\). The barotropic radius is simply \(\frac{\sqrt{gH}}{f_0}\), which is about 1600 km.
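Both claims are cheap to verify numerically: the 2×2 eigenvalues follow from the trace/determinant formula, and the external radius follows from the run's parameters (stdlib only; the value of δ below is arbitrary, chosen just for illustration):

```python
import math

Ld = 15.e3
kd = 1. / Ld
delta = 0.25                       # H1/H2; any positive value works here
c = kd**2 / (1 + delta)
a, b = -c, c                       # first row of S
p, q = c * delta, -c * delta       # second row of S

tr, det = a + q, a * q - b * p
disc = math.sqrt(tr * tr - 4 * det)
lam_hi = (tr + disc) / 2
lam_lo = (tr - disc) / 2

assert det == 0.0                       # singular: one eigenvalue is 0
assert abs(lam_hi) < 1e-22              # ... the barotropic mode
assert abs(lam_lo + kd**2) < 1e-20      # the other is -kd^2, so R1 = 1/kd = Ld

# external (barotropic) deformation radius sqrt(g*H)/f0, H = 4000 m total
g, H, f0 = 9.8, 4000., 0.0001236812857687059
assert 1.5e6 < math.sqrt(g * H) / f0 < 1.7e6   # ~1600 km
```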
In the three-layer case (and above), the matrix is tridiagonal:
$$
\mathbf{S} = \begin{pmatrix}
-\frac{f_0^2}{H_1 g'_1} & \frac{f_0^2}{H_1 g'_1} & 0 \\
\frac{f_0^2}{H_2 g'_1} & -\frac{f_0^2}{H_2}\left(\frac{1}{g'_1} + \frac{1}{g'_2}\right) & \frac{f_0^2}{H_2 g'_2} \\
0 & \frac{f_0^2}{H_3 g'_2} & -\frac{f_0^2}{H_3 g'_2}
\end{pmatrix}
$$
where \(g’_k = g\frac{\rho_{k+1}-\rho_{k}}{\rho_k}\).
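One useful property of the stretching matrix, whatever the number of layers: each row sums to zero, so the depth-independent barotropic mode (1, 1, 1) is an eigenvector with eigenvalue 0. With this run's parameters (a stdlib-only sketch; pyqg builds the same matrix internally, so treat the entries as an illustration of the standard layered-QG form):

```python
f0 = 0.0001236812857687059
g = 9.8
H = [500., 1750., 1750.]
rho = [1025., 1025.275, 1025.640]
gp = [g * (rho[k + 1] - rho[k]) / rho[k] for k in range(2)]   # reduced gravities g'_k

S = [
    [-f0**2 / (H[0] * gp[0]),  f0**2 / (H[0] * gp[0]),  0.0],
    [ f0**2 / (H[1] * gp[0]), -f0**2 / H[1] * (1/gp[0] + 1/gp[1]),  f0**2 / (H[1] * gp[1])],
    [0.0,  f0**2 / (H[2] * gp[1]), -f0**2 / (H[2] * gp[1])],
]

# S @ (1, 1, 1) = 0: the barotropic mode is not stretched
for row in S:
    assert abs(sum(row)) < 1e-18
```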
The eigenvectors \(\vec{p}_n\) are the three vertical modes. We can project the streamfunction m.p onto the modal bases to get the amplitudes of each mode:
pn = m.modal_projection(m.p)
Dividing these by \(U_1 L_d\) yields the normalized barotropic streamfunction and two baroclinic streamfunctions:
From here it is helpful to look at the energy spectra and the transfer of energy between different length scales. From the 2D kinetic energy spectrum provided in the diagnostics, the authors compute the radial (isotropic) kinetic and potential energy spectrum so that one can plot the quantities against a single wavenumber axis:
import pyqg.diagnostic_tools as tools

ke_bt, ke_bc1, ke_bc2 = m.get_diagnostic('KEspec_modal')
kr, modal_kespec_1 = tools.calc_ispec(m, ke_bt)
_, modal_kespec_2 = tools.calc_ispec(m, ke_bc1)
_, modal_kespec_3 = tools.calc_ispec(m, ke_bc2)

pe_bc1, pe_bc2 = m.get_diagnostic('PEspec_modal')
_, modal_pespec_2 = tools.calc_ispec(m, pe_bc1)
_, modal_pespec_3 = tools.calc_ispec(m, pe_bc2)
And lastly we can compute the fluxes of energy at different scales:
l = ['APEgenspec', 'APEflux', 'KEflux', 'KEspec']
vals = [m.get_diagnostic(x) for x in l]
APEgenspec, APEflux, KEflux = [tools.calc_ispec(m, v)[1] for v in vals[:-1]]
_, KEspec = tools.calc_ispec(m, vals[-1][1]*m.M**2)

ebud = [APEgenspec, APEflux, KEflux, -m.rek*(m.Hi[-1]/m.H)*KEspec]
resid = -np.vstack(ebud).sum(axis=0)
ebud.append(resid)
ebud_labels = ['APE gen', 'APE flux div.', 'KE flux div.', 'Diss.', 'Resid.']

[plt.semilogx(kr, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfers');
See Larichev & Held (1995) for a paper on baroclinic instability in a 2D model with similar energetic properties. APE is fluxed towards deformation length scales, where it is converted into KE. Because there is no beta effect, the KE cascades all the way up to the scale of the domain. The mechanical bottom drag removes the large scale KE.
Next we will model a fully barotropic flow, trying to reproduce the results of this James McWilliams paper. | https://www.gfdatabase.com/2018/11/pyqg-python-quasigeostrophic-model/3/ | CC-MAIN-2019-30 | refinedweb | 815 | 60.82 |
All of you must have heard the name of the planet Krypton. If you can’t remember the planet, don't worry. Planet Krypton is the origin of Superman, that means the mother planet where Superman was born. Superman was sent to earth by his parents when the planet was about to explode. Legends say that only few people of the planet survived from that explosion.
The inhabitants of the planet are called 'Kryptonians'. Kryptonians, though otherwise completely human, were superior both intellectually and physically to natives of Earth. One of the most common differences is their number system. The number system is described below:
1) The base of the number system is unknown, but legends say that the base lies between 2 and 6.
2) Kryptonians don’t use a number where two adjacent digits are the same. They simply ignore these numbers. So, 112 is not a valid number in Krypton.
3) Numbers should not contain leading zeroes. So, 012 is not a valid number.
4) For each number, there is a score. The score can be found by summing up the squares of differences of adjacent digits. For example 1241 has the score of
(1 − 2)^2 + (2 − 4)^2 + (4 − 1)^2 = 1 + 4 + 9 = 14.
5) All the numbers they use are integers.
Now you are planning to research their number system. So, you assume a base and a score. You have to find how many numbers achieve that score in that base.
Input
The first line of the input will contain an integer T (≤ 200), denoting the number of cases. Then T cases will follow.
Each case contains two integers denoting base (2 ≤ base ≤ 6) and score (1 ≤ score ≤ 10^9). Both the integers will be given in decimal base.
Output
For each case print the case number and the result modulo 2^32. Check the samples for details. Both the case number and result should be reported in decimal base.
Sample Input
2
6 1
5 5
Sample Output
Case 1: 9
Case 2: 80
Problem link: UVA11651 Krypton Number System
Problem summary: (omitted)
Problem analysis: just a placeholder, no explanation.
Program notes: (omitted)
Remarks: (omitted)
Reference links: (omitted)
The accepted C++ program is as follows:
/* UVA11651 Krypton Number System */

#include <bits/stdc++.h>

using namespace std;

const int BASE = 6;
const int N = BASE * (BASE - 1) * (BASE - 1);

struct Matrix {
    unsigned int n, m, g[N][N];
    Matrix(int _n, int _m) {
        n = _n;
        m = _m;
        memset(g, 0, sizeof(g));
    }
    // Matrix multiplication
    Matrix operator * (const Matrix& y) {
        Matrix z(n, y.m);
        for(unsigned int i=0; i<n; i++)
            for(unsigned int j=0; j<y.m; j++)
                for(unsigned int k=0; k<m; k++)
                    z.g[i][j] += g[i][k] * y.g[k][j];
        return z;
    }
};

// Matrix exponentiation (arithmetic is implicitly modulo 2^32 via unsigned int)
Matrix Matrix_Powmul(Matrix x, int m) {
    Matrix z(x.n, x.n);
    for(unsigned int i=0; i<x.n; i++)
        z.g[i][i] = 1;
    while(m) {
        if(m & 1)
            z = z * x;
        x = x * x;
        m >>= 1;
    }
    return z;
}

int main() {
    int t, base, score;
    scanf("%d", &t);
    for(int caseno=1; caseno<=t; caseno++) {
        scanf("%d%d", &base, &score);
        int n = base * (base - 1) * (base - 1);
        Matrix a(1, n);
        for(int i=1; i<base; i++)
            a.g[0][n - i] = 1;
        Matrix f(n, n);
        for(int i=base; i<n; i++)
            f.g[i][i - base] = 1;
        for(int i=0; i<base; i++)
            for(int j=0; j<base; j++)
                if(i != j)
                    f.g[n - (i - j) * (i - j) * base + j][n - base + i] = 1;
        a = a * Matrix_Powmul(f, score);
        unsigned int ans = 0;
        for(int i=1; i<=base; i++)
            ans += a.g[0][n - i];
        printf("Case %d: %u\n", caseno, ans);
    }
    return 0;
}
c++ - Optimization bug in exception handling
- Christof Meerwald <cmeerw web.de> Dec 31 2002
#include <stdio.h>

struct A { };

int main()
{
  int *i = 0;

  try
  {
    i = new int(0);
    throw A();
  }
  catch (const A &a)
  {
    printf("%d\n", *i);
  }
}

Compiled with optimizations enabled (-o+all), the compiler assumes that i is still NULL when the exception is caught:

[...]
        xor     ECX,ECX
        push    dword ptr [ECX]
        push    offset FLAT:_DATA
        call    near ptr _printf
[...]

Extracted from omniORB (it only occurs when an omniORB application exits and omniORB tries to clean up...)

bye, Christof
--
JID: cmeerw jabber.at
mailto cmeerw at web.de

...and what have you contributed to the Net?
Dec 31 2002 | http://www.digitalmars.com/d/archives/c++/1895.html | CC-MAIN-2014-10 | refinedweb | 112 | 76.93 |
I get this problem with both PyS60 1.4.5 and 1.9.7, in N80/6730c/N97. The problem is demonstrated by the following simple MMS sending code:
---clip---
import messaging
giftfile = u"e:\\data\\1.GIF"
messaging.mms_send(u"0442077628", u"gift", giftfile)
---clip---
If I omit the attachment, it works just fine: the text-only MMS is sent successfully. The GIF itself is fine; sending it with the phone's MMS editor works. It's 16 kB, and I got the same problem with a 2 kB attachment.
Could this be that the MMSC is now more strict on the MMS encoding - and something like the Content-Type: for the attachment is missing/wrong? Often (but not with all attachments) if I take the failed message from the sending folder and move it to Drafts, open the message and send again, the phone's editor codes it properly and the resending works. Any ideas what could be wrong in PyS60 and how to fix this?
I'm using the same MMS sending code that used to work. But now it starts sending an MMS, and about 5 seconds later (after connecting to the operator's MMSC) the phone reports "cannot send message: 0442077628", and the details for the error message say (translated) "Multimedia message: unknown multimedia error".
I found the same problem with .gif, .GIF and .amr attachments (and earlier with vCard/vCal MMS attachments). The same problem happens in both the DNA and Sonera networks. In the Sonera network the same program used to work this spring. The N80 firmware has probably not been updated for ages, and I was already using 1.4.5 then, so I think it's really the same setup that used to work and now fails. This is nothing sudden or temporary; the problem has now persisted for at least a few weeks.
Pertti | http://developer.nokia.com/community/discussion/showthread.php/181556-MMS-attachment-sending-fails-the-same-code-used-to-work | CC-MAIN-2014-49 | refinedweb | 307 | 74.08 |
I have always been fascinated by how chat bots work!
With the Telegram phone app trending among students, chat bots have become a new medium for application interaction.
From digitizing social games like "werewolf" to informational services like getting the latest news update, chat bots have made their way into those roles, and they have become a source of leisure and information for frequent Telegram users.
So here I am, sharing that learning journey on creating my very first Telegram Bot.
Requirements for creating a Telegram chat bot:
Python knowledge
Django knowledge
Simple NGINX service knowledge
Required resources:
Django framework
A VPS cloud
Telegram API Key
Python
Python telepot library
I have used the following git project by Adil Khashtamov to set up the telegram bot. The code is available at
Step 1 (obtain telegram bot secret key)
To obtain the bot secret key, you will have to use your existing telegram account to start a conversation with @BotFather. This bot will walk you through creating a new bot. Start by typing /start followed by /newbot. BotFather will ask for a name for your bot and a username for your bot. For example, you can name your bot MyFirstApp and set the username to MyFirstAppBot. Note down the username of your bot, as you will only be able to search for your bot in telegram by its username, which with reference to the example is @MyFirstAppBot.
Step 2 (Understanding telepot, and your first interaction)
First, install telepot.
pip install telepot
Or simply google the correct method for the version of Python you are using. Once telepot is installed, ensure that your telegram bot secret key from Step 1 is working. You can do this with a basic information-grabbing call.
import telepot

token = 'your telegram secret key'
TelegramBot = telepot.Bot(token)
print TelegramBot.getMe()
Once done, you should get a response with your bot's name and username, along with an ID.
Next, open up your telegram app and start interacting with your bot. Search for your bot by typing @ followed by the bot username. Once you have a dialog, send a string to the bot. Let's use /start as an example.
Once you have sent a message, alter your python script by adding the following line: TelegramBot.getUpdates(). You should get an update on the messages being sent, with various IDs such as the update ID and chat ID. These IDs uniquely identify the chat, so it is important to know how chats are identified for future debugging purposes.
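The update objects that getUpdates returns are plain JSON, so it helps to know where the IDs live. Here is a made-up payload in the same shape (IDs and field values are invented for illustration, following the Telegram Bot API's update structure):

```python
sample_update = {
    'update_id': 100000001,                      # invented values
    'message': {
        'message_id': 42,
        'chat': {'id': 123456789, 'type': 'private'},
        'text': '/start',
    },
}

chat_id = sample_update['message']['chat']['id']  # which chat to reply to
text = sample_update['message'].get('text')       # what was said
assert (chat_id, text) == (123456789, '/start')
```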
Time to ignore the code above, and write some real bot features.
Step 3 (NGINX, Django, and all the Environment Variables)
According to google, "NGINX is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache." In short, NGINX is a web server service that allows applications to be hosted. Our telegram bot functions will be hosted on a server running NGINX. You may use your own VPS cloud server to run this, or spin up a local VM on your machine bridged to the local network you are in, along with Network Address Translation rules to ensure traffic from the internet going to your public IP will reach your VM. For simplicity, I have used a server from Digital Ocean, a paid VPS, which allows me to host my telegram bot functions.
Refer to Adil Khashtamov's site on configuring and setting up the server:
Step 4 (Getting it running, inserting your code)
If you can get Step 3 running (using Adil's code), you are one step closer to creating your own bot! (Successfully implementing Step 3 means you are able to run those telegram bot functions from your bot. If you are able to achieve this, you are almost there. All that is left right now is replacing his logic with yours and calling it!)
Assuming your code is running right in Step 3, all you need to do is change the code in view.py. This is the file in which your functions will be inserted.
To understand how the code works, the important piece for the bot's 'reply' behavior is the following code in view.py:
..
..
chat_id = payload['message']['chat']['id']
cmd = payload['message'].get('text')
..
..
Therefore, to visualize the underlying mechanism of how the bot replies, it is something like this:
1. User starts a conversation.
2. Bot receives various information, including the chat_id and the message.
3. Bot uses the chat_id to identify which chat to reply to.
4. Bot sends whatever is required.
So assuming you just want your bot to reply "Hello World", all you have to add is TelegramBot.sendMessage(chat_id, 'Hello World') right after the bot receives the chat_id and the sent message.
To summarize the "reply" portion, it is important to place the TelegramBot.sendMessage(chat_id, 'Reply message here') code right after your functions. For instance, if you want to pull data off a certain API, you will first need to pull the data before placing this code. Think of that piece of code as the 'return' of a telegram bot function.
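Putting the two halves together, the view's decision logic can be factored into a small pure function that takes the payload and returns the chat id and reply text; the real view would then end with TelegramBot.sendMessage(*make_reply(payload)). This is a sketch of the idea, not the project's actual code:

```python
def make_reply(payload):
    # pull out who to answer and what they said
    chat_id = payload['message']['chat']['id']
    cmd = payload['message'].get('text', '')
    # decide on the reply text
    if cmd == '/start':
        return chat_id, 'Hello World'
    return chat_id, "Sorry, I only understand /start"

payload = {'message': {'chat': {'id': 7}, 'text': '/start'}}
assert make_reply(payload) == (7, 'Hello World')
```

Keeping the logic in a plain function like this also makes it easy to test without touching Telegram at all.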
Lastly, once your code is ready, configure gunicorn and supervisord according to the guide linked in Step 3.
Once your services are up, test your bot!
*NOTE: if any changes are made to the code, you are required to restart the service using supervisord.
So I was talking with a friend online the other day the the subject of transposition ciphers came up and I ended up whipping this together:
It's not very pretty, but it works. I still learning the finer points of c++.It's not very pretty, but it works. I still learning the finer points of c++.Code:
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <iostream>
#include <cstdlib>
using namespace std;
int main(int argc, char *argv[])
{
char plaintext1[255];
char plaintext2[255];
char ciphertext[255];
int caesarkey = 0;
int stringlength = 0;
int i = 0;
cout<<"Next we are going to try a flip cipher, this is a little more complicated, so bear with me."<<endl;
cout<<endl<<"First off, enter a new plaintext: ";
cin.getline(plaintext2, 255, '\n');
while(plaintext2[i] != '\0'){i++;stringlength++;}
char flipcipher[stringlength+1];
int permuter[stringlength];
cout<<"Now we know that our ciphertext is "<<stringlength<<" Characters long. We are now going to move the characters around a little, for that we need to enter a key for them."<<endl;
cout<<"Please enter the numbers 0 - "<<stringlength-1<<" In a random order! Don't repeat yourself!"<<endl;
i = 0;
while(i < stringlength){
int n = 0;
int u = 0;
cout<<"Enter number: ";
cin>>n;
if(n>stringlength){
cout<<"Number too high! Try again!";
cin>>n;
i--;}
permuter[i] = n;
i++;}
i = 0;
cout<<"Ok now we have a permutor, let put it to use!"<<endl;
while(plaintext2[i] != '\0'){
flipcipher[i] = plaintext2[permuter[i]];
i++;}
flipcipher[i] = '\0'; // i already points one past the last copied character
i = 0;
cout<<"Ok now we have flipped the plaintext, let's look at it: "<<endl;
while(flipcipher[i] !='\0'){
cout<<flipcipher[i];
i++;}
cout<<endl<<"OK! Now let us see if we can decode the text as well!"<<endl<<endl;
i = 0;
while(flipcipher[i] !='\0'){
plaintext1[permuter[i]] = flipcipher[i];
i++;}
plaintext1[stringlength] = '\0'; // terminate the decoded string
cout<<"If what comes next is your real text then we've won!"<<endl;
i = 0 ;
while(plaintext1[i] !='\0'){
cout<<plaintext1[i];
i++;}
cout<<endl;
return 0;
}
My question is this: As you can see my method for generating a "permuter" (I suppose the correct term is a key) is far from elegant. I tried to incorporate a check for duplicate numbers but it only worked some of the time, and even the length check is sort of dodgy, so I am looking for a method to generate a key in a way that is halfway random. I can't just generate a string of random numbers, since the cipher requires it to be a permutation of the numbers from 0 to stringlength-1, so I was thinking: is there any relatively simple way to generate an arbitrary permutation of that range?
Also any general comments on the code are welcome, mostly it was written to show the principle of a transposition cipher and for me to learn a bit more C++, so any comments are welcome! :) | http://cboard.cprogramming.com/cplusplus-programming/93940-noob-question-about-transposition-cipher-printable-thread.html | CC-MAIN-2015-27 | refinedweb | 486 | 70.02 |
lxc-usernsexec (1) - Linux Man Pages
lxc-usernsexec: Run a task as root in a new user namespace.
NAMElxc-usernsexec - Run a task as root in a new user namespace.
SYNOPSISlxc-usernsexec [-m uid-map] {-- command}
DESCRIPTIONlxc-usernsexec can be used to run a task as root in a new user namespace.
OPTIONS
- -m uid-map
- The uid map to use in the user namespace. Each map consists of four colon-separate values. First a character 'u', 'g' or 'b' to specify whether this map pertains to user ids, group ids, or both; next the first userid in the user namespace; next the first userid as seen on the host; and finally the number of ids to be mapped.
More.
EXAMPLESTo spawn a shell with the full allotted subuids mapped into the container, use.
AUTHORSerge Hallyn <serge.hallyn [at] ubuntu.com> | https://www.systutorials.com/docs/linux/man/1-lxc-usernsexec/ | CC-MAIN-2021-04 | refinedweb | 141 | 65.12 |
InfoPath uses a declarative, per-form, event-driven approach to programming customized forms. That is, code consists of declarations that define which event handlers are to be invoked when form elements or data elements source events. Code in InfoPath is always written behind a specific form template; it is not possible to write "application-level" code that is executed for all form templates. Code runs when events are raised that have been declaratively handled by event handlers.
There are two "root" objects in the InfoPath object model. The Application object is the root of the runtime object model; every programmable object in InfoPath can be accessed through the Application object. The other "root" object is the ExternalApplication object. The ExternalApplication object is useful for automating InfoPath by an automation executable rather than from code behind a form, as shown in Listing 12-1. However, this chapter only discusses how to create code behind a form and does not cover automation executables further.
When you create an InfoPath form template project in VSTO, Visual Studio automatically generates a FormCode.cs file for you to add the code behind the form. It generates some "boilerplate" code for you to get started containing methods called when the InfoPath form starts up and shuts down, as shown in Listing 12-2.
Listing 12-2. The FormCode.cs File
namespace PurchaseOrder { //[Attribute omitted] public class PurchaseOrder { private XDocument thisXDocument; private Application thisApplication; public void _Startup(Application app, XDocument doc) { thisXDocument = doc; thisApplication = app; } public void _Shutdown() { } } }
When the InfoPath form starts up, InfoPath calls the _Startup method and passes in an Application and XDocument object. By default, the managed class that represents the InfoPath form stashes away references to these objects in thisApplication and thisXDocument so that your event handlers and other code can use them later. The same Application object is passed to all currently executing forms. The XDocument object is a specific instance that refers to the form to which it is passed.
Event-Based Programming
While filling out the form, various user actions directly or indirectly trigger events. Take the OnLoad event, for example. To handle (that is, register an event handler to be called when the event occurs) the OnLoad event, select the InfoPath designer's Tools menu, then the Programming submenu, and then the On Load Event menu item. Notice that the InfoPath designer automatically creates a code stub and handles the event. Whenever you add an event handler to an InfoPath form, you always do it using the InfoPath designer window and its associated menus, never by using any commands within Visual Studio.
[InfoPathEventHandler(EventType=InfoPathEventType.OnLoad)] public void OnLoad(DocReturnEvent e) { // Write your code here }
You will immediately notice that an InfoPath event is not hooked up in the traditional .NET way of creating a new delegate and adding that delegate to an object that raises the event using the += operator. Instead, InfoPath events are hooked up via attributesthe InfoPath runtime reflects on the attributing of methods in your code to determine events that are handled by your code and the methods to call when an event is raised. In this case, the attribute InfoPathEventHandler is added to your OnLoad event handler. This attribute is constructed with EventType=InfoPathEventType.OnLoad, which tells the InfoPath runtime to raise the OnLoad event on this attributed method.
Let's add some code to our OnLoad handler to restrict users from creating a new form if it is not presently business hours. (Note that this does not restrict editing existing forms, just creating new ones.) Listing 12-3 shows the new OnLoad handler.
Listing 12-3. On OnLoad Handler That Restricts Creation of New Forms to Be During Business Hours
[InfoPathEventHandler(EventType=InfoPathEventType.OnLoad)] public void OnLoad(DocReturnEvent e) { if ((DateTime.Now.Hour < 8 // earlier than 8am || DateTime.Now.Hour > 17 // later than 5pm || DateTime.Today.DayOfWeek == DayOfWeek.Saturday || DateTime.Today.DayOfWeek == DayOfWeek.Sunday) && thisXDocument.IsNew) // is a new form { thisXDocument.UI.Alert("You can only create a new" + " mortgage application 8am to 5pm, Monday through Friday."); e.ReturnStatus = false; // fail loading the form } }
All form events in InfoPath are cancelable through code. In this OnLoad event example, setting the ReturnStatus property to false on the DocReturnEvent object e tells InfoPath to fail the OnLoad event (and thus fail loading the form) when the event handler has returned. The default value is true.
Previewing
Press F5 or choose Start from the Debug menu in Visual Studio and the code in Listing 12-3 will be compiled and start running in InfoPath's preview form mode. Depending on what time and day you run the code in Listing 12-3, you may or may not be able to fill out the form!
Suppose you are working latelater than 5 p.m. at least. The OnLoad handler will not allow you to create a new form because thisXDocument.IsNew always returns true when you press F5 or choose Start from the debug menu. How can you force the form to look like an already created form? If you double-click the template.xml file (located in the Visual Studio project folder), you will start InfoPath and cause InfoPath to think it is opening an already created form. The template.xml file is used internally by InfoPath when creating a new form after you double-click the .XSN form template. However, directly opening this file tricks InfoPath into thinking it is an existing or previously saved form.
Previewing is a very useful technique when designing and debugging a form, but it is important to realize that previewing a form causes the following side effects:
So in addition to previewing, you should also use your form in a production environment with InfoPath running by itself to verify that everything works properly. | https://flylib.com/books/en/2.53.1/programming_infopath.html | CC-MAIN-2021-39 | refinedweb | 965 | 53.21 |
Using Struct Pie C Shared Libraries to Develop a Priority Queue Scheduler
Suppose you have a list of scripts or tasks that you want your computer to do in a certain order based on the priority of each task. You have a list so big that you don’t have the time to sort. Here comes the importance of the priority queue data structure.
A priority queue is a data structure where elements are served based on their priorities, not on their insertion order like normal queues. The most relative example to a priority queue, is hospitals’ emergency where the doctors check the patients according to their medical status not to when they came to the hospital.
Back to our example, the task scheduler, we will use the priority queue from Struct Pie open source data structures shared libraries in our C program.
What is Struct Pie?
Struct Pie is a free open source project under MIT license. The project is both a python library and a set of C shared libraries.
For a quick start on python library, you can have a look at
The C shared libraries project is available on sourceforge.net and contains the data structures as C shared libraries that can readily be used in C projects. Download the libraries from here.
The Tasks
As we mentioned, we are aiming at creating a program that takes a list of tasks along with their priorities and executes them based on the priorities. Suppose we have 3 scripts in 3 different languages that we want to run on a certain file. In our case, the task is doing some calculations on a list of numbers from 1 to 10 saved in a file called “numbers.txt”.
The 3 scripts that we have are:
1. python script: “pysum.py”
import sys file = sys.argv[1] with open(file, "r") as f: numbers = f.read().splitlines() numbers = [int(i) for i in numbers] print("\n\tSum of number list in python is: %d" % sum(numbers))
2. R script: “rmean.R”
args <- commandArgs(trailingOnly = TRUE) file <- args[1] numbers <- as.numeric(read.table(file, header = FALSE, stringsAsFactors = FALSE)[, 1]) print(paste0("\n\t", "Average of number list in R is: ", mean(numbers), "\n"))
3. C script: “cprint.c”
#include <stdio.h> #include <stdlib.h> void print_array(int * array, size_t len) { puts("\n\tPrinting list in C"); printf("\n\tLength of list is: %i\n", len); puts("\tList elements are: "); int i; for (i = 0; i < len; i++) printf("\t%i", array[i]); printf("\n"); } void main(int argc, char * argv[]) { FILE * input = fopen(argv[1], "r"); int number, val; int iter = 0; int b = 10; // initial capacity int * number_array = (int * ) malloc(b * sizeof(int)); while (fscanf(input, "%i", & number) == 1) { if (iter >= b) { b = iter + 1; // new capacity to reallocate number_array = (int * ) realloc(number_array, sizeof(int) * b); } number_array[iter] = number; iter++; } fclose(input); print_array(number_array, b); }
Each script does a different simple task. Now let’s have a look at the list which has them and their priorities: “tasks.txt”
cprint,2 pysum.py,4 rmean.R,1
N.B. we are having the compiled executable from the cprint.c script not the script itself.
gcc cprint.c -o cprint
Struct Pie Priority Queue Library
Now, as we have an idea about the task let’s get into developing the solution. We will first need to download Struct Pie source code from the SourceForge link shared above. Then we build the shared library for the “priority_queue” with the help of the “Makefile” in the directory and make it available for our program to include.
mkdir -p ./StructPie/priority_queue/libs) gcc -fPIC -c ./StructPie/priority_queue/pq.c -o ./StructPie/priority_queue/pq.o gcc -shared ./StructPie/priority_queue/pq.o -o ./StructPie/priority_queue/libs/libpq.so export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./StructPie/priority_queue/libs
Last link and the .so extension is specific for Linux.
The Scheduler
Here we will not only look on how to simply use the priority_queue library. But also we will focus on some great C features, like piping and interprocess communication along with redirecting file descriptor and function pointers.
Let’s first include the needed libraries. Include all the following code in “scheduler.c” file.
N.B. the comments in the code are so important as they have deeper explanation.
#include <stdio.h> #include <stdlib.h> #include <errno.h> #include <unistd.h> #include <string.h> #include "./StructPie/priority_queue/pq.h"
Function Pointer
Sometimes you like to run a function from inside another function. This makes the wrapper function more dynamic as it can accept other functions as arguments and executes them within it. Function pointers allow this in C.
In our case, we have scripts in different languages. That means different commands to be executed. So to dynamically execute them, we need some helper functions that assign the suitable values based on the task at hand.
// all definitions before main function could be wrapped into a header file if needed /* helper functions to get the commands in shape based on their type */ void is_python(char * script, char * lang) { if (strstr(script, ".py")) { strcpy(lang, "python3"); } } void is_r(char * script, char * lang) { if (strstr(script, ".R")) { strcpy(lang, "Rscript"); } } void is_exe(char * script, char * lang) { int i, count = 0; /* count how many dots * the one in './' and whether it has the dot of extension */ for (i = 0; script[i] != '\0'; i++) { if (script[i] == '.') count++; } if (count == 1) strcpy(lang, "exe"); }
I believe by now, you got the intention. Instead of having a function that checks the input and accordingly changes “lang” value, We have smaller functions, one for each language, and then we will wrap all of them inside a function that will call them. The advantage of this approach is that we will not need change the wrapper function whenever we have a new task in a new language. Instead, we will only define a small function for it and pass the function to the wrapper function.
Now let’s look at how the wrapper function that takes a function pointer will look like
// define a function that takes a function as its input (function pointer) void assign_language(void( * assign)(char * , char * ), char * process, char * language) { assign(process, language); }
“Return type (* pointer variable_name)(Args types)” is how you create a pointer to a function.
As you can see, the function only calls the smaller ones and accordingly updates the “language” variable for the command later.
Main Function
As all C programs, we will define a main function and we will explain the code inside it part by part
int main() { // linked queue from pq library struct linked_queue * front = NULL, * rear = NULL; int queue_size = 0, priority; // pipe to pipe parent and child processes int pipefds[2]; // process id pid_t pid; // temp process string char proc[STRLEN]; /* output file for the child to output result to we are defining it here so that the child appends to it not to overwrite */ FILE * out; out = fopen("output.txt", "w"); // read the tasks from the stdin which will be redirected upon starting while (scanf("%[^,],%d\n", proc, & priority) == 2) { // pushing tasks and priorities to the queue enqueue( & front, & rear, proc, priority); queue_size++; } puts("The queue now looks like:\n"); display(front, rear); puts("\nNow executing the tasks based on priorities\n"); // start piping if (pipe(pipefds) == -1) { printf("Unable to create pipe\n"); exit(0); }
N.B. the last “}” will be in the last block of code, as we will take the code step by step.
Again! The comments are very important. As you see, the code is now so explainable so no need to repeat. But worth noting how we open a pipe between the processes using the pipe() function and the int pipefds[2].
Inter-Process Communication
We are now the most critical part. We will have two processes doing the job. A parent process which will dequeue the queue and forms the variables for the command to be executed and pass it through the pipe. The other process is the child process which reads the message from the pipe and executes the commands outputing the results in the output.txt file.
Let’s look at how we can accomplish that. We first fork() the running process
pid = fork();
Now we will have two processes with two pids. Parent with pid > 0 and a child with pid == 0. So usually you will see the code structured in that way
int pid = fork(); if(pid == -1){ puts("forking failed"); exit(0); } else if(pid == 0){ // child's stuff to do } else { // for pid > 0 i.e. parent // parent's stuff to do }
Upon forking, all that we defined and declared above is shared between the two processes while inside each if (pid == x) block, variables and code will be execlusive to the process. That’s why we are using pipe to send variables over between the parent and child process.
Let’s see what the child is supposed to do
// looping until queue is empty while (queue_size > 0) { if (pid == -1) { // error handling printf("\n\t Can't fork process: %s\n", strerror(errno)); return 1; } // pid == 0 // child process else if (pid == 0) { // close the writing end of the pipe as it is not needed here close(pipefds[1]); // define child related variables and the input file on which the calcs will be done char message[STRLEN], message2[STRLEN], cmd[STRLEN * 3], input[STRLEN] = "./numbers.txt"; /* redirect the standard output to a file so that the output from parent and child don't overlap */ dup2(fileno(out), STDOUT_FILENO); // read messages from parent // it is better to while loop until EOF if reading multiple messages read(pipefds[0], message, sizeof(message)); read(pipefds[0], message2, sizeof(message2)); printf("Process %s to be processed using %s ...\n", message, message2); // construct the command to be executed based on the type of the script if (strstr(message2, "exe")) { sprintf(cmd, "%s %s", message, input); } else { sprintf(cmd, "%s %s %s", message2, message, input); } system(cmd); puts("");
The child will read from the read end of the pipe the task name and the language, forms the command string and calls it. The output of the child process is redirected into “output.txt” file using dup2(fileno(out), STDOUT_FILENO).
N.B. It is very important to always handle errors!
Now the parent process
} else { // pid > 0 // parent process // close the reading end of the pipe as it will not be needed close(pipefds[0]); // do the work in the parent char process[STRLEN], language[STRLEN]; strcpy(proc, dequeue( & front, & rear)); // gets name of process strcpy(process, "./"); strcat(process, proc); // concats everything in arg1 // function pointer to finalize how the script should look like assign_language(is_python, process, language); assign_language(is_r, process, language); assign_language(is_exe, process, language); // send the messages to the child write(pipefds[1], process, sizeof(process)); write(pipefds[1], language, sizeof(language)); printf("Process %s to be processed using %s ...\n\n", process, language); // N.B. no need to use waitpid() as the child process doesn't exit } queue_size--; } return 0; }
The parent process dequeues the queue and shapes the process name and the language needed, then writes it into the pipe for the child to receive. Note that the output of the parent is the same open terminal.
Now it is time to compile and test
Compile and Run
We need to compile the scheduler program with the pq library.
gcc scheduler.c -L./StructPie/priority_queue/libs -lpq -o scheduler
Now we can run the program with the tasks file
./scheduler < tasks.txt The queue now looks like: ( {pysum.py, 4} {cprint, 2} {rmean.R, 1} ) Now executing the tasks based on priorities (pysum.py) with priority (4) has been removed Process ./pysum.py to be processed using python3 ... (cprint) with priority (2) has been removed Process ./cprint to be processed using exe ... (rmean.R) with priority (1) has been removed Process ./rmean.R to be processed using Rscript ...
Remember, this is the parent’s output only. To check that the child made its work, we look at “output.txt” file:
cat output.txt Process ./pysum.py to be processed using python3 ... Sum of number list in python is: 55 Process ./cprint to be processed using exe ... Printing list in C Length of list is: 10 List elements are: 1 2 3 4 5 6 7 8 9 10 Process ./rmean.R to be processed using Rscript ... [1] "\n\tAverage of number list in R is: 5.5\n"
Perfect! Our scheduler is working like a charm. | https://minimatech.org/task-scheduler-in-c-using-struct-pie/ | CC-MAIN-2021-31 | refinedweb | 2,091 | 71.44 |
In this article series:
- Customizing the ribbon – creating tabs, groups and controls
- Adding ribbon items into existing tabs/groups
- Ribbon customizations - dropdown controls, Client Object Model and JavaScript Page Components
- Customize the ribbon programmatically from web parts and field controls (this post)
In contrast to my earlier posts, this article isn’t a detailed walkthrough – rather, it is a compendium of techniques and information which should help you implement ribbon customizations for web parts, custom field controls or indeed, any other kind of page control. At the time of writing I haven’t really seen a great deal written on this, aside from a couple of resources I’ll mention. I would have loved to have written detailed walkthroughs here, but alas I can’t because a) Microsoft have recently published some good info for one of the scenarios, so I’d prefer to point you to that, b) To run through all the scenarios here in depth would take weeks and c) Because if I don’t stop writing about the ribbon soon I’ll still be on this topic this time next year, and frankly I have a book chapter to write on a completely different area of SharePoint 2010 which I need to crack on with! So we’ll look at each requirement and I’ll discuss what I think are the key techniques you’d use along with some observations from me.
Note that due to the general lack of documentation so far on this topic (and the fact I haven’t bottomed everything out in all the scenarios), a couple of things here are speculation rather than hard fact. I’ll make these items clear, and will endeavour to come back to this article and add updates as new information emerges.
Before we dive in, remember that if you’re abiding by ribbon design principles you should most likely be working with a ContextualGroup (discussed towards the end of Adding ribbon items into existing tabs/groups) – this is the container to use for ribbon elements which are only relevant depending on what the user is doing (shown visible but not active here):
If this isn’t what you want, note that things are probably easier if you just need to add some controls on an existing tab or group which get activated under certain circumstances. In this case you can just supply some JavaScript to the ‘EnabledScript’ attribute of your controls – I showed this in my Notifications/Status demo in Customizing the ribbon (part 1) – creating tabs, groups and controls. The rest of this article focuses on how you might get a contextual group (see image above) to show in different scenarios.
Adding ribbon items from a web part
In SharePoint 2010 the web part framework now has special provision for ribbon customizations, which means a couple of things are taken care of for you. Microsoft have now published some guidance on this scenario in the form of an article on the SharePoint Developer Documentation blog - How to Create a Web Part with a Contextual Tab. There’s a lot of golden info in there, but I’ll distil the main points here:
- Using server-side code, the ‘RegisterDataExtension’ method of SPRibbon can be used to pass XML to the ribbon framework.
- This is an alternative to the fully declarative approach using the CustomAction element. As far as I can tell, either technique can be used for ribbon customizations specific to a certain control only (as opposed to ribbon customizations for say, all lists of a specific type where CustomAction is the way to go).
- Your web part needs to implement the new IWebPartPageComponentProvider interface and it’s WebPartContextualInfo property
- This allows you to specify information about the associated ribbon customization(s) and the ID of the associated page component (which we discussed last time) for your custom bits. This allows the ribbon framework to ‘link’ your web part with your ribbon changes – meaning that certain things are taken care of for you e.g. firing commands only when your web part has focus on the page (if you specified this behaviour by use of ‘getFocusedCommands’ in your page component). Without this you would have to write JavaScript code to manually handle page events for controls emitted by your web part e.g. the click event for a textbox.
Adding ribbon items from a field control
Like web parts, I think (speculating here) there is special provision in the framework for ribbon customizations from a field control. Consider that, like web parts, field controls should typically only show their contextual ribbon options when they have focus - since this would be the case for all field controls, it makes sense to me that Microsoft might abstract some handling around this for us. If not, you would be responsible for writing JavaScript to detect when the user clicked into your field so that you could show your ContextualGroup.
Digging around, I notice that all SharePoint field controls have 4 new properties (since they are implemented on one of the base classes, Microsoft.SharePoint.WebControls.FormComponent):
- RibbonTabCommand
- RibbonContextualGroupCommand
- RibbonGroupCommand
- RibbonCommand
I’m surmising that these control properties would be set declaratively in the hosting page and are designed to match up with commands specified in an accompanying page component - this would enable you to run client-side code similar to my sample last time, perhaps to initialize data for your ribbon controls and so on. However, when I try to implement these commands, my page component’s ‘handleCommand’ method never receives these commands. So either I’m doing something wrong or this theory is incorrect. In which case, not to worry ribbon customizations for field controls should still be entirely possible, there will just be more work to do. Read on.
Using server-side code to show ribbon items
Thinking outside of any ‘framework support’ for where our ribbon customizations are targeted at, we can always write server-side or client-side code to show a contextual group. In fact, I already showed the server-side code for this in Adding ribbon items into existing tabs/groups (ribbon customization part 2):);
}
This is probably only appropriate if your stuff is contextual-ish – an application page would be a good example of this. In this example all we care about is that the user is on our page, then we can show our options. It doesn’t matter which control has focus, effectively our options are OK to show by default when the page loads. However, if you need page-level ‘contextuality’ (I may have just invented that word by the way - use it in a sentence to your boss today) then most likely you’ll be wanting to use JavaScript to show your contextual group when the user is doing something specific on the page e.g. editing a certain field. You’d then be looking to some client-side code to detect this and respond accordingly.
Using client-side code to show ribbon items
So, final scenario - what if you need to show ribbon items from something which isn’t a web part or field control (or like me you couldn’t get there with anything in the field control framework which may or may not be designed to help), and it has to be done client-side to be fully contextual? Well, disappointingly I’m drawing another blank here so far – I’d love to hear from anyone who knows the answer to this. In case you’re doing ribbon development and are interested, here’s a summary of my journey:
- Checked SP.Ribbon namespace in Client OM
- Spent many happy hours in the debugger and various out-of-the-box JavaScript files, looking for examples of where the core product does this (e.g. calendar, rich text editor to name a couple)
- Found some interesting methods on the CUI.Ribbon object, such as showContextualGroup(‘foo’) and selectTabByCommand(‘bar’)
- Noted that across the entire SharePoint root, these methods are only called by the RTE, not globally across the SharePoint codebase
- Noted that the RTE code gets a CUI.Ribbon instance from a property (RTE.RichTextEditorComponent.$3b() – N.B. most of the JS is obfuscated or machine-generated* down here)
- Tried to use SP.Ribbon.PageManager.get_instance().get_ribbon() (used elsewhere in the OOTB codebase) to get me a CUI.Ribbon instance, however this gave me a null
- Tried to use the ‘_ribbon’ page level variable, however this appears not to be of type CUI.Ribbon as the debugger shows it does not have the methods I’m trying to call
- Tried a couple of other things which I’ve forgotten now
Needless to say, I’d love to hear what I’m missing on this. If nothing else, hopefully MS will release some more information soon which will shed some light on how to handle this scenario.
Summary
This post doesn’t claim to have all the answers, but it might serve as a “leg up" if you’re trying to build any of these scenarios now. I’m hoping that the lack of deep information in this area is a reflection on the fact that RTM is still some time away, and that ribbon dev will get some love in the SDK between now and then. The key scenarios I discussed here are displaying custom ribbon elements from web parts and field controls, but also the more generic cases of displaying customizations with server or client-side code.
Feel free to leave a comment if you can plug some of the gaps in my coverage here.
8 comments:
Hi Chris,
Is there any way which I can bring my custom ribbon to the home page of a site?
Is there any registration type available? Here I will not be able to add the MakeTabAvailable cs code in home page.
Regards,
Elizabeth
@Elizabeth,
Since the registration types are oriented around list types/content types, I'm 99% sure the only way to target a specific page in this way would be to add a web part or otherwise run some code which will make the tab available.
HTH,
Chris.
Hi there,
I am trying to design a ContextualGroup with two Tabs. The first Tab seems to be working fine, unfortunatelly I can't display the contents of the second tab. It gives me a JavaScript error saying: A template with name: undefined could not be loaded.
Is this referring to the GroupTemplate Layout? I tried multiple GroupTemplates, even some from CMDUI.xml, but still doesn't work :(
Can you give some advice?
@Anonymous,
Not sure I've tried multiple ContextualGroups, but if you post your XML somewhere I can take a look and try to advise.
Cheers,
Chris.
Unfortunatly, eventhough I can add Ribbon buttons using the Elements.xml. I noticed that the SPRibbon class is not available in Sandboxed solutions. I only need to rename a button text so think I'm going for the dirty JavaScript JQuery Solution :-)
@Rainier,
Yes, quite true. In you case, could you not use XML to remove the original button, add a new one in it's place with the same action and new text?
Guaranteed safe in the case of service packs/patches :)
Cheers,
Chris.
Chris,
I have been trying to add a tab and button in pure javascript/jquery and can get a tab and group the right way, but not the rest. I feel like it should work but am missing something and would really appreciate if you have any time to look at this blog post for the "hack" I used to make it work so far.
blog post
I hope this only appears once as I had issues with the captcha!
@Daniel,
Read your post - interesting stuff you're doing there :) Sorry, I haven't dug deep here, so don't have anything helpful to add I'm afraid.
Good luck!
C. | https://www.sharepointnutsandbolts.com/2010/02/customize-ribbon-programmatically-from.html | CC-MAIN-2020-34 | refinedweb | 1,987 | 52.43 |
I read a book, *Writing Idiomatic Python*. Although I usually write Python code, I have not paid attention to the style of that code. By reading this book, I noticed that there is a pythonic style to Python code, and I think it is a good mindset for writing Python. There were many Harmful/Idiomatic phrase pairs about Python code, so I'd like to introduce some of them which I'll use in my own code.
And of course, all Python developers should read this book!!
Usually, I write loop code like below.
```python
index = 0
for element in ["Takeshi", "Nobita", "Masao"]:
    print('{}:{}'.format(index, element))
    index += 1
```
But it is harmful according to this book. Instead, you should write it like below.
container = ["Takeshi", "Nobita", "Masao"]
for index, element in enumerate(container):
    print('{}:{}'.format(index, element))
In Python, you can accept arbitrary arguments with *args and **kwargs. Arbitrary arguments are useful when you want to implement an API that differs between package versions. You can write it like below.
def make_api_call(a, b, c, *args, **kwargs):
    print a
    print b
    print c
    print args
    print kwargs
Run this
#!/usr/bin/python
if __name__ == "__main__":
    make_api_call(1, 2, 3, 4, 5, 6, name="Takeshi", age=23)

# --console--
# 1
# 2
# 3
# (4, 5, 6)
# {'age': 23, 'name': 'Takeshi'}
#
In Python, exceptions are a common idiom, used for example in for loops and iteration. In addition, an exception gives you useful information for debugging, so you should not swallow these exceptions by writing a bare except clause. If you don't have any idea what types of exceptions are raised from a third-party library, you should re-raise them.
import requests

def get_json_response(url):
    try:
        r = requests.get(url)
        return r.json()
    except:
        raise
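The same re-raise pattern can be sketched with only the standard library (the config text here is made up; note that json.JSONDecodeError is Python 3 — in Python 2 you would catch ValueError):

```python
import json

def parse_config(text):
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # you could log or clean up here -- then re-raise so the
        # caller still gets the original exception and traceback
        raise

try:
    parse_config("not json")
except json.JSONDecodeError as e:
    print("caller saw:", type(e).__name__)  # caller saw: JSONDecodeError
```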
Swap values using a tuple.

foo = "FOO"
bar = "BAR"
(foo, bar) = (bar, foo)
Use the join method to build a string from a list. It's faster than repeated concatenation.

result_list = ["Takeshi", "Nobita", "Masuo"]
result_string = " ".join(result_list)
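For comparison, here is a sketch of the string-concatenation loop that join replaces; both produce the same string, but the += version creates a brand-new string on every iteration:

```python
words = ["Takeshi", "Nobita", "Masuo"]

# Harmful: each += allocates a new string
pieces = ""
for w in words:
    pieces += w + " "
pieces = pieces.rstrip()

# Idiomatic: join builds the result in one pass
joined = " ".join(words)
print(pieces == joined)  # True
```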
# user is an object with name, age and sex attributes
def get_formatted_user_info(user):
    output = 'Name: {user.name}, Age: {user.age}, Sex: {user.sex}'.format(user=user)
    return output
Prefer xrange to range (Python 2)

Use xrange where you don't need the materialized list; unlike range, it generates the numbers lazily.
for index in xrange(10000):
    print('index: {}'.format(index))
If there is no name field in user, get returns 'Unknown'.
username = user.get('name', 'Unknown')
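A quick check of both cases, with a made-up user dict:

```python
user = {"age": 23}
print(user.get("name", "Unknown"))  # Unknown

user["name"] = "Takeshi"
print(user.get("name", "Unknown"))  # Takeshi
```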
List comprehensions are well known in Python, but dictionary comprehensions are just as important.
user_email = {user.name: user.email for user in users_list if user.email}
With set syntax, you can use comprehension expressions too.
users_first_names = {user.first_name for user in users}
If a tuple contains data which is not necessary for you, ignore it with _.
(name, age, _, _) = get_user_info(user)
if age > 20:
    output = '{name} can drink!'.format(name=name)
Python list comprehensions are very useful; however, processing a very large list can run out of memory. In this case you should use a generator, an iterable expression that computes its items lazily instead of holding them all in memory.
users = ["Nobita", "Takeshi", "Masuo"]
for i in (user.upper() for user in users if user != "Takeshi"):
    print(i)
# NOBITA
# MASUO
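You can observe the memory difference directly: the list stores all of its elements up front, while the generator object stays small no matter how many items it will yield:

```python
import sys

big_list = [i * 2 for i in range(100000)]
big_gen = (i * 2 for i in range(100000))

print(sys.getsizeof(big_list) > sys.getsizeof(big_gen))  # True
```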
Python has an official standard set of formatting rules, called PEP8. You should install a PEP8 checker plugin in your editor.
PEP257 is the set of rules for docstring formatting.
def calculate_statistics(value_list):
    """Return a tuple containing the mean, median and the mode of a list of integers

    Arguments:
    value_list -- a list of integer values

    """
I am a Pythonista. Having read this book, I am able to write more Pythonic code at work. Written on January 21st, 2014 by Kai Sasaki
A Pandas dataframe is a two dimensional data structure which allows you to store data in rows and columns. It's very useful when you're analyzing data.
When you have a list of data records in a dataframe, you may need to drop a specific list of rows depending on the needs of your model and your goals when studying your analytics.
In this tutorial, you'll learn how to drop a list of rows from a Pandas dataframe.
To learn how to drop columns, you can read here about How to Drop Columns in Pandas.
How to Drop a Row or Column in a Pandas Dataframe
To drop a row or column in a dataframe, you need to use the
drop() method available in the dataframe. You can read more about the
drop() method in the docs here.
Dataframe Axis
- Rows are denoted using axis=0
- Columns are denoted using axis=1
Dataframe Labels
- Rows are labelled using the index number starting with 0, by default.
- Columns are labelled using names.
Drop() Method Parameters
index - the list of rows to be deleted
axis=0 - marks the rows in the dataframe to be deleted
inplace=True - performs the drop operation in the same dataframe, rather than creating a new dataframe object during the delete operation
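As a quick illustration of those parameters on a tiny throwaway frame (separate from the sample dataframe built in the next section):

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30]})
df.drop([0, 2], axis=0, inplace=True)  # delete the rows labelled 0 and 2, in place
print(df["a"].tolist())  # [20]
```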
Sample Pandas DataFrame
Our sample dataframe contains the columns product_name, Unit_Price, No_Of_Units, Available_Quantity, and Available_Since_Date columns. It also has rows with NaN values which are used to denote missing values.
import pandas as pd

data = {"product_name": ["Keyboard", "Mouse", "Monitor", "CPU", "CPU", "Speakers", pd.NaT],
        "Unit_Price": [500, 200, 5000.235, 10000.550, 10000.550, 250.50, None],
        "No_Of_Units": [5, 5, 10, 20, 20, 8, pd.NaT],
        "Available_Quantity": [5, 6, 10, "Not Available", "Not Available", pd.NaT, pd.NaT],
        "Available_Since_Date": ['11/5/2021', '4/23/2021', '08/21/2021', '09/18/2021', '09/18/2021', '01/05/2021', pd.NaT]
        }
df = pd.DataFrame(data)
df
The dataframe will look like this:
And just like that we've created our sample dataframe.
After each drop operation, you'll print the dataframe by using
df which will print the dataframe in a regular
HTML table format.
You can read here about how to Pretty Print a Dataframe to print the dataframe in different visual formats.
Next, you'll learn how to drop a list of rows in different use cases.
How to Drop a List of Rows by Index in Pandas
You can delete a list of rows from Pandas by passing the list of indices to the
drop() method.
df.drop([5, 6], axis=0, inplace=True)
df
In this code,
[5,6]is the index of the rows you want to delete
axis=0denotes that rows should be deleted from the dataframe
inplace=Trueperforms the drop operation in the same dataframe
After dropping rows with the index 5 and 6, you'll have the below data in the dataframe:
This is how you can delete rows with a specific index.
Next, you'll learn about dropping a range of indices.
How to Drop Rows by Index Range in Pandas
You can also drop a list of rows within a specific range.
A range is a set of values with a lower limit and an upper limit.
This may be useful in cases where you want to create a sample dataset excluding specific ranges of data.
You can create a range of rows in a dataframe by using the
df.index() method. Then you can pass this range to the
drop() method to drop the rows as shown below.
df.drop(df.index[2:4], inplace=True)
df
Here's what this code is doing:
df.index[2:4]generates a range of rows from 2 to 4. The lower limit of the range is inclusive and the upper limit of the range is exclusive. This means that rows 2 and 3 will be deleted and row 4 will not be deleted.
inplace=Trueperforms the drop operation in the same dataframe
After dropping rows within the range 2-4, you'll have the below data in the dataframe:
This is how you can drop the list of rows in the dataframe using its range.
How to Drop All Rows after an Index in Pandas
You can drop all rows after a specific index by using
iloc[].
You can use
iloc[] to select rows by using its position index. You can specify the start and end position separated by a
:. For example, you'd use
2:3 to select rows from 2 to 3. If you want to select all the rows, you can just use
: in
iloc[].
This may be useful in cases where you want to split the dataset for training and testing purposes.
Use the below snippet to select rows from 0 to the index 2. This results in dropping the rows after the index 2.
df = df.iloc[:2]
df
In this code,
:2 selects the rows until the index 2.
This is how you can drop all rows after a specific index.
After dropping rows after the index 2, you'll have the below data in the dataframe:
This is how you can drop rows after a specific index.
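As a rough sketch of the train/test split use case mentioned above, on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"x": range(10)})
train = df.iloc[:8]   # rows 0-7
test = df.iloc[8:]    # rows 8-9
print(len(train), len(test))  # 8 2
```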
Next, you'll learn how to drop rows with conditions.
How to Drop Rows with Multiple Conditions in Pandas
You can drop rows in the dataframe based on specific conditions.
For example, you can drop rows where the column value is greater than X and less than Y.
This may be useful in cases where you want to create a dataset that ignores columns with specific values.
To drop rows based on certain conditions, select the index of the rows which pass the specific condition and pass that index to the
drop() method.
df.drop(df[(df['Unit_Price'] > 400) & (df['Unit_Price'] < 600)].index, inplace=True)
df
In this code,
(df['Unit_Price'] >400) & (df['Unit_Price'] < 600)is the condition to drop the rows.
df[].indexselects the index of rows which passes the condition.
inplace=Trueperforms the drop operation in the same dataframe rather than creating a new one.
After dropping the rows with the condition which has the
unit_price greater than 400 and less than 600, you'll have the below data in the dataframe:
This is how you can drop rows in the dataframe using certain conditions.
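An equivalent approach you will often see is boolean indexing: build the same mask, then keep the rows that do not match it. This sketch uses a minimal frame rather than the tutorial's sample data:

```python
import pandas as pd

df = pd.DataFrame({"Unit_Price": [500, 200, 5000.235, 250.50]})
mask = (df["Unit_Price"] > 400) & (df["Unit_Price"] < 600)
df = df[~mask]  # keep everything the condition does NOT match
print(df["Unit_Price"].tolist())  # [200.0, 5000.235, 250.5]
```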
Conclusion
To summarize, in this article you've learnt what the
drop() method is in a Pandas dataframe. You've also seen how dataframe rows and columns are labelled. And finally you've learnt how to drop rows using indices, a range of indices, and based on conditions.
If you liked this article, feel free to share it. | https://www.freecodecamp.org/news/drop-list-of-rows-from-pandas-dataframe/ | CC-MAIN-2021-25 | refinedweb | 1,111 | 71.34 |
#include <CBD8210.h>
class CBD8210 : public CCamacModule {
  CBD8210(int branch);
  const bool Xtest();
  const bool Qtest();
  const bool TimedOut();
  const bool BranchDemand();
  void MNoX(bool fSet = true);
  void MTo(bool fSet = true);
  void MLAM(bool fSet = true);
  void MIT2(bool fSet = true);
  void MIT4(bool fSet = true);
  const bool IT2();
  const bool IT4();
  const unsigned short ReadCsr();
  void WriteCsr(unsigned short nMask);
  void WriteIFR(unsigned short nMask);
  const unsigned short ReadBTB();
  const unsigned long ReadGl();
  const void InitBranch();
};
The CBD8210 class encapsulates the functions offered by the register set of the CES CBD8210 VME CAMAC parallel branch driver module. This module is obsolete; newer applications should use the Wiener VC32/CC32 module pair, or better yet, phase out of CAMAC altogether.
Since this class is derived from legacy support where only a single VME crate was
allowed, multiple VME crate support is a bit whacky. Each VME crate can have up
to 8 branches. The branch number is determined by:
vme_crate*8 + branch_selector where
vme_crate is the VME crate number the CES CBD8210 is
installed in and
branch_selector is the branch number
selected on the module's front panel.
Constructor, creates a new CBD8210 module for
branch number
b. See
DESCRIPTION above for more information about the branch number.
Tests the X response of the last operation on this branch. An X response is true if the addressed module has accepted the function. Usually this is the case if there is a live module at the addressed location.
Checks the Q response of the last operation. The Q response is used for two purposes in general. If the operation is not a test, the Q indicates successful completion of the operation; otherwise it indicates the result of the test. For example, a module may not accept some functions in the busy state. Q may be false if one such function was attempted and the module was busy. For example: most modules have function codes for testing their LAM (Look At Me). These functions will return Q true if the LAM is active, and false otherwise.
Returns true if the last operation on this branch timed out. In general this can only happen if the CAMAC crate being addressed was off, or unplugged from the branch highway, or has a failing controller.
Returns true if there's a branch demand present. Branch demands are used to indicate the presence of a LAM in a CAMAC crate on the branch that has its LAM added to the set of graded LAM demands.
Sets/clears the controller's MNOX bit in the
control/status
register according to the value of
fSet.
Normally, the branch highway driver will interrupt
if an operation is performed and an X is not present. Setting
this bit prevents that interrupt.
This sets or clears the MTO bit in the control/status register of the module. When MTO is clear, branch timeouts result in an interrupt, when set, this interrupt is inhibited.
Sets or clears the MLAM bit in the
module's control/status register.
according to the value of
fSet.
If this bit is clear,
Branch Demands (LAMs from a crate) will result in a VME bus
interrupt. If set, interrupts will not occur.
Sets or clears the MIT2 bit in the
module control/status register according to the value of
fSet.
If the bit is clear, a NIM pulse on the IT2 input of the module
results in a VME interrupt. If set, no interrupt is generated.
Note that the pulse latches a status bit which can be read
via
IT2.
Sets or clears the control status register's MIT4
bit according to the state of
fSet
When clear, the IT4 input causes an interrupt. When set, IT4
does not cause an interrupt, but latches a status bit that can
be read via the
IT4 function.
Tests the state of the IT4 bit in the interrupt status register.
If an IT4 input has been latched this will return
true otherwise false.
To clear the latched IT4 status, you must write the IT4 bit
to the IFR (see
WriteIFR).
Reads the contents of the module control status register. See the manual for the CBD 8210 for information about the layout of this register.
Writes
nMask to the module control status
register. See the hardware manual for the module for more information
about the layout of this register.
Writes the
nMask parameter to the module's
IFR register. The layout of this register
is documented in the CES CBD8210 hardware manual. Note, however
that this register is used to clear latched IT2
and IT4 inputs.
Reads the branch timing register from the module. The BTB register is a bit mask with a bit for each crate. It allows you to determine which crates are on/offline.
Reads the graded LAM mask for the branch. See the CES CBD8210 manual for more information about graded lams and the layout of this register.
Performs a Branch Zero (BZ) on the branch.
The example below accesses branch 0 and does a BZ:
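The example snippet itself seems to have been lost from this page. A minimal sketch of what it presumably looked like (header name assumed from the class name; this only compiles against the NSCL DAQ CAMAC library):

```cpp
#include <CBD8210.h>

int main()
{
  CBD8210 branch(0);    // branch 0: VME crate 0, front-panel branch selector 0
  branch.InitBranch();  // performs a Branch Zero (BZ) on the branch
  return 0;
}
```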
The CES CBD 8210 hardware manual (online at the NSCL at: | http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.0/r25338.html | CC-MAIN-2017-30 | refinedweb | 858 | 72.76 |
Since sbt supports creating subprojects in a plugin, crossProject may become unnecessary in the future.
I’m assuming he means that in Scala, any public method newly added breaks
code that relied on that method not yet existing to trigger an implicit
conversion.
For example:
class A
class B { def yo: Unit = println("Yo!") }

object B {
  implicit def aToB(a: A): B = new B
}

val a = new A
a.yo // prints "Yo!"
a.yo, because A does not have yo, triggers implicit conversion to B, which
has yo, so this prints “Yo!”. This breaks if some one adds a method to A
called yo, because now there will be no implicit conversion to B:
class A { def yo: Unit = println("Yeah!") }
I’m just going to put this right here…
This is valid. And we should not discount this viewpoint.
You said that earlier. I took note of it. I don’t think I have said anything in this thread that should even suggest that I am discounting it.
On the contrary, my very first post was all about getting more understanding of what makes it a burden to support Scala.js. While on that topic, I am still waiting for answers (any answer, actually) to the question I asked in that post. Without those answers, I cannot begin to think about how to discuss possible solutions.
Quoting jvican (post 18): "Why people don't know about them or don't use them correctly is another discussion."
Sure they may not be accessible but they still create issues when they are being patched as sjrd noted here, which brings me to my next point.
Honestly, I don’t see what the problem is. If settings are private for a project, the shouldn’t be patchd from another project at all. Obviously this may not be strictly correct from how SBT is designed, but at least to my eyes its semantically more correct with how the build tool should be treated.
I mean effectively you are saying that a Scala backend is unable properly encapsulate settings only specific to their platform (which shouldn’t be set from other platforms)
I am not trying to blame SBT for the sakes of blaming SBT. Its probably completely possible to represent this correctly as you are stating. All I am trying to state is that right now, the maintenance for library maintainers is starting to get silly and I don’t think its useful to just stick our heads in the sand.
I mean people give crap about Spark using maven and having to manually patch .pom files to cross build Scala versions and now I am basically having to end up doing the same thing with SBT but with environment variables.
Afaik CBT doesn’t rely on anything specifically, (are you talking about source compatibility for CBT itself or for the projects its building?). What I do know is that in CBT the entire DAG is represented in Scala. Like you said maybe the issue with SBT is the tight coupling
Like you want console (JVM-only) and fastOptJS (JS-only) to be private? This doesn’t make any sense.
I get the sense that this point, multiple backends make maintaining libraries in the Scala system more difficult, hasn’t been accepted as a valid point. As a result, I see a lot of defending the status quo rather than contributing to brainstorming about what we could do differently.
I think what would be more constructive is if we could instead put our heads together to try to imagine ways to move more burden away from library maintainers. Right now I see only two individuals making suggestions in this direction. I think more of us could chip in here.
My point was that the tight coupling has nothing to do with sbt — i.e. it will happen in CBT too.
What I meant by “relying on” is that CBT’s fundamental rule is to cross-compile the build before using it. For cross-compiling it, you have source dependencies on both the CBT version you build and the plugins.
I don’t think I have said anything in this thread that should even suggest that I am discounting it.
I’d like to say your posts have been the most constructive posts in the whole thread. I don’t see why people want to so frantically start scrambling for solutions even when there is not even consensus on what the basic facts on the ground are, nevermind what the problem to solve is.
Seems like a recipe for spending lots of effort, not actually solving any problem, and ending up regretting the choices made.
For what it’s worth my questions, trying to dig into the unhappiness and surface the actual problems, have been more-or-less ignored as well. Honestly feels like they’ve been “tossed aside” or “discounted”. But perhaps this just isn’t the right group or place for a solid 5-why’s/root-cause analysis.
I completely agree. I feel like at this point we need to take an approach
that focuses on specifics, its almost too tough to talk about the actual
problem in general terms. For instance, Travis brought up build time
increases. And he’s right that sucks, so maybe the path to a better
maintainer experience is finding these paper cuts and just being methodical
about crushing them.
-Dan
If what you see from my posts is defending the status quo, look again. I have so far done nothing but
You can re-read every single paragraph I have written, and it fits in one of those three buckets. Now if those buckets are wrong to talk about to begin with, then I really don’t know what to do.
If the premise of the whole thing is “libraries maintainers must not touch Scala.js whatsoever”, then the answer is simple: stop touching it, and let a Scala.js user fork and publish the Scala.js version of your library. It has been done before; it works.
That definitely “moves more burden away from library maintainers”. But I hope we can do better than that: reduce the overall burden altogether. But for that we need to understand exactly what this burden is.
This is actually what we wanted to do in the beginning, but we couldn’t because sbt broke support for pattern-extracting (lazy) vals at the top-level of .sbt files in sbt 0.13.7. See the release notes.
I already linked 2 issues demonstrating my personal problems, and I have given plenty of feedback in other threads regarding issues. You can also look at my comment history on github, I am not just complaining because I have nothing better to do.
To reiterate (and in no particular order and related to this topic)
in ThisBuild
I mean the thing is we had a similar issue before in another area, for example if you had a library that is being published for 2.10, 2.11 and 2.12 which depended on a lot of other modules, and some of those modules didn’t have a version of Scala for 2.12 you would have to just wait until that library was published. This is now solved with and is included in default in SBT 1.0.0.
So the way I see it, we either make it a lot easier to deal with build matrix’s or we explore other options like source distributions.
Completely agree here
I mean the issue have already been stated numerous times, what exactly is unclear?
This isn’t the premise and also the solution isn’t an ideal sustainable one. I don’t think that confusing people with having a different package/artifact name just for Scala.js is a good idea
You most certainly can: use an environment variable as detailed in the release notes of Scala.js 1.0.0-M1. It might not be ideal, as opposed to ++, but it’s definitely possible and has very little boilerplate. Also, things like Travis have very good support for running a matrix with different environment variables.
Again, I acknowledge that it is not ideal compared to ++, but I cannot take “You cannot right now do this” as valid criticism. I’m willing to discuss how we can improve this, but I first need someone starting to give at least a starting idea on how this could be done better, taking into account the technical constraints. For example, just saying “there should be a command **1.0.0-M1 in sbt to switch to Scala.js 1.0.0-M1” does not take technical constraints into account, because you need a different sbt plugin to be on the classpath of your build to be able to switch the version of Scala.js, and a command does not have the power to change the classpath of the build. I also have several times heard people suggesting that sbt-scalajs should be a tiny shell independent of the actual version of Scala.js, and load the linker and everything else reflectively in a ClassLoader. This is viable, though quite complex, until you start considering other sbt plugins depending on sbt-scalajs, such as scalajs-bundler or sbt-web-scalajs.
I will say it again: I don’t have any idea to solve this, but if someone does, I am all ears.
OK, that’s the first time I hear about this one. Let’s add it to the list. I’m not sure why it happens. I mean it would obviously take “twice the time” to resolve the setting dependency of a crossProject (which really is 2 projects) vs one project, but from your comment it sounds like this is more than 2x?
Is that really related to Scala.js cross-compilation or to sbt multi-projects in general? I used to recommend to people that they always use a commonSettings, except for the very specific case of crossScalaVersions. That was easy to understand and apply (even though maybe not understand why), but apparently it confuses some tools like Ensime. Now my recommendation would be: try in ThisBuild first for any common settings that you have, and if that has no effect for a particular setting, put it in commonSettings. An easy way to test whether it has any effect is not to put in the build, start up sbt, then set it dynamically. set will tell you how many other settings and tasks are affected by the change. If it says 0, chances are it does not have any effect.
I did not comment on this earlier because it seemed to me that the DateTime questions were kind of rhetorical. Of course we can write a standard Scala date-time API. Why don’t we? Probably because it’s too big an endeavor to undertake given that java.time already exists and is “good enough”. So yes, we should definitely port java.time to all the platforms, and there are various efforts in this direction. The fact that there are 3 competing solutions is a real bummer. We first made scalajs-java-time, but then other people created competing solutions instead of contributing to the existing one. At that point I don’t know what to do anymore. Am I supposed to take an authoritarian position and declare one of the alternatives the right one, and encourage everyone to shun the other ones? What if the other ones do have some parts of the API that are not supported in the right one yet?
I’m genuinely asking those questions. The java.time situation bothers me.
To this particular point:
I don’t understand because, as mentioned above, the JDK 8 java.time API is ported (3 times).
So, you’re saying that since sbt-doge solved that issue before, which was “similar”, why is it that we haven’t solved platform cross-compilation yet? I can answer that. I very much wish myself that we had solved that already.
The fact is that sbt-doge had a much easier problem to solve. For starters, it was already definitely possible to publish some modules with 2.10 and 2.11 support, while other modules supported 2.12 too. You just could not use +, and instead had to use ++ “manually”. ++ has always worked, way before sbt-doge came. I know because I have used it since the beginning of Scala.js supporting several Scala versions, and I am still doing so across 18 minor versions of Scala for about 20 modules in Scala.js, that support different subsets of Scala versions.
All that sbt-doge solved was to take into account crossScalaVersions at the project level rather than at the build level, which means that it made + work in addition to ++ in such situations. It certainly was a worthwhile improvement, but not a fundamental one.
For the Scala.js cross-compilation issue, even though it can appear as “similar” on a very cursory glance, it is very different. ++ itself doesn’t work, never mind +. And I have commented above about why we could not make ++ work for Scala.js cross-compilation (or at least haven’t managed to do so).
If someone comes up with a solution that makes some kind of ++ command work for Scala.js, I will jump on the solution and distribute it right away. I haven’t been able to come up with a solution for that myself, though. I hope someone finds something where I failed.
It is not ideal for downstream users of libraries, but it is ideal for the Scala/JVM library maintainer. You have to give that solution that much credit.
Package and artifact names can be the same, only the groupID has to change. This could potentially be addressed if the JVM maintainer allows a trusted JS maintainer to publish the Scala.js artifacts in the same groupID.
I know you can work around it right now; most problems you can work around. The point I am making is that it's not a real solution, and it's also not sustainable (because if I also want to support scala-native, I now have these permutations of environment variables I have to deal with, which can also get more complicated when taking into account Scala versions)
I know, I am just pointing out that its a problem and that we should at least investigate a way to solve it
Like with most benchmarking, it's hard to quantify this (whether it's related directly to crossprojects or just large SBT projects in general), but with the sharing of dependencies amongst cross projects and settings, it does really seem to slow down. SBT 1.0.0 may have helped here, but I can't use it due to lack of IntelliJ support
Yeah, this is precisely what I am talking about. I used to use commonSettings until the folks at Ensime complained that it was the "wrong approach", and now it seems to be more of a trial and error thing.
I mean the thing is, why can't we just take what is here and namespace it for scala? I mean the difficulty here is that it takes co-operation both from the Scala guys and the community. This is what is frustrating; the solution is already here but we ended up reimplementing it 3 or 4 times
Yes and this is the problem, it should be ported once and in the scala namespace. We have 4 copies of the same thing with slight variations, this is the issue.
I guess this is what is frustrating, there isn’t an ideal solution. Also I am not necessarily advocating for something exactly the same as crossScalaVersions, my point is that tinkering around with environment variables (or dynamically patching build files) is not a real solution, its a hack.
Sure, I just see it as a bandaid to a real problem.
What would be different if it were in the scala namespace? AFAICT, it would still be exactly the same situation, wouldn’t it?
To me it doesn’t appear as a hack but a proper solution (although potentially not as convenient as something else). I’m not sure what in this strategy you consider a hack. Is it the usage of environment variables per se? If yes, don’t you think most of Linux is a hack? Where is the line where things go from well designed to hack in this scenario, according to you?
XKCD summarized this one nicely. I don’t see that adding it to the scala namespace does anything other than confusing things worse. There is a standard; it is rather decent and widely used; we shouldn’t be changing that standard in incompatible ways. The implementation situation is a mess, yes, but that’s no reason to abandon the standard itself…
It sends a signal that "this is the standard date-time library you are meant to be using". Also note that the intention is for it to be provided, by default, on all Scala platforms.
Its a hack from the point of view that you are using environment variables to bypass a missing capability that the software should provide. Cross building is basic functionality that is expected of a build tool. I mean if we take this argument to its extension, if we supposedly had to set environment variables to specify library dependencies because of a shortcoming of the build tool, many people would agree that such a solution is a hack.
OK, fair enough. I was confused because what you describe is what I would call a workaround, not a hack.
To me, a hack is when you rely on the implementation details of a component rather than its public specification/contract; which exposes you to breakages if the component you are using changes its implementation in the future, even though it still complies with its specification (which it is allowed to do). Examples of hacks in the Scala.js codebase, by my definition, include here, here and here (note that these hacks don’t leak outside of the codebase).
Thanks for clarifying!
But yes, my main point is that this workaround isn’t sustainable as more Scala versions/backends get released | https://contributors.scala-lang.org/t/alternative-scalajs-scala-native-distribution-mechanisms/1166?page=2 | CC-MAIN-2017-43 | refinedweb | 3,106 | 71.55 |
Hot questions for Using Neural networks in for loop
Question:
I have been learning about ANN but the book I'm reading has examples in Python. The problem is that I have never written in Python and these lines of code are too hard for me to understand:
sizes = [3,2,4] self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
I read some things about it and found out that the
randn() function returns an array of shape (y, x), populated with random samples from the standard normal distribution (not uniform values between 0 and 1 — that is rand()).
zip() pairs up the elements of two sequences.
sizes[:-1] returns the list without its last element, and
sizes[1:] returns the list without its first element.
But with all of this I still can't explain to myself what this would generate.
Answer:
sizes[:-1] will return the sublist [3,2] (that is, all the elements except the last one).

sizes[1:] will return the sublist [2,4] (that is, all the elements except the first one).

zip([a,b], [c,d]) gives [(a,c), (b,d)]. So zipping the two lists above gives you [(3,2), (2,4)].
The construction of weights is a list comprehension. Therefore this code is equivalent to
weights = []
for x, y in [(3,2), (2,4)]:
    weights.append(np.random.randn(y, x))
So the final result would be the same as
[ np.random.randn(2,3), np.random.randn(4,2) ]
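To see the resulting shapes concretely, here is the same expression run standalone (a quick check, not taken from the book):

```python
import numpy as np

sizes = [3, 2, 4]
# zip(sizes[:-1], sizes[1:]) pairs consecutive layer sizes: [(3, 2), (2, 4)],
# so each weight matrix has shape (next_layer, previous_layer)
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
print([w.shape for w in weights])  # -> [(2, 3), (4, 2)]
```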
Question:
I'm following the tutorial at this link:
I'm new to neural networking and I'm trying to edit the example in the above tutorial to match my problem. I'm using multiple regression to find coefficients for 3 different sets of data and I then calculate the rsquared value for each set of data. I'm trying to create a neural network that will change the coefficient value to get the rsquared value as close to 100 as possible.
This is how I establish the coefficient and find the rsquared value for that coefficient. All 3 coefficients use these same methods:
Calculations calc = new Calculations();
Vector<double> lowRiskCoefficient = MultipleRegression.QR(
    Matrix<double>.Build.DenseOfColumnArrays(lowRiskShortRatingList.ToArray(),
        lowRiskMediumRatingList.ToArray(), lowRiskLongRatingList.ToArray()),
    Vector<double>.Build.Dense(lowRiskWeekReturnList.ToArray()));
decimal lowRiskShortCoefficient = Convert.ToDecimal(lowRiskCoefficient[0]);
decimal lowRiskMediumCoefficient = Convert.ToDecimal(lowRiskCoefficient[1]);
decimal lowRiskLongCoefficient = Convert.ToDecimal(lowRiskCoefficient[2]);
List<decimal> lowRiskWeekReturnDecimalList = new List<decimal>(lowRiskWeekReturnList.Count);
lowRiskWeekReturnList.ForEach(i => lowRiskWeekReturnDecimalList.Add(Convert.ToDecimal(i)));
List<decimal> lowRiskPredictedReturnList = new List<decimal>(lowRiskWeekReturnList.Count);
List<decimal> lowRiskResidualValueList = new List<decimal>(lowRiskWeekReturnList.Count);
for (int i = 0; i < lowRiskWeekReturnList.Count; i++)
{
    decimal lowRiskPredictedValue =
        (Convert.ToDecimal(lowRiskShortRatingList.ElementAtOrDefault(i)) * lowRiskShortCoefficient)
        + (Convert.ToDecimal(lowRiskMediumRatingList.ElementAtOrDefault(i)) * lowRiskMediumCoefficient)
        + (Convert.ToDecimal(lowRiskLongRatingList.ElementAtOrDefault(i)) * lowRiskLongCoefficient);
    lowRiskPredictedReturnList.Add(lowRiskPredictedValue);
    lowRiskResidualValueList.Add(calc.calculateResidual(lowRiskWeekReturnDecimalList.ElementAtOrDefault(i), lowRiskPredictedValue));
}
decimal lowRiskTotalSumofSquares = calc.calculateTotalSumofSquares(lowRiskWeekReturnDecimalList, lowRiskWeekReturnDecimalList.Average());
decimal lowRiskTotalSumofRegression = calc.calculateTotalSumofRegression(lowRiskPredictedReturnList, lowRiskWeekReturnDecimalList.Average());
decimal lowRiskTotalSumofErrors = calc.calculateTotalSumofErrors(lowRiskResidualValueList);
decimal lowRiskRSquared = lowRiskTotalSumofRegression / lowRiskTotalSumofSquares;
This is the example that performs the training and I'm currently stuck on how to change this example to match what I'm trying to do.

net.PerceptionLayer[1].Output = low;
net.Pulse();
hl = net.OutputLayer[0].Output;

net.PerceptionLayer[0].Output = low;
net.PerceptionLayer[1].Output = high;
net.Pulse();
lh = net.OutputLayer[0].Output;

net.PerceptionLayer[0].Output = high;
net.PerceptionLayer[1].Output = high;
net.Pulse();
hh = net.OutputLayer[0].Output;
    } while (hh > mid || lh < mid || hl < mid || ll > mid);
    MessageBox.Show((count*100).ToString() + " iterations required for training");
}
How do I use this information to create a neural network to find the coefficient that will in turn have a rsquared value as close to 100 as possible?
Answer:
Instead of building one, you can use Neuroph framework built in the .NET by using the Neuroph.NET from here
It is a light conversion of the original Neuroph they did for the JAVA platform.
Hope this helps you.
Question:
I have a working neural network loop so I can run neural networks using a predetermined number of nodes in my hidden layer ('nodes_list'). I then calculate the area under the ROC curve for each number of nodes and put this in a list ('roc_outcomes') for plotting purposes. However, I would like to loop over this loop 5 times to get an average area under the ROC curve for each of the three models (model 1: 20 nodes in hidden layer, model 2: 28 nodes in hidden layer, model 3: 38 nodes in hidden layer). This works fine when I am only trying it on one model, but when I iterate over more than one model instead of iterating over model 1 5 times, then model 2 5 times, then model 3 5 times....it iterates over model 1, then model 2, then model 3, and it does this 5 times. The purpose of this nested loop is for me to iterate over each neural network model 5 times, put the area under the ROC curve for each iteration into a list, calculate a mean of that list, and put the mean into a new list. Ultimately, I would like to have a list of three numbers (1 for each model) that is the mean area under the ROC curve for the 5 iterations of that model. Hopefully, I explained this well. Please ask for any clarification.
Here is my code:
nodes_list = [20, 28, 38]  # list with number of nodes in hidden layer per model
roc_outcomes = []  # list of ROC AUC

for i in np.arange(1,6):
    for nodes in nodes_list:
        # Add first layer
        model.add(Dense(units=n_cols, activation='relu', input_shape=(n_cols,)))
        # Add hidden layer
        model.add(Dense(units=nodes, activation='relu'))
        # Add output layer
        model.add(Dense(units=2, activation='softmax'))
        # Compile model
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        # Fit model
        model.fit(X, y, validation_split=0.33, epochs=epochs, callbacks=early_stopping_monitor, verbose=True)
        # Get predicted probabilities
        pred_prob = model.predict_proba(X)[:,1]
        # Calculate area under the curve (logit_roc_auc)
        logit_roc_auc = roc_auc_score(y[:,1], pred_prob)
        # Append roc scores to the roc_outcomes list
        roc_outcomes.append(logit_roc_auc)
    # Get the mean of that list
    mean_roc = np.mean(roc_outcomes)
    # Append to another list
    mean_roc_outcomes = []
    mean_roc_outcomes.append(mean_roc)
Answer:
Construct your loops like this:
for nodes in node_list:
    for i in range(0, 5):
        # do your stuff
example:
myList = ['a', 'b', 'c']
for item in myList:
    for i in range(0, 5):
        print(item, end=", ")
output:
a, a, a, a, a, b, b, b, b, b, c, c, c, c, c,
Question:
Attempting to write code to generate layers in a Neural Network, and i'm just trying to work out how to assign a value to a variable belonging to the Neuron itself. The structure of a standard Neuron in my code is contained in a class called XORNeuron, and i'm calling this class in a FOR loop in order to generate a certain amount of neurons depending on how many neurons the layer is assigned when it is created.
I'm using an Array to do this, however I was wondering how I would assign the number of inputs the layer is told each neuron it contains has. The number of neurons and number of inputs are both arguments provided upon the Layer Constructor being called, meaning new layers can be created easily and tweaked with regards to their size and relative number of inputs.
The weights are all automatically generated for each input in a FOR loop in the Neuron class itself, depending on a variable the Neuron class holds called "numInputs". I'm attempting to write a FOR loop that will generate a new Neuron instance for the number of neurons the layer is told it holds, and then assign the number of inputs to the "numInputs" variable the Neuron class holds, so it will be able to generate the weights correctly.
My code is as such:
public class XORlayer {

    // Create a constructor to create layers of neurons.
    // Means we only have to set it up once and can feed it
    // settings from then on for tweaking.
    XORlayer(int numNeurons, int inpNum) {

        XORNeuron[] layerLength = new XORNeuron[numNeurons];

        for (int neuCount = 1; neuCount <= numNeurons; neuCount++) {
            layerLength[neuCount - 1] = new XORNeuron();
        }
    }
}
Answer:
Either by calling a setter in the created neuron
XORNeuron[] layerLength = new XORNeuron[numNeurons];

for (int neuCount = 0; neuCount < numNeurons; neuCount++) {
    layerLength[neuCount] = new XORNeuron();
    layerLength[neuCount].setNumInput(inpNum);
    }
}
or by adding the input count to the neuron constructor, so you can do
layerLength[neuCount] = new XORNeuron(inpNum);
(Note: I changed the array indexing to a 0-based for loop, since that's idiomatic for Java).
Question:
I have 53 xls table (ch_1, ch_2, ...) and then use them as a input for Neural Network. After that write the NN result in a new xls and csv.
clc
clear all
files = dir('*.xls');
for i = 1:length(files(:,1))
    aa = xlsread(files(i).name);
    fprintf('step: %d\n', i);
    datanameXls = ['channel_' num2str(i) '.xls'];
    datanameCsv = ['channel_' num2str(i) '.csv'];
    a17 = aa(:,1);
    b17 = aa(:,4);
    p = size(a17);
    c17 = zeros(144,31);

    % Create a Fitting Network
    hiddenLayerSize = 10;
    net = fitnet(hiddenLayerSize);

    % Setup Division of Data for Training, Validation, Testing
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Train the Network
    [net,tr] = train(net,inputs,targets);

    % Test the Network
    outputs = net(inputs);

    A = zeros(4464, 2);
    A = [o, outputs'];
    A(A<0) = 0;
    csvwrite(datanameCsv, A);
    fprintf('csv is written \n');
    xlswrite(datanameXls, A);
    fprintf('xls is written \n');
end
The problem is: when I try this program with one, two, up to nine tables, the results I save through xlswrite are correct, but when I try it with 52 tables I get a wrong table because, for example, ch_1 is overwritten with ch_10. Any idea???
Answer:
I solved my problem. 'dir' reads ch_10 to ch_19 first, then ch_1. I renamed all my files and it works correctly now. I did the following to rename all files:
clc
clear all
files = dir('*.xls');
for k = 1:length(files(:,1))
    oldFileName = sprintf('ch_%dMonth_1.xls',k);
    newFileName = sprintf('%03d.xls',k);
    movefile(oldFileName,newFileName);
end
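The root cause is plain lexicographic ordering of the file names, which puts ch_10 before ch_2. A quick illustration (Python is used here only to demonstrate the ordering; the thread itself is MATLAB):

```python
import re

files = ["ch_1.xls", "ch_10.xls", "ch_2.xls", "ch_19.xls"]

# dir()-style lexicographic order: ch_10 and ch_19 sort before ch_2
print(sorted(files))  # -> ['ch_1.xls', 'ch_10.xls', 'ch_19.xls', 'ch_2.xls']

def natural_key(name):
    # split into digit/non-digit runs so the numeric parts compare as numbers
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", name)]

print(sorted(files, key=natural_key))  # -> ['ch_1.xls', 'ch_2.xls', 'ch_10.xls', 'ch_19.xls']
```

Zero-padding the names, as the accepted fix does with `%03d`, makes lexicographic and numeric order coincide.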
Question:
I'm training a neural network and want my program to feed forward the first 10 examples, then backprop, then loop over next 10 examples and backprop and so on.
Right now I have a code that loops over my whole data set for 5 epochs, but it would be better if it looped in small batches (also for 5 epochs for example).
My question is how to make a loop, based on the one I have, so that it loops over the first 10 i, then does the Net.backward(rate, mse) bit, resets the error sum (sum_error = 0), and then loops over the next 10 i and so on, for the whole dataset (I have 800 examples). I don't know how to achieve that. Should I insert some kind of i counter, like i = i+1?
for j in range(5):
    for i, pattern in enumerate(X):
        Net.net_error(y[i], X[i])
        sum_error = sum_error + np.square(Net.net_error(y[i], X[i]))
    mse = (sum_error) / (len(X))
    print(f" # {str(j)}{mse}")
    Net.backward(rate, mse)
    sum_error = 0
The code that is responsible for the net_error part:
def feed_forward(self, X):
    self.z1 = np.dot(X, self.input_to_hidden1_w)
    self.z1_a = self.activation(self.z1)
    self.z2 = np.dot(self.z1_a, self.hidden1_to_hidden2_w)
    self.z2_a = self.activation(self.z2)
    self.z3 = np.dot(self.z2_a, self.hidden2_to_output_w)
    self.output = self.activation(self.z3)
    return self.output

def net_error(self, y, X):
    net_error = y - self.feed_forward(X)
    return net_error
Answer:
As to your original question, I think you might want to do something like:
num_epochs = 5
batch_size = 10

for epoch in range(num_epochs):
    perm_idx = np.random.permutation(len(X))
    for ix in range(0, len(perm_idx), batch_size):
        batch_indicies = perm_idx[ix:ix+batch_size]
        sum_error = 0
        for i in batch_indicies:
            sum_error += np.square(Net.net_error(y[i], X[i]))
        Net.backward(rate, sum_error / len(X))
Note that I'm using permutation to get random subsets of the data in each batch, which might help with biases.
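To verify the slicing logic in isolation (using a toy stand-in dataset instead of the real X and Net):

```python
import numpy as np

X = np.arange(23)          # stand-in dataset of 23 "examples"
batch_size = 10

perm_idx = np.random.permutation(len(X))
batches = [perm_idx[ix:ix + batch_size] for ix in range(0, len(perm_idx), batch_size)]

# 23 examples with batch_size 10 -> batches of 10, 10 and 3 (last one is smaller)
print([len(b) for b in batches])  # -> [10, 10, 3]

# every example index appears exactly once per epoch
print(sorted(np.concatenate(batches).tolist()) == list(range(23)))  # -> True
```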
Question:
The increase in network size is not the cause of the problem.
here is my code
for i in [32, 64, 128, 256, 512]:
    for j in [32, 64, 128, 256, 512]:
        for k in [32, 64, 128, 256, 512]:
            for l in [0.1, 0.2, 0.3, 0.4, 0.5]:
                model = Sequential()
                model.add(Dense(i))
                model.add(Dropout(l))
                model.add(Dense(j))
                model.add(Dropout(l))
                model.add(Dense(k))
                model.add(Dropout(l))
                model.compile(~)
                hist = model.fit(~)
                plt.savefig(str(count) + '.png')
                plt.clf()
                f = open(str(count) + '.csv', 'w')
                text = ~
                f.write(text)
                f.close()
                count += 1
                print()
                print("count :" + str(count))
                print()
I started count at 0. When count is 460 to 479, the epoch time is:
Train on 7228 samples, validate on 433 samples
Epoch 1/10
 - 2254s - loss: 0.0045 - acc: 1.3835e-04 - val_loss: 0.0019 - val_acc: 0.0000e+00
Epoch 2/10
 - 86s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0030 - val_acc: 0.0000e+00
Epoch 3/10
 - 85s - loss: 0.0017 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00
Epoch 4/10
 - 86s - loss: 0.0015 - acc: 1.3835e-04 - val_loss: 1.6094e-04 - val_acc: 0.0000e+00
Epoch 5/10
 - 86s - loss: 0.0014 - acc: 1.3835e-04 - val_loss: 1.4120e-04 - val_acc: 0.0000e+00
Epoch 6/10
 - 85s - loss: 0.0013 - acc: 1.3835e-04 - val_loss: 3.8155e-04 - val_acc: 0.0000e+00
Epoch 7/10
 - 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.1694e-04 - val_acc: 0.0000e+00
Epoch 8/10
 - 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.8163e-04 - val_acc: 0.0000e+00
Epoch 9/10
 - 86s - loss: 0.0011 - acc: 1.3835e-04 - val_loss: 3.8670e-04 - val_acc: 0.0000e+00
Epoch 10/10
 - 85s - loss: 9.9018e-04 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00
But when I restart PyCharm and count is 480, the epoch time is:
Train on 7228 samples, validate on 433 samples
Epoch 1/10
 - 151s - loss: 0.0071 - acc: 1.3835e-04 - val_loss: 0.0018 - val_acc: 0.0000e+00
Epoch 2/10
 - 31s - loss: 0.0038 - acc: 1.3835e-04 - val_loss: 0.0014 - val_acc: 0.0000e+00
Epoch 3/10
 - 32s - loss: 0.0031 - acc: 1.3835e-04 - val_loss: 2.0248e-04 - val_acc: 0.0000e+00
Epoch 4/10
 - 32s - loss: 0.0026 - acc: 1.3835e-04 - val_loss: 3.7600e-04 - val_acc: 0.0000e+00
Epoch 5/10
 - 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 4.3882e-04 - val_acc: 0.0000e+00
Epoch 6/10
 - 32s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0037 - val_acc: 0.0000e+00
Epoch 7/10
 - 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 1.2072e-04 - val_acc: 0.0000e+00
Epoch 8/10
 - 32s - loss: 0.0019 - acc: 1.3835e-04 - val_loss: 0.0031 - val_acc: 0.0000e+00
Epoch 9/10
 - 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 0.0051 - val_acc: 0.0000e+00
Epoch 10/10
 - 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 3.2728e-04 - val_acc: 0.0000e+00
I just started it again, but the epoch time was faster.
I don't know why this happened.
In the Python 3.6 version, I use tensorflow-gpu 1.13.1 version, and Cuda uses 10.0 version. OS is a Windows 10 1903 pro version and OS build uses 18362.239 Pycharm uses a 2019.1.1 community version.
I just used the for loop, and I wonder why this happened.
I changed the number of units in the for loop.
I also saved the figure with a plt.savefig, and saved the data in .csv format.
And I would also like to ask how to solve it.
Answer:
You should use:
from keras import backend as K
K.clear_session()
before creating the model (i.e. model = Sequential()). That's because:
Ops are not garbage collected by TF, so you always add more nodes to the graph.
So if we don't use K.clear_session, then a memory leak occurs.
Thanks to @dref360 at keras.io in Slack. | https://thetopsites.net/projects/neural-network/for-loop.shtml | CC-MAIN-2021-31 | refinedweb | 2,701 | 61.22 |
Hi,
Yes, I'm new to both the Python programming language and mod_python, and I found the import_module() paragraph in the manual not really easy to digest.

What I understood is that such a function is necessary to reload modified modules used by the webserver, because the interpreter lives as long as the apache child process lives.

Okay, but what was not clear is: does mod_python override Python's "import" statement so that it does the same thing as import_module()?
If yes, why this function?
I made a test and I modified a module imported by the "import" statement in my handler module: the changes were taken into account without restarting apache.

Maybe the difference is in the treatment of packages. The import_module() breaks all namespace stuff with packages (we could not do the same as "import package.subpackage.module" and then use stuff with the "package.subpackage.module.function()" notation, although this doesn't seem to be recommended in classic python software).

As I am new to Python, I was first thinking of designing my website like a classic python application, with packages and stuff, but tell me if I should rather think of arranging my modules in classic directories and use import_module() each time.
Thanks | http://modpython.org/pipermail/mod_python/2008-August/025546.html | CC-MAIN-2018-34 | refinedweb | 202 | 55.27 |
ASP.NET Micro Caching - Benefits of a One-Second Cache
Date Published: 03 May 2004
Introduction
I.
Server and Test Setup
The test server had the following specifications:
- 1GHz P3 Processor
- 512MB RAM
- Windows Server 2003
- SQL Server 2000
To test the page, I used Application Center Test (ACT), running locally. Everything in this test is running on one box, which I realize has some implications as far as where the bottlenecks in performance will occur. During all tests the CPU was maxed out, indicating that increased performance could have been achieved with a more powerful box and/or by splitting the work between several servers. However, the point of this test was not to achieve the maximum performance possible for this trivial example--it was to measure the effects of a small amount of caching on a simple but high-volume application.
The test script consisted of a single request to my default.aspx page. The script was constructed to use three simultaneous users for five minutes with thirty seconds of "ramp-up" time (to ensure the app had compiled and the sql server was awake and running full speed). ACT, unlike other tools such as LoadRunner, does not let you add "think time" between users' requests without editing the vbscript by hand, so although there are only three users, they are hammering the server because there is no delay between the completion of one of their requests and the initiation of their next request.
Test Page Without Caching
I created a simple ASP.NET page that calls "SELECT * FROM Products" against a local instance of the Northwind database, fills a DataSet with the result, and binds it to a DataGrid. To limit the web server overhead, I disabled ViewState and SessionState on the page. The page's code without caching was as follows:
Default.aspx
<%@ Page language="c#" Codebehind="Default.aspx.cs" AutoEventWireup="false" Inherits="MicroCache._Default" EnableSessionState="false" EnableViewState="false" %>

The codebehind performed the logic as follows:
Default.aspx.cs (excerpt)
public class _Default : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.DataGrid DataGrid1;

    private void Page_Load(object sender, System.EventArgs e)
    {
        BindGrid();
    }

    private void BindGrid()
    {
        SqlConnection conn = new SqlConnection("server=localhost;database=northwind;integrated security=true");
        SqlDataAdapter cmd = new SqlDataAdapter("SELECT * FROM Products", conn);
        DataSet ds = new DataSet("Products");
        cmd.Fill(ds);
        DataGrid1.DataSource = ds;
        DataGrid1.DataBind();
    }
    ...
}
Results Without Caching
Running the test without any caching on the web server resulted in 16,961 requests and an average requests per second of 56.54. The Time to First Byte (TTFB) and Time to Last Byte (TTLB), which are useful measures of the site's performance from an individual user's perspective, were both 52ms--instant as far as a user is concerned. For most web applications, anything under one second is an acceptable TTLB.
Below is a graph of requests per second over time for the page without caching.
Figure 1
Test Page With Caching
For the second part of the test, I modified the test page by adding one line of code to Default.aspx:
<%@ OutputCache Duration="1" VaryByParam="none" %>
The new Default.aspx then looked like this:
<%@ Page language="c#" Codebehind="Default.aspx.cs" AutoEventWireup="false" Inherits="MicroCache._Default" EnableSessionState="false" EnableViewState="false" %>
<%@ OutputCache Duration="1" VaryByParam="none" %>

The same test was run against this page.
Results With Caching
Running the test a second time against a page with one second of output caching enabled resulted in a total of 74,157 requests and an average requests per second of 247.19. The TTFB and TTLB were both 11ms. So, compared to the non-cached page, the throughput increased by over 400% and each page took 80% less time to load, five times the performance.
Figure 2 below shows a graph of requests per second over time.
Figure 2
The next figure shows both tests, side-by-side.
Figure 3
Analysis and Summary
Clearly, even just a second's worth of caching can have a big impact on performance for a high-volume application. It's easy to see why this is the case if we look at the database's performance during this test. I ran the test with two performance counters:
SQLServer:SQL Statistics\Batch Requests/sec
ASP.NET Apps v1.1.4322\Requests/Sec\myapplication
In the non-caching test, the SQL Server requests/sec counter averaged 56.2 requests per second. In the micro-caching test, it averaged 1.0 request per second. Obviously, the database was not having to do nearly as much work. However, not all of the savings were on the database side. Loading data into a DataSet is relatively expensive as well, as is data binding. Although there aren't any performance counters for these activities specifically, we know they were occurring once per request in the first case, and once per second in the latter case, so we dropped a lot of work from the ASP.NET engine's plate as well.
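As a quick sanity check on the headline numbers (a back-of-the-envelope calculation, not part of the original benchmark), the reported figures line up as follows:

```python
# Reported figures from the two test runs
req_nocache, req_cache = 16961, 74157      # total requests in 5 minutes
rps_nocache, rps_cache = 56.54, 247.19     # average requests per second
ttlb_nocache, ttlb_cache = 52, 11          # time to last byte, in ms

throughput_ratio = rps_cache / rps_nocache        # roughly 4.4x the original throughput
latency_reduction = 1 - ttlb_cache / ttlb_nocache # roughly 0.79, i.e. ~80% less time
speedup = ttlb_nocache / ttlb_cache               # roughly 4.7x faster per page

print(round(throughput_ratio, 2), round(latency_reduction, 2), round(speedup, 1))
```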
Typically, the database and the web server will reside on separate boxes. The reduction in database load would clearly benefit the database server, which is a very good thing since, historically, database servers are much more difficult to scale out than web servers. In addition, it will reduce network traffic between the web and database servers, which will further enhance performance, and the reduction in web server load is also helpful, of course, since it reduces how many web servers would be needed (or how powerful each one would need to be).
Summary
Caching is a very useful tool to take advantage of when designing applications. It is not a panacea, and like any tool, it can be used improperly. But even in situations where data needs to be up to the second, adding one second's worth of caching can make a big impact on application performance. Remember to ask your users how much they can tolerate out-of-date data and try to get them to at least sign off on one-second-old data if you can, since that will allow you to take advantage of techniques like the one described here.
Originally published on ASPAlliance.com.
Hi All,
If I copy/paste line for line the below code into IDLE (v2.5 or v2.6), it works perfectly and sends me the email message.
import smtplib

fromadd = "arcazy@foo.com"
toadd = "meufelt@wfoo.com"
msg = "Insert message body here"

server = smtplib.SMTP('exchange01.foo.com')
server.sendmail(fromadd, toadd, msg)
server.quit()
However, if I try to open/run the .py script in IDLE (v2.5 or v2.6), or with os.system(), I get the following error message:
Traceback (most recent call last):
  File "\\Wc98466\d\baks\dont_use\devel\email.py", line 1, in <module>
    import smtplib
  File "C:\Python25\lib\smtplib.py", line 46, in <module>
    import email.Utils
  File "\\Wc98466\d\baks\dont_use\devel\email.py", line 12, in <module>
    server = smtplib.SMTP('exchange01.foo.com')
AttributeError: 'module' object has no attribute 'SMTP'
Any ideas as to why these are running differently if I run from a file vs copy/pasting the code directly into IDLE as I am totally at a loss?
Thanks,
R_ | https://www.daniweb.com/programming/software-development/threads/389106/script-wont-run-in-idle-but-individual-lines-run-fine | CC-MAIN-2017-26 | refinedweb | 172 | 63.66 |
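For what it's worth, the traceback itself points at the likely culprit: the script is saved as email.py, so when smtplib does "import email.Utils" it picks up the script instead of the standard library's email package. A minimal illustration of that shadowing (the file contents here are made up for the demo):

```python
import os
import sys
import tempfile

# Recreate the failure mode: a local email.py shadows the stdlib 'email'
# package that smtplib imports internally.
d = tempfile.mkdtemp()
with open(os.path.join(d, "email.py"), "w") as f:
    f.write("marker = 'local shadow'\n")

sys.path.insert(0, d)            # like running a script saved in that directory
sys.modules.pop("email", None)   # force a fresh import resolution
import email

print(getattr(email, "marker", None))  # the local file won, not the stdlib package
```

Renaming the script to something other than email.py avoids the clash.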
I need to know how to generate a random number with different digits, like 1357 or 123456789
Thank you
Yes it is homework.
All I need to do is to generate a random number with 4 different digits from 0-9 (the first one is not 0).
That's all
random.randint()
metulburr wrote:Yes it is homework.
All i need to do is to generate a random number with 4 different digits from 0-9(the first one is not 0).
That's all
and the code you have already tried is where?
try:
random.randint()
lehongnhat314 wrote:I need the number to be 1234 or 1357, not 1122. It's like the number has to be different digits
nah = 0
for i in range(len(num)):
    for j in range(i+1, len(num)):
        if num[i]==num[j]:
            nah = 1
import random # You will need the randint function from the random module
result = [] # Init your result list
first_number = str(random.randint(1, 9)) # Guarantee the first number can't be zero. Make it a string so you can join them later.
result.append(first_number) # Append it into the currently empty result list.
while len(result) < 4:  # Loop until the length of result is 4 digits
    next_number = ???????
>>> list_of_stuff = [1, 5, 'spam', 'Ni', 42]
>>> 5 in list_of_stuff
True
>>> 99 in list_of_stuff
False
>>> 'spam' in list_of_stuff
True
random.sample("1234567890",desired_length)
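Putting the hints together (a nonzero first digit, then sampling without replacement for distinctness), one way to finish the exercise might look like this; the function name is just for illustration:

```python
import random

def random_distinct_digits(n=4):
    # first digit from 1-9, so there is no leading zero
    first = random.choice("123456789")
    # the remaining digits sampled without replacement from what's left
    rest = random.sample([d for d in "0123456789" if d != first], n - 1)
    return int(first + "".join(rest))

print(random_distinct_digits())  # e.g. 1357 -- n distinct digits, no leading zero
```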
A flutter based liquid swipe
This repository contains the Liquid Swipe Flutter source code. Liquid swipe is the revealing clipper to bring off amazing liquid like swipe to stacked Container/Widgets and inspired by Cuberto's liquid swipe and IntroViews.
* Get the package from Pub:
```
flutter packages get
```

* Import it in your file:

```dart
import 'package:liquid_swipe/liquid_swipe.dart';
```
`LiquidSwipe` just requires the list of Widgets, like `Container`s, to provide flexibility to the developer to design their own UI through it.

```dart
final pages = [
  Container(...),
  Container(...),
  Container(...),
];
```

Now just pass these pages to the `LiquidSwipe` widget:

```dart
@override
Widget build(BuildContext context) {
  return new MaterialApp(
    home: Builder(
        builder: (context) => LiquidSwipe(
              pages: pages,
            )),
  );
}
```

;)
Please download apk from Releases or Assets folder
Please Refer to API documentation for more details.
| Property | Type | Description | Default Value |
|-|:-:|-|:-:|
| pages | `List` | Set Pages/Views/Containers. See complete example for usage. | @required value |
| fullTransitionValue | `double` | Handle swipe sensitivity through it. Lower the value, faster the animation. | 400.0 |
| initialPage | `int` | Set initial page value; a wrong position will throw an exception. | 0 |
| slideIconWidget | `Widget` | Icon/Widget you want to display for swipe indication. Remember the curve will be created according to it. | null |
| positionSlideIcon | `double` | Icon position on vertical axis. Must satisfy this condition: `0.0 <= value <= 1.0` | 0.8 |
| enableLoop | `bool` | Whether you want to loop through all those `pages`. | true |
| liquidController | `LiquidController` | Controller to handle some runtime changes. Refer | null |
| waveType | `WaveType enum` | Type of clipper you want to use. | WaveType.liquidReveal |
| onPageChangeCallback | `Callback` | Triggered whenever the page changes. | null |
| currentUpdateTypeCallback | `Callback` | Triggered whenever UpdateType changes. Refer | null |
| slidePercentCallback | `Callback` | Triggered on swipe animation. Use carefully as it's quite frequent on swipe. | null |
| ignoreUserGestureWhileAnimating | `bool` | If you want to block gestures while swipe is still animating. See #5 | false |
| disableUserGesture | `bool` | Disable user gesture, always. | false |
| enableSideReveal | `bool` | Enable/Disable side reveal | false |
A Controller class with some utility fields and methods.
Simple Usage :
Firstly make an object of `LiquidController` and initialize it in `initState()`:

```dart
LiquidController liquidController;

@override
void initState() {
  super.initState();
  liquidController = LiquidController();
}
```
Now simply add it to LiquidSwipe's constructor.
- `currentPage` - Getter to get the current page. Default value is 0.
- `isUserGestureDisabled` - If somehow you want to check whether gestures are disabled or not. Default value is false.
This project is created by Sahdeep Singh but with lots of support and help. See credits.
If you appreciate my work, consider buying me a cup of :coffee: to keep me recharged :metal: + PayPal
Or you can also connect/endorse me on LinkedIn to keep me motivated.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome! | https://xscode.com/iamSahdeep/liquid_swipe_flutter | CC-MAIN-2022-05 | refinedweb | 440 | 51.44 |
In this post we announce what we've been working on for the last couple of months: an alternative command line interface to EOS, aimed to be the foundation for future development tools for the EOS ecosystem. We also announce the creation of a public endpoint for accessing EOS testnet.
Rationale
For those less familiar with the EOS codebase, eosc is the official CLI (Command Line Interface) for EOS. It interacts with the EOS blockchain by connecting to a full node called eosd, which can run either locally or on a remote server.

When working with EOS smart-contracts, we've found that eosc has some inconvenient limitations:
- Firstly, it's quite heavyweight in terms of external dependencies, as it's tightly connected to the entire EOS codebase.
- Secondly, it's not ready to be used in a Windows environment.
- And finally, it's hard to use eosc programmatically, as it doesn't offer an API.
It could be enough for us to develop a minimal C++ library acting as an EOS API and this way implement all the commands supported by eosc. However, it was a short step to provide such a library with a command line interface, and thus create a full-blown eosc replacement, which we've named teos.

Here are the benefits of using teos instead of eosc:
- With teos you can do everything available in eosc and much more, as we've added a richer & more useful command option list.
- Also, as teos is not dependent on the entire EOS codebase, it can be easily compiled on any platform, including Windows, which is not the case with eosc.
- And last but not least, teos has an underlying library which offers a proper API which you can use to interact programmatically with eosd (EOS full node).
For obvious reasons everything we do is open source. The source code of teos is located in this repository.
Note: To make our project fully cross-platform (including Windows), we needed to clone and modify some of the support libraries from the EOS codebase. For more details please refer to this document.
The ultimate goal
Our plans include opening up EOS for smart-contract development on any platform, including Windows. Once we're finished building the underlying API, teos will be an excellent starting point for creating a truly cross-platform set of development tools, including a smart-contract deployment framework similar to Ethereum's Truffle.
Public endpoint
As teos is foremost an EOS client, this announcement includes opening up Tokenika's publicly accessible endpoint as a gateway for trying out EOS and the official testnet without the trouble of running your own full node. This endpoint can be accessed no matter if you're going to use eosc, the official CLI, or teos, our alternative.
Our endpoint is available at eos-testapi.tokenika.io. It's not only a fully featured, general-purpose API but also has the wallet API enabled for testing purposes - this should never be done in production!
It is all for you to use, test, and break. We want to be aware of all possible flaws before the end of the testing period.
Our endpoint is connected to the official EOS testnet. Here are the details:
- API: eos-testapi.tokenika.io:8888
- P2P: eos-testp2p.tokenika.io:9876 (or p2p-testnet1.eos.io:9876, if you want to use the official one)
For endpoint configuration please refer to this document.
Disclaimer: Use it at your own risk and when you really know what are you doing. Do not store any key material that has or will have any value in future.
Comparison
As it was mentioned above,
teos covers the same functionality as
eosc, but it's more user friendly and offers a wider selection of options.
Let's take a look at the
get block command and the response
eosc gives regarding its options:
./eosc get block --help
ERROR: RequiredError: block Retrieve a full block from the blockchain Usage: ./eosc get block block Positionals: block TEXT The number or ID of the block to retrieve
A similar command in
teos will produce something like this:
./teos get block --help
Retrieve a full block from the blockchain Usage: ./teos get block [block_num | block_id] [Options] Usage: ./teos get block [-j "{"""block_num_or_id""":"""int | string"""}"] [OPTIONS] Options: -n [ --block_num ] arg Block number -i [ --block_id ] arg Block id -h [ --help ] Help screen --wallet-host arg (=localhost) The host where eos-wallet is running --wallet-port arg (=8888) The port where eos-wallet is running -V [ --verbose ] Output verbose messages on error -j [ --json ] arg Json argument -v [ --received ] Print received json -r [ --raw ] Raw print -e [ --example ] Usage example
Now, let's consider using the
get block command in
eosc:
./eosc get block 25
{ }
A similar command in
teos will produce a response which is less verbose by default, thus more readable:
./teos get block 25
## block number: 25 ## timestamp: 2017-11-29T09:50:03 ## ref block prefix: 623236675
But you can make it verbose, if you need:
./teos get block 25 --received
{ " }
Furthermore, you can make it both verbose and unformatted:
./teos get block 25 --received --raw
{"}
Also, you can supply the arguments in json format:
./teos get block --json '{"""block_num_or_id""":"""56"""}'
## block number: 56 ## timestamp: 2017-11-29T10:02:18 ## ref block prefix: 273573026
And finally, for each command you can invoke an example showcasing its usage:
./teos get block --example
// Invoke GetInfo command ptree getInfoJson; GetInfo getInfo(getInfoPostJson); cout << getInfo.toStringRcv() << endl; /* output: { " } */ // Use reference to the last block ptree GetBlockJson; GetBlock_poGetBlockJsont_json.put("block_num_or_id", getInfo.get<int>("last_irreversible_block_num")); // Invoke GetBlock command GetBlock GetBlock(GetBlock_post_json); cout << GetBlock.toStringRcv() << endl; /* output: { " } */
Using
teos as EOS API
In our view, the real value of our efforts is actually the library that's behind
teos. As we mentioned before, the
teoslib library acts as a full-blown API for EOS.
Let's consider a code snippet illustrating its usage:
#include <stdio.h> #include <stdlib.h> #include <iostream> #include <string> #include "teoslib/teos_get_commands.hpp" int main() { using namespace tokenika::teos; teosCommand::host = TEST_HOST; teosCommand::port = TEST_PORT; ptree getInfoJson; // Invoke GetInfo command: GetInfo getInfo(getInfoJson); cout << getInfo.toStringRcv() << endl; if (getInfo.isError()) { return -1; } ptree getBlockJson; // Use reference to the last block: getBlockJson.put("block_num_or_id", getInfo.get<int>("last_irreversible_block_num")); // Invoke GetBlock command: GetBlock getBlock(getBlockJson); cout << getBlock.toStringRcv() << endl; if (getBlock.isError()) { return -1; } return 0; }
Here is the outcome of the above code:
{ " }
List of currently supported commands
At this very initial stage of our project, we haven't ported all the commands available in
eosc. Below is the list of commands
teos supports in this release:
version client get info get block get account get code get table create key wallet create wallet list wallet keys wallet import wallet open wallet lock wallet lock all wallet unlock
Building on Ubuntu 16.04
Dependencies
If you are on Ubuntu, the only thing you need to do is have the EOS codebase complied and installed. Just follow the instructions listed in the official EOS repository.
We don't really need any of the code contained there, however we recommend to make use of the fact that EOS automated build script sorts out all the external dependencies (i.e. Boost, OpenSSL, GMP & Secp256k1) that
teos requires. And, unless you want to use our remote server for this purpose, you will need a locally running EOS full node anyway to play with
teos, so no effort is actually wasted.
What you will also need is
cmake version 3.8 or higher. You can verify it using this command:
cmake --version
If your
cmake is lower than 3.8, run this command to remove it:
sudo apt-get purge cmake
And then run these commands to build it from the source code:
version=3.8 build=0 wget tar -xzvf cmake-$version.$build.tar.gz cd cmake-$version.$build ./bootstrap make -j4 sudo make install cd .. && rm -rf cmake-$version.$build
When the build process is complete, open a new terminal window and make sure
cmake is actually version 3.8:
cmake --version
Cloning the source code
Navigate to a location of your choice and clone teos repository:
git clone
Compilation
Navigate to the
teos/teos folder and create a new folder named
build:
cd teos/teos mkdir build && cd build
Run CMake:
cmake ..
Make sure there are no errors, and then proceed with the actual compilation:
make install
As the result of the compilation, you should be able to find those two files in the
install folder:
lib/libteoslib.ais a static library acting as an API for EOS
bin/teosis the CLI executable making use of the above library
Testing on remote sever
Open a terminal window, navigate to the
install/bin folder and run
teos:
./teos eos-testapi.tokenika.io:8888 get info
The above command will connect to EOS full node running on one of our testnet servers.
Testing on localhost
If you have complied the entire EOS codebase and have
eosd running on your local machine, you can also test
teos locally:
./teos localhost get info
Or just:
./teos get info
Building on Windows 10
Prerequisites
We assume that your computer is 64bit.
We also assume that you have Git 2.15.1, CMake 3.10.1 and MS Visual Studio 2017 installed. If not, please follow the instructions available in the above links.
As far as Visual Studio is concerned, you will only need the most basic module called Universal Windows Platform Development (and from optional tools C++ Universal Windows Platform Tools).
Dependencies
For most of its functionality
teos is only dependent on Boost. However, because it also needs to able to generate private keys, additionally
teos is dependent on OpenSSL, GMP and Secp256k1, as specified in EOS documentation.
On Windows the main difficulty is to have all those dependencies as Windows-compiled libraries. Advanced Windows users might want to build everything from source files (it's certainly doable) and ultimately we will aim for that. However, at this stage we recommend using pre-compiled binaries:
- Boost version 1.64 (not higher as it might be incompatible with CMake)
- Windows binaries are available here (for a 64bit machine select
boost_1_64_0-msvc-14.1-64.exe).
- Define an environment variable
BOOST_INCLUDEDIRpointing to the location you've chosen to store the Boost libraries (e.g.
C:\Local\boost_1_64_0).
- Define an environment variable
BOOST_LIBRARYDIRpointing to the
lib64-msvc-14.1folder inside the location you've chosen to store the Boost libraries (e.g.
C:\Local\boost_1_64_0\lib64-msvc-14.1).
- OpenSSL version 1.1.0
- Windows binaries are available here (for a 64bit machine select
Win64OpenSSL-1_1_0g.exe).
- Run the installer and when prompted choose to copy the DLLs to the
bindirectory.
- Define an environment variable
OPENSSL_ROOT_DIRpointing to the location you've chosen to store the OpenSLL libraries (e.g.
C:\Local\OpenSSL-Win64).
- GMP version 4.1 (Please note that MPIR may be considered as a good Windows-ready alternative to GMP, as described here. In a future release of
teoswe will probably switch to MPIR, as it seems to be better suited for Windows).
- Window binaries are available here (choose static ones for MinGW, i.e.
gmp-static-mingw-4.1.tar.gz). You might need 7-Zip to extract them.
- Define an environment variable
GMP_DIRpointing to the location you've chosen to store the GMP libraries (e.g.
C:\Local\gmp-static-mingw-4.1).
- Inside
GMP_DIRnavigate to the
libfolder and rename
libgmp.ato
gmp.libto match Windows conventions.
- Secp256k1 - as there are no pre-compiled binaries available what you'll need to do is cross-compilation between Linux and Windows. This will be described in the next section. But first do these preliminary steps:
- Create a location of your choice where Secp256k1 libraries will be stored. In our case it's
C:\Local\secp256k1.
- Define an environment variable
SECP256K1_DIRpointing to the above directory.
Secp256k1 cross-compilation
Secp256k1 is not available directly on Windows, so the only way to go is apply cross-compilation between Linux and Windows. For this purpose, you'll need access to a Linux environment. In our view, the easiest option is using Windows Subsystem for Linux, and the rest of this section is based on this choice.
If you haven't already done so, enable Windows Subsystem for Linux on your Windows machine, as described in this guide.
While on Windows, run Ubuntu bash and start with making sure you are running Ubuntu 16.04:
lsb_release -a
And before you continue, update & upgrade Ubuntu:
sudo apt update sudo apt upgrade
Make sure that you have these tools installed:
sudo apt install autoconf sudo apt install make sudo apt install libtool sudo apt install mingw-w64
Get a copy of the
secp256k1 repository and navigate to its folder:
git clone cd secp256k1-zkp
Define an environment variable called
installDir. Please note that its value needs to match the Windows destination for your Secp256k1 libraries (in our case
C:\Local\secp256k1), as described in the previous section. So probably you'll need to apply a different path than the one used below, unless you've chosen the same location as we did.
export installDir=/mnt/c/Local/secp256k1/
And now you're ready to build and install Secp256k1:
./autogen.sh ./configure --host=x86_64-w64-mingw32 --prefix=${installDir} make make install
Rename the
libsecp256k1 file to match Windows conventions:
mv ${installDir}/lib/libsecp256k1.a ${installDir}/lib/secp256k1.lib
Copy the
libgcc library from Ubuntu to Windows:
cp /usr/lib/gcc/x86_64-w64-mingw32/5.3-posix/libgcc.a ${installDir}/lib/gcc.lib
And finally, copy the executable
tests which will be used for testing:
cp tests.exe ${installDir}/tests.exe
Before you exit Ubuntu bash, you might want to clean the workspace:
cd .. && rm -rf secp256k1-zkp
Open PowerShell, navigate to
SECP256K1_DIR (in our case it's
C:\Local\secp256k1) and run
tests.exe to make sure Secp256k1 works properly on Windows:
cd C:\Local\secp256k1 ./tests.exe
Cloning the source code
Open Visual Studio 2017 Developer Command Prompt, navigate to a location of your choice and clone teos repository:
git clone
Compilation
Still using Visual Studio 2017 Developer Command Prompt, navigate to the
teos\teos folder and run the following commands:
cd teos\teos mkdir build && cd build cmake -G "Visual Studio 15 2017 Win64" .. msbuild teos.sln msbuild INSTALL.vcxproj
Testing on remote sever
Open Power Shell and navigate to the location of your teos repository, and then inside the repository navigate to the
teos\install\bin folder:
cd teos\install\bin
And now you should be able to run
teos and access EOS full node running on one of our servers:
./teos eos-testapi.tokenika.io:8888 get info
Conclusion
We dare to hope that
teos could become an interesting alternative to the original
eosc CLI, and maybe one day be included as part of EOS codebase. To our knowledge this is the first fully cross-platform EOS client, and also a good foundation for an EOS API.
In our subsequent release, we're going to cover the entire EOS API, so you'll be able to compile and deploy an EOS smart-contract on the testnet via our full node, and do it from any operating system, including Windows.
I have added teos to awesome-eos.
Aiming for Truffle like tooling will add a lot of value to the ecosystem, by saving thousands of hours of engineering time. I'll be keeping an eye on this one, kudos so far :)
I am guessing the instructions for Ubuntu are valid for any Linux as well as MacOS. You'll read me here complaining otherwise ;)
Great work and thanks for sharing guys.
Wow, thats a good step on a way to EOS decentralization.
i really like your post and i enjoy it very with all post 👍
Resteemed.
Wooow Great stuff :)
This is very good for me, because I work on Windows. You are doing a great work. Thanks.
I love new toys! Thank you!
incredible work! bravo... 👍🏼
I like your work @tokenika and I'll give teos a try in my future dev. Thanks! | https://steemit.com/eos/@tokenika/introducing-teos-an-alternative-eos-command-line-interface-and-publicly-available-endpoint | CC-MAIN-2019-04 | refinedweb | 2,682 | 62.38 |
Edit
Then navigate to
Using the
Simple HTTP web serverSimple HTTP web server
ConceptsConcepts
- Use Deno's integrated HTTP server to run your own web server.
OverviewOverview
With just a few lines of code you can run your own HTTP web server with control over the response status, request headers and more.
ℹ️ The native HTTP server is currently unstable, meaning the API is not finalized and may change in breaking ways in future version of Deno. To have the APIs discussed here available, you must run Deno with the
--unstableflag.
Sample web serverSample web server
In this example, the user-agent of the client is returned to the client:
webserver.ts:
// Start listening on port 8080 of localhost. const server = Deno.listen({ port: 8080 }); console.log(`HTTP webserver running. Access it at:`); // Connections to the server will be yielded up as an async iterable. for await (const conn of server) { // In order to not be blocking, we need to handle each connection individually // in its own async function. (async () => { // This "upgrades" a network connection into an HTTP connection. const httpConn = Deno.serveHttp(conn); // Each request sent over the HTTP connection will be yielded as an async // iterator from the HTTP connection. for await (const requestEvent of httpConn) { // The native HTTP server uses the web standard `Request` and `Response` // objects. const body = `Your user-agent is:\n\n${requestEvent.request.headers.get( "user-agent", ) ?? "Unknown"}`; // The requestEvent's `.respondWith()` method is how we send the response // back to the client. requestEvent.respondWith( new Response(body, { status: 200, }), ); } })(); }
Then run this with:
deno run --allow-net --unstable webserver.ts
Then navigate to in a browser.
Using the
std/http library
If you do not want to use the unstable APIs, you can still use the standard library's HTTP server:
webserver.ts:
import { serve } from ""; const server = serve({ port: 8080 }); console.log(`HTTP webserver running. Access it at:`); for await (const request of server) { let bodyContent = "Your user-agent is:\n\n"; bodyContent += request.headers.get("user-agent") || "Unknown"; request.respond({ status: 200, body: bodyContent }); }
Then run this with:
deno run --allow-net webserver.ts | https://deno.land/manual@v1.9.2/examples/http_server | CC-MAIN-2022-40 | refinedweb | 354 | 59.09 |
Pardon my ignorance but how much work would be involved in getting a makefile created with the introjucer to build a project on OSX? I usually run xcodebuild from the command line but the builds take forever. I’m wondering if a simple makefile would be faster. I know it’s still going to use the same compiler but I would like to at least see if I can speed up build times at all, even a small decrease would be most welcome.
Linux makefile on OSX?
I would support this as well. The memory usage of Xcode is very high, especially during the compilation of the amalgamated files.
Ideally, JUCE could be built as a number of .dylib and .a binaries corresponding to each JUCE module. The libraries could then be linked to whatever application you’re building instead of be compiled each time. Boost and SFML do this, and the result is very convenient. This would free us from building on bloated IDEs.
Yes… that’s why I ditched them a long time ago. Xcode can build the current version in just a few seconds thanks to the module structure. Attempting to use libs would be slower, less flexible, and far more complicated to manage.
What do you mean by “slower”? Surely, compiling + linking can’t be any faster than just linking. Or do you mean slower to set up builds? I would say that in an environment with a good package manager (Linux, Mac OS X with Fink or Homebrew, and MAYBE Windows with Chocolatey or something) would be much easier to manage cleaner cross platform build systems. And since code speaks 1000 [size=50]64-bit[/size] words…
[code]# brew install juce
g++ -ljuce -L/usr/local/include …
[/code]
MyProcessor.cpp
#include <juce/audio_processors/audio_processors.hpp> #include ...
However, without a universal method of easily distributing binaries to different platforms, I would agree that the build for projects using JUCE would be more complex. Unless boilerplate code with flexible build scripts would be provided to accommodate the change.
To use the modules involves popping a few cpp files into your project and building. No messing about with package managers, or makefiles, or different projects on different OSes, or any of that tedious crap. To build the entire set of juce modules on my laptop takes about 10-20 seconds, and obviously that’s only necessary when you’re doing a clean build.
So by going to the enormous hassle of building a dozen separate libraries on 2-3 different OSes using different build tools, and maintaining different versions of those libraries for all your different apps (assuming your apps use different configs) will end up saving you a few seconds each time you do a clean rebuild.
There’s just no contest IMHO. I’ve seen and used a lot of build systems over the years, and come to the conclusion that the best build system is NO build system at all, just use the raw cpp files. If I could make the library headers-only, I would!
As much as I enjoy the banter back and forth between users, I’m still at a lose as to whether it would be relatively straightforward to get a Juce linux make file working on OSX? I know very little of OSX, does it use similar library/linking conventions to Linux, or are they quite different?
Re: linux makefiles, don’t waste your time. Even if it was straightforward (and it probably isn’t), there’s no reason to think it’d be any safter than letting xcode run the build. Either way, the same compiler would get called a bunch of times, and that’s what takes the time.
Ways to speed up your build: Use modules. Use LLVM. Restructure your files.
I said it already but it’s worth repeating.
The “unity build” source file organization style (i.e. how JUCE modules are structured) and abandonment of static libraries, is a significant step forward in the evolution of coding style. Doing away with setting preprocessor options (for static libs) in the IDE / Makefile settings is a relief.
Not having to make individual header files stand alone (#include this, #include that), and having all of the #include dependencies listed exactly once (in the module source and/or header) is great. Being able to see at a glance what #include dependencies a module has is very useful.
Plus, the build times are much faster. Especially if your project has a number of modules equal to or less than the number of cores on your system.
Why people want to use CMake, M4, autoconf, autotools, or any of the myriad of bulky, hard to use systems for building is beyond me.
I’d also note that whenever I expose the mainstream C or C++ community to these ideas they are always met with ridicule. Fuck them. | https://forum.juce.com/t/linux-makefile-on-osx/9565 | CC-MAIN-2018-34 | refinedweb | 816 | 71.55 |
Chapter 7
Basic Tools with Maya Commands
Author
Adam Mechtley
Project
Tool GUI Template, Polygon Primitives tool, Pose Manager tool
Example Files
optwin.py
polywin.py
posemgr.py
posemgr1.ma
posemgr2.ma
Synopsis
This chapter introduces the basics of creating a Maya GUI window. Readers will use the cmds module to create a base window class with a menu and buttons that they can extend for their own tools. We introduce additional GUI commands available in the cmds module to extend this base class and implement radio buttons, input fields, and color pickers in a simple polygon creation tool. Finally, we cover some advanced techniques for tool development including code organization, data serialization with the pickle module, and working with files in a Maya GUI. The final code example is a basic tool that can copy, paste, save, and load poses on characters.
Resources
functools Documentation
pickle Documentation
Reading and Writing Files
Other Notes
None
Errata for the First Edition
In the optwin.py module (AR_OptionsWindow class), the call to tabLayout should have set the childResizable flag. The file available for download reflects this change.
Hello,
I am trying to complete the exercise on “Using functools Module” but I keep getting an error that reads:
#Error: keyword cannot be an expression
Any help here would be appreciated. I know its a small issue but it will bother me until I get it:)
Thanks for the wonderful book.
Michael
Hi Michael! Can you be a little more specific about precisely where you get the problem, what line, what your code looks like, etc.? I tried the example in the book again and get no problems.
Hi, I actually posted a thread on CG talk with several issues I’m having in chapter 7.
Thought to post it here just aswell if there could be the slightest chance you’d answer 😉
Thanks, and the book is great! Took me a while to get started for real though.
Hi
I want to start of by congratulating the authors to a fantastic book! I have read it front to back and feel that I master a lot of python programming by now.
I am currently working on some simulations in Maya for which I need some graphical interfaces. By using your optwin class I was able to extend it to my own class and build a custom window, but now I have the problem that I want to open a second window when the first one is closed. And even though I put an identic call to a new window it only pops open for the fraction of a second before it is closed by it self. Any idea what is causing this?
I call the window using:
win = cellWin.AR_CellOptionsWindow.showUI(self)
and then one more time at another place in my script.
Further more, I want to start a loop when the window is closed where I basically iterate over the full timescale of the scen like
for t in range(tmax):
currentTime(t)
# do something
but even though this loop is called after the close button on the window is pressed, the loop starts before the window is closed and remains open until the end of the loop. Any clue why this is happening?
Very grateful for any advice
Best Regards
Emil Ljungberg
Lund University, Sweden
Hi,
Can you explain a little more about why we need to use
mel.eval(`python(“import maya.cmds”);`)
at the bottom of page 203? I just don’t get it. Why couldn’t we just do
import maya.cmds
?
thanks!
Hi John! As mentioned in the text, passing a string command as a callback executes the statements in the __main__ module. If you want to make sure maya.cmds exists in __main__, it needs to be imported from within __main__. You can either do that by including the import statement in your callback, or by using MEL’s python command to import it in __main__ in one place. If you were just executing this code from the Script Editor then you’d be fine, but the assumption here is that you’re working on this code in a module of your own.
I am trying to follow from Extending GUI Classes, but when I execute AR_PolyOptionsWindow lines i get this:
# Error: global name ‘cmds’ is not defined
# Traceback (most recent call last):
# File “”, line 8, in
# File “D:\MayaPython\MayWorkspace\HelloPyDev\optwin.py”, line 7, in showUI
# win.create()
# File “D:\MayaPython\MayWorkspace\HelloPyDev\optwin.py”, line 24, in create
# if cmds.window(self.window, exists=True):
# NameError: global name ‘cmds’ is not defined #
Why is that?
Nevermind
Just a very stupid move, great book! | http://www.maya-python.com/chapter-7/ | CC-MAIN-2020-24 | refinedweb | 781 | 70.43 |
How to make a colorful spiral of turtles with turtle stamp.
How to make a spiral of turtles
So this is pretty simple, but you'll need to know python basics first.
If you don't know the language check out this website:
Heres a picture of the final result:
1.
First create a repl with python with turtle.
Then import the turtle and random modules.
import turtle import random
2.
Make your turtle a turtle shape.
turtle.shape('turtle')
Lift your turtle so it doesn't draw one straight line.
turtle.penup()
3.
Create a variable for the distance your turtle moves(paces).
Twenty paces is the minimum but you can do more.
paces = 20
4.
Start a loop to repeat the stamping code.
You can repeat any number of times.
for i in range(50):
Assign a random function to a variable called 'random_red' to pick a random red value.
Assign a variable for blue and green as well.
random_red = random.randint(0, 225) random_blue = random.randint(0, 255) random_green = random.randint(0, 225)
5.
Set the turtle color as the randomly chosen rgb (red, green, and blue) values.
turtle.color(random_red, random_blue, and random_green)
Now stamp your turtle.
turtle.stamp()
6.
Add more paces
Once again, 3 is the minimum but, you can do more.
paces += 3
Move forward by the new number of paces.
stamp.forward(paces)
Finally, slightly turn your turtle right as it moves to start spiraling.
Please put the turn as 25, if you put another number it might not become a spiral.
stamp.right(25)
Finish
Thats it, you have made a spiral of turtles.
I hope you enjoyed this, if you have any advice please comment.
also if you use a # in markdown you need to put a whitespace after the # and before the text if thats what you where trying to achieve
also the
finish at the end needs a whitespace as well if you where doing the big text thingy
Awesome one keep it up | https://replit.com/talk/learn/How-to-make-a-colorful-spiral-of-turtles-with-turtle-stamp/50641?order=new | CC-MAIN-2022-27 | refinedweb | 336 | 76.52 |
public class test { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub exByte(); } public static void exByte(){ double d=3.4444d; d=34444-04;//exponential notation System.out.println("d is"
why above gives compile error.
below why giving false.
Open in new window
float f=3.4444f;
d=34444-04;//subtraction notation
System.out.println("d is"+d);
System.out.println("f is"+f);
System.out.println("d==f is"+ (d==f));
d=34444e-04;//exponential notation
System.out.format("d is %.16f\n",d);
System.out.format("f is %.16f\n",f);
System.out.println("d==f is"+ (d==f));
d is34440.0
f is3.4444
d==f isfalse
d is 3.4444000000000000
f is 3.4444000720977783
d==f isfalse
d is34440.0
d is3.4444E8
Save hours in development time and avoid common mistakes by learning the best practices to use for JavaScript.
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
exByte();
}
public static void exByte(){
float f=3.4444f;
double d=3.4444d;
d=34444-04;//exponential notation
System.out.println("d is"+d);
System.out.println("d==f is"+ d==f);
System.out.println("d==f is"+ f==d);
}
}
both of above bolded lines giving error incomatible types even high precision double assigned to low precision float or vice versa.
please advise
Those lines should have been
System.out.println("d==f is"+ (d==f));
System.out.println("d==f is"+ (f==d));
what it mean by subtraction notation
why it gave output
d is34440.0
Open in new window
above gives
Open in new window
how it exponential operator different from addition or subtraction. please advise
Exponential (a.k.a. scientific) notation for float or double literals can be expressed using E or e
Open in new window
i got output as
Open in new window
means like below right
d3=34444*(10 to the power of -4)
which is moving the decimal to left side 4 digits as below
d3 is 3.4444000000000000
i wonder why below line giving compilation error
//float f3 = 34444e-04;//exponential notation
please advise
a double approximates 3.4444 as
3.444399999999999906208358
a float approximates 3.4444 as
3.4444000720977783203125
you can explicitly convert
float f3 = (float)34444e-04;
or start with a float
float f3 = 34444e-04f;
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial | https://www.experts-exchange.com/questions/28632970/double-exponentioal-notation.html | CC-MAIN-2018-47 | refinedweb | 426 | 53.98 |
Error in sort words dictionary by using AVL treeHelp me fix this error.
[u][b]file .h[/b][/u]
[code]#ifndef BST_H
#define BST_H
#include <iostre...
Having problems with the problem sorted in the order in the dictionaryGreetings!I have a problem with the problem sorted in dictionary order, hope you help!
[code]#inclu...
Difficult to write a method that returns the parent NodeHi everyone, I have written a small program on "binary search tree".But I'm having difficulty in wri...
How to fix this error ( binary search)Thanks for helping. Have a nice day to you! Chervil
How to fix this error ( binary search)Hello all pro. I write this program by visual studio C++ 2010
[code]
#ifndef BinarySearch_H
#defi...
This user does not accept Private Messages | http://www.cplusplus.com/user/viet10932/ | CC-MAIN-2016-07 | refinedweb | 129 | 69.07 |
Add hello apps to uClinux-dist (recommended)
Please follow the doc to add your user apps,
nios2-linux/uClinux-dist/Documentation/Adding-User-Apps-HOWTO
But change foo to hello.
cd nios2-linux/uClinux-dist
mkdir user/hello
Edit user/Makefile,
add a line in the sorting order.
dir_$(CONFIG_USER_HELLO_HELLO) += hello
Edit user/Kconfig
after menu "Miscellaneous Applications".
config USER_HELLO_HELLO
	bool "hello"
	help
	  This program prints hello world.
Create user/hello/Makefile
EXEC = hello
OBJS = hello.o

all: $(EXEC)

$(EXEC): $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS) $(LDLIBS)

romfs:
	$(ROMFSINST) /bin/$(EXEC)

clean:
	-rm -f $(EXEC) *.elf *.gdb *.o

(As usual for make, the command lines under each target must start with a tab.)
Create user/hello/hello.c.
#include <stdio.h>

int main(void)
{
	printf("hello world\n");
	return 0;
}
Then run make menuconfig and select [*] hello. Now you can run make. Boot nios2 uClinux and run "hello".
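For reference, the selection is wired through the Kconfig symbol: after make menuconfig, the generated uClinux-dist/.config should contain a line like the sketch below, and the dir_$(CONFIG_USER_HELLO_HELLO) line you added to user/Makefile keys on it to descend into user/hello.

```text
# fragment of uClinux-dist/.config after selecting [*] hello
CONFIG_USER_HELLO_HELLO=y
```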
To compile a simple program for uClinux, just add -elf2flt to the link flags.
Create a file, hello.c
nios2-linux-uclibc-gcc hello.c -o hello -elf2flt
The compiled object format is FLAT.
You may check it with,
nios2-linux-uclibc-flthdr hello
hello
Magic: bFLT
Rev: 4
Build Date: Mon Jun 5 21:49:44 2006
Entry: 0x40
Data Start: 0x4a8c
Data End: 0x5c48
BSS End: 0x7ca8
Stack Size: 0x1000
Reloc Start: 0x5c48
Reloc Count: 0x11e
Flags: 0x1 ( Load-to-Ram )
Then copy hello to the romfs/bin dir. Rebuild the kernel image for initramfs.
cp hello ~/nios2-linux/uClinux-dist/romfs/bin
cd ~/nios2-linux/uClinux-dist
make
Boot nios2 uClinux, and run
hello
The default stack size of an application is 4 KB; you can change it with the -elf2flt="-s <new stack size>" option, e.g., nios2-linux-uclibc-gcc hello.c -o hello -elf2flt="-s 16000"
will give a stack size of 16000 bytes (about 16 KiB). If you use pthreads, only the main thread gets this size. Subsequent threads get the default stack size, so you will want to change that size at thread creation. Otherwise, you're in for some stack-blowing fun, which is hard to debug.
The default include dir search path is /opt/nios2/include (or staging_dir/include if you use buildroot)
The default library search path is /opt/nios2/lib (or staging_dir/lib if you use buildroot)
The default apps library is uClibc's libc, so you don't need -lc .
If you use math, you need -lm .
If you use pthread, you need -lpthread .
If you use crypt, you need -lcrypt .You will need those include headers, too.
The order of libraries is important, the linker will search only one pass by default.
example apps to use " button pio.
We built the uClibc in the toolchain. The headers and libs of uClibc will be included automatically. So you don't need "-I" and "-L" for uClibc.
When you need other lib, eg libssl, the uClinux-dist/staging directory will be created for libraries installation. This is NOT the staging_dir when we build the toochain.
Please look at the Makefile of uClinux-dist/lib/libssl. We defined "INSTALL_PREFIX=$(STAGEDIR)".
This is the destination of library installation. We would install all libraries there.
The header files of libraries will be installed in nios2-linux/uClinux-dist/staging/usr/include .
The libraries objects will be installed in nios2-linux/uClinux-dist/staging/usr/lib .
If you want to compile a program outside uClinux-dist, you will need to include the header and lib paths. Like this,nios2-linux-uclibc-gcc abc.c -o abc -elf2flt -I/home/hippo/nios2-linux/uClinux-dist/staging/usr/include -L/home/hippo/nios2-linux/uClinux-dist/staging/usr/lib ....
If you add apps to uClinux-dist, these rules are already defined in the CFLAGS and LDFLAGS in the file uClinux-dist/vendors/Altera/nios2/config.arch .
Sometimes, the application executable will be installed in nios2-linux/uClinux-dist/staging/usr/bin. There will be romfs target in apps Makefile to copy the executable to uClinux-dist/romfs/bin or usr/bin.
Please check info of "make". `$(CC) $(LDFLAGS) N.o $(LOADLIBES) $(LDLIBS)'.
Note, you are doing in user space, with uclibc and included from /opt/nios2/include. It is not kernel space. You can not use interrupt. You can not use nios2 HAL, either.
To do decent I/O programming you should either create an appropriate Kernel driver, map the I/O addresses into your application with portable means like mmap() (see Accessing hardware registers from user space programs ) or use uio.
If you really want to work around the ways of the OS and create non-portable code, you should consider the following instructions (while of course the Build System providesdecent I/O access functions for Kernel drivers out of the box).
You should know about the cache in Nios II. The first 2GB of address space is cached. The second 2GB is non-cached. These are not two seperate memory spaces or anything so there is a total of 2GB of address space (mirrored memory). This only applies for Nios II Full version with Data Cache. Nios II Standard version is uncached, so there should be no problems.
In other words address 0x00000000 (cachable) maps to address 0x8000000 (non-cachable)
" " " " " " 0x00000001 " " " " " " " " 0x8000001 (and so on .....)
So use the first 2GB for non-peripheral access (local memory), and the second 2GB for peripherals
You can define memory pointer access, and you can make it uncached by setting address bit 31.
eg, 0x00004000 -> 0x80004000
Or use port address defined in nios2_system.h, eg na_button_pio.
#include </home/hippo/uClinux-dist/linux-2.6.x/include/nios2_system.h>
As an alternative, you can use these two defines (which are always uncached), they are similar to the Nios2-IDE functions :
#define IORD(address,offset) (*(volatile unsigned *)(((address)|0x80000000)+4*(offset)))
#define IOWR(address,offset,value) (*(volatile unsigned *)(((address)|0x80000000)+4*(offset)))=(value) | https://community.intel.com/t5/FPGA-Wiki/Compile-Hello/ta-p/735343 | CC-MAIN-2021-10 | refinedweb | 954 | 58.89 |
Determine information about DatabaseEngines Foundation
At runtime of a program it is sometimes necessary to determine information about loaded DBEs, work areas, or individual fields in work areas. Xbase++ provides the four functions listed in the following table for this:
For a deeper understanding of these functions, knowledge of the internal mechanisms used for opening files with USE or DbUseArea() is required. These mechanisms are not immediately recognizable at the language level. A file can only be opened when at least one DBE is loaded that provides the DATA component for the requested file format. Only after such a component is available can a file be opened in a work area, otherwise a runtime error occurs. Whether or not a DBE is currently available can be determined at runtime using the function DbeList(). DbeList() returns a two column array: the first column contains the names of DBEs and the second column contains logical values representing the "hidden-flag" (see previous section).
Opening a file using the current DBE creates an instance of the DBE which is called a "database object" (DBO). The DBO makes use of all the characteristics of the current DBE and it is actually the database object that opens and manages files. A DBO represents the work area where the files are opened.
It is helpful to distinguish between the DatabaseEngine, which provides the functionality that allows files to be opened in a work area and the database object which exists only while the file is open in the work area. The database object is created by the DatabaseEngine when the file is opened. It manages the file and is automatically discarded when the file is closed.
The function DbeInfo() provides information about the current DBE. Therefore, it can only be executed when a DBE is loaded. It is similar to the functions DbInfo() and FieldInfo() that can only be called when a file is open in the current work area. Otherwise a runtime error occurs.
As well as returning information, DbeInfo() also allows a DBE to be configured in specific ways. Several settings of a DBE are changeable and can be redefined using DbeInfo(). As an example, the default extension for files can be changed:
#include "Dmlb.ch"
#include "DbfDbe.ch"
DbeLoad( "DBFDBE", .T.)
DbeLoad( "NTXDBE", .T.) // create the compound DBE
DbeBuild( "DBFNTX", "DBFDBE", "NTXDBE" ) // DBFNTX
// default extension DBF->FBD
// for database files
DbeInfo( COMPONENT_DATA , DBE_EXTENSION, "FBD" )
// default extension NTX->XTN
// for index files
DbeInfo( COMPONENT_ORDER, DBE_EXTENSION, "XTN" )
DbCreate( "Temp", { { "LName", "C", 20, 0 }, ;
{ "FName", "C", 20, 0 }, } )
USE Temp // create database and
INDEX ON Field->LName TO TempA // index files
INDEX ON Field->FName TO TempB
CLOSE Temp
// display file names. Result:
AEval( Directory( "Temp*.*" ), ; // Temp.FBD
{|a| QOut(a[1]) } ) // TempA.XTN
// TempB.XTN
In this example, the default extensions for two different kinds of files are redefined using DbeInfo(). This affects the files managed by the DATA component of the DBFNTX DBE (the DBF file is created as a FBD file) and the files which are managed by the ORDER component (the NTX files are created as XTN files). All this happens in just two lines of the example:
DbeInfo( COMPONENT_DATA , DBE_EXTENSION, "FBD" )
DbeInfo( COMPONENT_ORDER, DBE_EXTENSION, "XTN" )
A special aspect of DbeInfo() is shown: if the function is called with parameters, the first parameter must always be a #define constant from the file DMLB.CH specifying one of the four components that can be contained in a DBE. Also, the second parameter must be a #define constant. There some constants that are universally valid and can be used with every DBE and other constants that may be used only with a specific DBE. The difference can be recognized by the prefix of the #define constant. The prefix DBE_ identifies universally valid constants and the prefix DBFDBE_ identifies constants which are only valid for the DBFDBE (more precisely, for the DATA component which manages DBF files). Constants containing the prefix DBFDBE_ are defined in the #include file DBFDBE.CH. The third parameter specifies the value which is to be set for the specific setting (in the example, the file extension) of a DBE.
Not all settings of a DBE are changeable, so the third parameter is only processed by DbeInfo() when the DBE allows the corresponding setting to be changed. The following table gives an overview of the universally valid settings that exist for all DBEs. The only setting which can be changed is the file extension:
Along with these general constants, most DatabaseEngines have specific #define constants that can only be used for the specific DBE (more precisely, for a specific component). In the chapter "Database Engines" DBE specific constants for DbeInfo() are described.
When a file is opened in a work area, a database object (DBO) is created by the current DatabaseEngine to manage the file open in the work area. The DBO represents the work area and the function DbInfo() can read information about the DBO and can change settings of the DBO. DbInfo() requires that a file be open in the corresponding work area.
DbInfo(), as well as DbeInfo(), receives parameters that are constants defined in an #include file. Universally valid constants can be found in the file DMLB.CH and are listed in the next table:
The universally valid #define constants for DbInfo() starts with the prefix DBO_ (for database object). There are also constants which can be used only with DBOs created by a specific DatabaseEngine. An example of constants that can be passed to DbInfo() is given in the following program code:
#include "Dmlb.ch"
#include "DbfDbe.ch"
DbeLoad( "DBFDBE", .T.)
DbeLoad( "NTXDBE", .T.) // create DBFNTX
DbeBuild( "DBFNTX", "DBFDBE", "NTXDBE" ) // compound DBE
USE Customer ALIAS Cust
? DbInfo( DBO_ALIAS ) // result: Cust
? DbInfo( DBO_FILENAME ) // result: C:\DATA\Customer.DBF
// file handles
? DbInfo( DBFDBO_DBFHANDLE ) // result: 8
? DbInfo( DBFDBO_DBTHANDLE ) // result: 9
The example illustrates that the first parameter passed to DbInfo() is a #define constant which is either a universally valid constant (prefix DBO_) or a specific DBE constant. In the case of the DBFDBE, the specific constants for the function DbInfo() start with the prefix DBFDBO_ and are contained in the #include file DBFDBE. (To summarize: constants which contain DBE_ are valid for a DatabaseEngine, and for the function DbeInfo(). Constants which contain DBO_ are valid for database objects and for the function DbInfo()). The constants for DbInfo() that are dependent on the current DatabaseEngine are listed in the chapter "Database Engines".
A DBO is initialized with all the current settings of the DBE when the file is opened. If changes are later made to the DBE using DbeInfo(), all DBOs remain unaffected by the change. This means that all work areas where files are open are not affected by changes made to a database engine. Such changes affect only those work areas where a file is opened after the change is made.
As soon as a file is opened in a work area, field variables (fields) exist within this work area. Similar to Clipper, information about a field can be determined using the functions FieldName() or FieldPos(). Xbase++ also includes the function FieldInfo() to read or change information about an individual field in a work area. The function FieldInfo() behaves in a manner similar to DbeInfo() and DbInfo(), and takes a #define constant as the second parameter. The valid constants for FieldInfo() are listed in the following table:
FieldInfo() provides important pieces of information about fields in the database such as the length and the number of decimal places. Example:
(In this example, it is assumed that the DBFDBE is loaded)
#include "Dmlb.ch"
// create database
DbCreate( "Part", { { "PartNo" , "C", 6, 0 }, ;
{ "Part" , "C", 20, 0 }, ;
{ "Price" , "N", 8, 2 } } )
USE Part
? FieldInfo( 3, FLD_LEN ) // result: 8
? FieldInfo( 3, FLD_DEC ) // result: 2
? FieldInfo( 1, FLD_LEN ) // result: 6
? FieldInfo( 2, FLD_LEN ) // result: 20
The first parameter of the function FieldInfo() is the ordinal position of a field (as returned by FieldPos()) and the second parameter is a #define constant designating what information is being requested. To determine the length of a field or its decimal places, two simple pseudo functions can be defined for translation by the preprocessor into calls to FieldInfo():
#xtranslate FieldLen( <nPos> ) => FieldInfo( <nPos>, FLD_LEN )
#xtranslate FieldDec( <nPos> ) => FieldInfo( <nPos>, FLD_DEC )
The data type of a field is also important information and can be determined by passing the constant FLD_TYPE or FLD_NATIVETYPE. In both cases FieldInfo() returns a character value identifying the data type. Using the two constants, the data type which is available to be manipulated by the appropriate Xbase++ commands and functions can be distinguished from the original data type stored in the database. They are often, but not always identical. For example, at the language level of Xbase++ only a single numeric type exists. When numbers are stored in fields, however, integers and floating point numbers might be treated differently. Xbase++ recognizes the different representations for numbers and other data internally and distinguishes between data types that can exist on the language level and on the database level. Correspondingly, FieldInfo() can read the data type of a field as it exists on the language level (FLD_TYPE) or on the database level (FLD_NATIVETYPE). To determine the data type of a field on the Xbase++ language level, the constant FLD_TYPE is passed to FieldInfo(). The data type is returned by FieldInfo() equivalent to the return values of Valtype() or Type().
The numeric identification of data types uses constants defined in the #include file TYPES.CH. The constants from the following table are available for determining data types using FieldInfo() along with FLD_TYPE_AS_NUMERIC and FLD_NATIVETYPE_AS_NUMERIC: | https://doc.alaska-software.com/content/xppguide_h2_determine_information_about_databaseengines.html | CC-MAIN-2022-05 | refinedweb | 1,598 | 52.9 |
Perlbal::Manual::Plugins - Creating and using plugins
Perlbal 1.78.
How to create and use Perlbal plugins.
Perlbal supports plugins through modules under
Perlbal::Plugin::* that implement a set of functions described further down this document.
Some of these plugins are shipped with Perlbal itself, while others can be found on CPAN (you can also create your own plugin and have it available only locally).
In order to use a plugin you first have to load it; on your Perlbal's configuration file add something like:
Load MyPlugin
This loads plugin
Perlbal::Plugin::MyPlugin.
Each plugin will have its own way of being configured (some don't require any configuration at all), so you'll have to refer to their documentation (or code).
Typically (but not always), a plugin will allow you to set additional parameters to a service; for instance:
LOAD MaxContentLength CREATE SERVICE example SET max_content_length = 100000 SET plugins = MaxContentLength
max_content_length is a parameter of Perlbal::Plugin::MaxContentLength.
If you're worried that two plugins may have the same parameter, of if you simply want to define those variables all in the same spot and thus will be doing it outside of the plugin's context, you can use the more verbose syntax:
SET my_service.my_plugin.my_variable = my_value
Notice that some plugins need to be stated service by service; hence, this line:
SET plugins = MaxContentLength
The
plugins parameter (a list of strings separated by commas or spaces) defines which plugins are acceptable for a service.
If you try to load a plugin and receive the following error message:
ERROR: Can't locate Perlbal/Plugin/MyPlugin.pm in @INC
That means that either the plugin isn't installed or perlbal couldn't find it. (perhaps it is installed in a different version of perl other than the one used to run perlbal?)
A Perlbal plugin consists in a package under the
Perlbal::Plugin namespace that implements a number of functions:
unregister,
load and
unload.
These steps and functions (plus some helper functions you can define or use) are described below.
PLEASE KEEP IN MIND: Perlbal is a single-process, asynchronous web server. You must not do things in plugins which will cause it to block, or no other requests can be served at the same time.
While there are many ways of creating a package, we'd recommend that you use something to do it for you. A good option is Module::Starter.
(note: if you really want to, you can just create a file with your package and use it; by using something like Module::Starter you're making sure that several pitfalls are avoided, lots of basic rules are followed and that your package can easily be made available as a distribution that you can deploy on any machine - or, if you feel so inclined, upload to CPAN - in a simple way)
Let's assume you want to create a plugin that checks requests for a
X-Magic header and, if present, add an header
X-Color to the response when serving a file. Let's assume your plugin will be called
Perlbal::Plugin::ColorOfMagic.
Having installed Module::Starter, here's a command you can run that will create your package for you:
$ module-starter --module=Perlbal::Plugin::ColorOfMagic --author="My name" --email=my@email.address
That should create a file tree that you can get better acquainted with by reading Module::Starter's fine documentation. For this example, the file you really need should now reside in
lib/Perlbal/Plugin/ColorOfMagic.pm.
This file probably starts with something like the following:
package Perlbal::Plugin::ColorOfMagic; use warnings; use strict;
You'll have to add a few functions to this file. These are described below.
(note: upon creating this package, some boilerplate documentation will also be present on the file; you should revise it and even remove bits that don't feel right for your plugin)
register is called when the plugin is being added to a service. This is where you register your plugin's hooks, if required (see Perlbal::Manual::Hooks for the list of existing hooks and further documentation on how they work).
For the sake of our example (
Perlbal::Plugin::ColorOfMagic, see above), what we want to do is register a hook that modifies the response headers; that means we want a
modify_response_headers hook.
Here's what you'd do:
sub register { my ($class, $service) = @_; my $my_hook_code = sub { my Perlbal::ClientHTTPBase $cp = shift; if ( $cp->{req_headers}->header('X-Magic') ) { $cp->{res_headers}->header( 'X-Color', 'Octarine' ); } return 0; }; $service->register_hook('ColorOfMagic','modify_response_headers', $my_hook_code); }
Inside
register, we're calling
register_hook to register our
ColorOfMagic
modify_response_headers hook. Its code, that will run "when we've set all the headers, and are about to serve a file" (see Perlbal::Manual::Hooks), receives a Perlbal::ClientHTTPBase object (you can see what kind of object your hook will receive on Perlbal::Manual::Hooks). We're checking to see if
X-Magic is defined on the request and, if so, we're setting header
X-Color on the response to
Octarine.
Notice that the hook ends with
return 0. This is because returning a true value means that you want to cancel the connection to the backend and send the response to the client yourself.
unregister is called when the plugin is removed from a service. It's a standard good practice to unregister your plugin's hooks here, like so:
sub unregister { my ($class, $service) = @_; $service->unregister_hooks('ColorOfMagic'); return 1; }
You can also use
unregister_hook to unregister one single hook:
$service->unregister_hook('ColorOfMagic', 'modify_response_headers');
load is called when your plugin is loaded (or reloaded).
This is where you should perform your plugin's initialization, which can go from setting up some variables to registering a management command (to register commands see the documentation for
manage_command further down this document).
my $color; sub load { my $class = shift; $color = 'Octarine'; return 1; }
load must always be defined, but if you really don't need it you can have it simply returning a true value:
sub load { return 1; }
unload is called when your plugin is unloaded. This is where you should perform any clean up tasks.
unload must always be defined, but if you really don't need it you can have it simply returning a true value:
sub unload { return 1; }
Don't forget to call
unregister_global_hook if you have registered any (see the documentation for
manage_command further down this document and you'll see what we're talking about).
load is called when the plugin is loaded, while
register is called whenever the plugin is set for a service.
This means that you should use
load for anything that is global, such as registering a global hook, and you should use
register for things that are specific to a service, such as registering service hooks.
dumpconfig is not required.
When managing Perlbal (see Perlbal::Manual::Management) you can send a
dumpconfig command that will result in a configuration dump.
Apart from the global configuration, each plugin that implements a
dumpconfig function will also have that function called.
dumpconfig should return an array of messages to be displayed.
sub dumpconfig { my ($class, $service) = @_; my @messages; push @messages, "COLOROFMAGIC is $color"; return @messages; }
Again,
dumpconfig is not required, so implement it only if it makes sense for your plugin.
Adding a tunable will allow you to set its value within each plugin:
LOAD MyPlugin CREATE SERVICE my_service SET my_new_parameter = 42 SET plugins = MyPlugin ENABLE my_service
add_tunable can be used by plugins that want to add tunables so that the config file can have more options for service settings.
sub load { Perlbal::Service::add_tunable( my_new_parameter => { check_role => '*', check_type => 'int', des => "description of my new parameter", default => 0, }, ); return 1; }
check_role defines for which roles the value can be set (
reverse_proxy,
web_server, etc). A value of
* mean that the value can be set for any role.
The acceptable values for
check_type are
enum,
regexp,
bool,
int,
size,
file,
file_or_none and
directory_or_none. An Unknown check_type error message will be displayed whenever you try to set a value that has an unknown
check_type.
check_type can also contain a code reference that will be used to validate the type.
check_type => sub { my $self = shift; my $val = shift; my $emesg = shift; ... },
This code reference should return a true or false value. If returning false, the contents of
$emesg (which is passed as a reference to the function) will be used as the error message.
Here's a better explanation of the acceptable values for
check_type:
Boolean value. Must be defined and will be checked as a Perl value.
The value needs to be defined and the content must be an existing directory (validated against perl's -d switch).
An array reference containing the acceptable values:
check_type => [enum => ["yellow", "blue", "green"]],
A filename, validated against perl's -f switch.
A filename, validated against perl's -f switch, or the default value.
An integer value, validated against
/^\d+$/.
Regular expression.
The correct form of setting a regexp tunable is by setting it as an array reference containing the type (
regexp), the regular expression and a message that can explain it:
check_type => ["regexp", qr/^\d+\.\d+\.\d+\.\d+:\d+$/, "Expecting IP:port of form a.b.c.d:port."],
A size, validated against
/^(\d+)[bkm]$/.
Perlbal catches unknown configuration commands and tries to match them against hooks in the form of
manage_command.*.
Let's say that you want to set a management command
time that will allow you to see what time it is on the server.
sub load { Perlbal::register_global_hook('manage_command.time', sub { my $time = localtime time; return [ "It is now:", $time ]; }); return 1; }
If you want to display a text message you should return an array reference; each of the values will be printed with a trailing newline character:
time It is now: Wed Dec 1 19:08:58 2010
If you need to parse additional parameters on your hook, you can use
parse and
args on the Perlbal::ManageCommand object that your function will receive:
my $mc = shift; $mc->parse(qr/^time\s+(today|tomorrow)$/, "usage: TIME [today|tomorrow]"); my ($cmd, $choice) = $mc->args;
This would allow you to call your command with an argument that would have to be one of
today or
tomorrow.
register_setter allows you to define parameters that can be set for your plugin, using a syntax such as:
SET my_service.my_plugin.my_variable = my_value
For instance:
SET discworld.colorofmagic.color = 'Orange'
Here's how you'd configure a new setter, by using
register_setter inside
load:
my $color; sub load { $color = 'Octarine'; $svc->register_setter('ColorOfMagic', 'color', sub { my ($out, $what, $val) = @_; return 0 unless $what && $val; $color = $val; $out->("OK") if $out; return 1; }); return 1; }
For plugins that will work with a
selector service, sometimes you'll want to override the
selector itself.
You can do this in
sub register { my ($class, $svc) = @_; $svc->selector(\&my_selector_function);
Don't forget to unregister your function on the way out:
sub unregister { my ($class, $svc) = @_; $svc->selector(undef); return 1; }
Your
selector function receives a Perlbal::ClientHTTPBase object.
my Perlbal::ClientHTTPBase $cb = shift;
Inside your
selector function you can set which service to forward the request to like this:
my $service = Perlbal->service($service_name); $service->adopt_base_client($cb); return 1;
See Perlbal::Plugin::Vhosts or Perlbal::Plugin::Vpaths for examples on how to do this.
The following is a list of known plugins:
Basic access control based on IPs and Netmasks.
Add Headers to Perlbal webserver responses.
Auto-removal of leading directory path components in the URL.
See which backend served the request.
Handle Perlbal requests with a Perl subroutine.
Simple plugin demonstrating how to create an add-on service for Perlbal using the plugin infrastructure.
Add a custom header according to the SSL of a service.
Enable FLV streaming with reverse proxy.
Rename the X-Forwarded-For header in Perlbal.
Makes some requests high priority.
Allows multiple, nesting configuration files.
Support for Content Delivery Networks.
Reject large requests.
Automatic 304 Not Modified responses when clients send a
If-Modified-Since header.
PSGI web server on Perlbal.
Plugin that allows Perlbal to serve palette altered images.
Simple queue length header inclusion plugin.
Plugin to do redirecting in Perlbal land.
Basic Perlbal statistics gatherer.
Session affinity for Perlbal.
Throttle connections from hosts that connect too frequently.
Remove untrusted headers.
Let URL match it in regular expression.
Name-based virtual hosts.
Select by path (selector role only).
Perlbal plugin that can optionally add an X-Forwarded-Port and/or X-Forwarded-Proto header to reverse proxied requests.
Perlbal::Manual::Hooks, Perlbal::Manual::Internals.
There are sample configuration files under conf/; some of these are examples on how to use and configure existing plugins: echoservice.conf for Perlbal::Plugin::EchoService, virtual-hosts.conf for Perlbal::Plugin::VHosts, etc. | http://search.cpan.org/~dormando/Perlbal/lib/Perlbal/Manual/Plugins.pod | CC-MAIN-2014-42 | refinedweb | 2,133 | 51.99 |
Mar.
Input
5
1 9
1 10
3 40
1 1000000000000000000
10000000000000000 1000000000000000000
Output
Case #1: 1
Case #2: 2
Case #3: 3
Case #4: 707106780
Case #5: 49
The Solution
My solution only soved the small input, so I am pasting below the solution from a user called tjhance7. He solved it quite elegantly:
import sys t = int(sys.stdin.readline()) for casen in xrange(t): line = sys.stdin.readline().strip() linea = line.split(" ") r = int(linea[0]) t = int(linea[1]) lo = 0 hi = 10000000000 while hi > lo + 1: mid = (lo + hi) / 2 if 2*r*mid + (2*mid-1)*mid <= t: lo = mid else: hi = mid print "Case #%d: %d" % (casen+1, lo) | https://www.programminglogic.com/google-code-jam-2013-round-1a-problem-a/ | CC-MAIN-2019-13 | refinedweb | 116 | 76.96 |
At the end of April 2002, Nish and I were trying to figure out some of
.NET's clipboard capabilities. When we had finished the conversation, we
decided on writing a two-part article: Nish would do
part one on the basics, then I
would do part two on some more advanced topics. I was going to write it as
soon as the screensaver competition was over; needless to say, I forgot... until
now, that is.
In this article I hope to cover multiple data formats, and creating your own
custom data formats.
Placing multiple data formats on the clipboard is incredibly easy once you
know the trick.
The first tric...er step is to figure out what data formats you will be using.
Later I'll discuss custom formats, so ignore those for now. To help you in
your search, the DataFormats class has a list of commonly used
formats stored as static string fields; refer to MSDN for the specifics on each format listed there.
Now that you know the formats you will use, create an instance of a class that
implements IDataObject; the only such class is (appropriately) named DataObject.
IDataObject ido = new DataObject();
With the IDataObject interface in hand, we can begin to use it. The
purpose of the IDataObject is to "[Provide] a format-independent mechanism for
transferring data" (MSDN, IDataObject interface). Those familiar with COM
will know this object well. It can store one object of each format that you
pass in; thus, if you need to store three Text-formatted objects, you'll need
to wrap them into a custom format, which I'll explain later.
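As an aside, wrapping several same-format values simply means defining a small serializable carrier type. The sketch below is hypothetical; the name TextBundle and its fields are my own invention and not part of the demo project:
// Hypothetical carrier: IDataObject holds only one object per format,
// so three strings must travel together under a single custom format.
[Serializable()]
public class TextBundle
{
    public string First;
    public string Second;
    public string Third;
}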
ido.SetData(DataFormats.Text, true, myString);
That line of code adds a new Text-formatted object with the value of myString
to the DataObject, and allows it to be converted to other types as well.
This mirrors a key practice from Win32 Clipboard programming: data
is often placed on the clipboard in multiple formats to allow it to be used in a multitude of programs.
Typically data will be stored in a custom format, as text, and possibly as a bitmap
if it represents something graphical. This allows for pasting into many
applications, and into the source application itself in a native format.
Adding additional formats is as easy as making more calls to SetData
with differing formats.
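For instance, putting the same information on the clipboard as both plain and Unicode text, plus a bitmap, might look like this (a sketch; myString and myImage are assumed to exist in the surrounding code):
IDataObject ido = new DataObject();

// Plain-text and Unicode representations for text editors
ido.SetData(DataFormats.Text, true, myString);
ido.SetData(DataFormats.UnicodeText, true, myString);

// Bitmap representation for image editors
ido.SetData(DataFormats.Bitmap, true, myImage);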
Clipboard.SetDataObject(ido, true);
This line finally places the data on the clipboard, where it can be read by
any program. The true argument tells the Clipboard to make the data
available even after the program quits. Later I'll introduce another
important point about this second parameter, one which isn't mentioned by MSDN.
With the data copied to the Clipboard, you can now paste it as you did before:
check for the proper data format, then retrieve it if it exists.
IDataObject ido = Clipboard.GetDataObject();
if(ido.GetDataPresent(DataFormats.Text))
{
// Text data is present on the clipboard
textBox1.Text = (string) ido.GetData(DataFormats.Text);
}
A custom data format can be anything, whether a special text format such
as HTML or XML, or an instance of a class. To use a custom
format, you first register it with Windows. In the demo application you
will find that I register the type with Windows in the static constructor for
the custom type class, CustomData.
static CustomData()
{
format = DataFormats.GetFormat(
typeof(CustomData).FullName
);
}
The code typeof(CustomData).FullName merely retrieves the full name of the
type; since I needed to choose a unique name, I figured that would do.
GetFormat returns an instance of the DataFormats.Format class, which I store for
use as a static property to access the Name and Id of the format. The name doesn't
have to be relevant, just unique; Windows itself registers a Format17 which it uses for screenshots.
The only unique requirement for a custom format that uses a class is that
the class must be serializable; that is, it needs to have the Serializable
attribute applied to it.
Serializable
[Serializable()]
public class CustomData
.....
To use it on the clipboard you do so as you would any other format.
IDataObject ido = new DataObject();
CustomData cd = new CustomData(myImage, myString);
ido.SetData(CustomData.Format.Name, false, cd);
Retrieving data from it is done the same as before.
IDataObject ido = Clipboard.GetDataObject();
if( ido.GetDataPresent(CustomData.Format.Name) )
{
// Do something interesting with the data
CustomData cd = (CustomData)
ido.GetData(CustomData.Format.Name);
}
According to MSDN the second parameter tells the Clipboard object whether the
data inside should be persisted when the application exits. BUT that isn't
the full story. What it actually does is delay the serialization of the
data until when the data is actually needed.
To see this in action run the included ClipboardTest2 application, uncheck
"Persist Data"; then proceed to Draw, Copy, and Paste, then Draw and Paste.
You'll see that even though the data was copied only once the new data is shown.
The reason for this lies in that I continue to operate on the same object
that was passed into the IDataObject's SetData method, since we told
the Clipboard object NOT to serialize the data until needed, that is what it did. On top of that it
will reserialize the data everytime the data is requested. Oddly though it
only updates the format specified, none of the auto-converted formats are touched once
the initial serialization of the format occurs.
To demonstrate this open up the ClipboardTest2 application, again
uncheck "Persist Data"; and then Draw, Copy, and Paste. Proceed to Draw and
Paste, then open up your favorite image editing software; MSpaint will
do. Now Paste the bitmap there; do some more Draw and Pastes then Paste
again in another copy of MSPaint. You'll see the same bitmap was pasted
in both MSPaint's.
This seems like a bug, but really it is a bug in my use. Once you place
an object on the clipboard the intended result is that object stays the same;
not to change it like I did. This means the optimal solution is to create
a copy and cache it somewhere until it is needed by the Clipboard object.
If you choose to not persist data to the clipboard, upon exiting the
application you should re-place the data on the clipboard and persist it; so
that the contents are still there for the user to use.
There you have it folks, the Clipboard in all its glory. Hope I didn't
bore you too much. As always leave questions or comments below; and please
For those interested in the Visual Style I used in the screen shots, I got it
from themexp.org;
you'll also need StyleXP or the free update
to uxtheme.dll from the same site.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
Hey Can you help figure out how to Uploaded Word Doc and write the text in ASP.NET2 C#.
I think there is something wrong with the Copy and Clipboard because I get an error from the IDataObject getData when I try to GetData to write the text its not set to an instance of an object.
Object filename = filResume.PostedFile.FileName;
ApplicationClass word = new ApplicationClass();AndRepair = Type.Missing;
Object documentDirection = Type.Missing;
Object noEncodingDialog = Type.Missing;
Object XmlTrans = Type.Missing;
Document doc = word.Documents.Open(ref filename,
ref confirmConversions, ref readOnly, ref addToRecentFiles,
ref passwordDocument, ref passwordTemplate, ref revert,
ref writePasswordDocument, ref writePasswordTemplate,
ref format, ref encoding, ref visible, ref openAndRepair,
ref documentDirection, ref noEncodingDialog, ref XmlTrans);
try
{
// Copy Read Text
doc.ActiveWindow.Selection.WholeStory();
doc.ActiveWindow.Selection.Copy();
IDataObject data = Clipboard.GetDataObject();
//Here is The Error Line: Object reference not set to an instance of an object.
Response.Write(data.GetData(DataFormats.Text).ToString() + "");
//Empty Clipboard
Clipboard.Clear();
}
catch (Exception ex) {
Response.Write(ex + "");
}
<!--Add to Web.config Assembly -->
<add assembly="Microsoft.Office.Interop.Word, Version=11.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"/>
using Microsoft.Office.Interop.Word;
using System.Windows.Forms;
<asp:FileUpload
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/2379/Clipboard-handling-with-NET-Part-II | CC-MAIN-2015-32 | refinedweb | 1,415 | 56.45 |
Using Signals with RobotLegs 2
I'm not a much of a quick study as most of the people who are using Robotlegs, and I tend to get something to work, and usually stay continue using that same method from project to project. Well, I have an opportunity to upgrade an existing project and I thought I would go all the way, using SDK 4.6 and Robotleg 2 with Signals.
I started my "standard" setup and immediately ran into a problem with my Context page. Normally, I would setup my Context (using Robotlegs 1) using the following:
package com.rl { import com.rl.controllers.LoadXMLCommand; import com.rl.controllers.StartUpCommand; import flash.display.DisplayObjectContainer; import org.robotlegs.base.ContextEvent; import org.robotlegs.mvcs.SignalContext; public class FTNContext extends SignalContext { public function FTNContext(contextView:DisplayObjectContainer = null, autoStartup:Boolean = true) { super(contextView, autoStartup); } override public function startup():void { commandMap.mapEvent(ContextEvent.STARTUP, StartUpCommand, ContextEvent, true); commandMap.mapEvent(ContextEvent.STARTUP_COMPLETE, LoadXMLCommand, ContextEvent, true); dispatchEvent(new ContextEvent(ContextEvent.STARTUP)); dispatchEvent(new ContextEvent(ContextEvent.STARTUP_COMPLETE)); } } }
This has been working perfectly and I've have an understanding of what is going on.....for the most part. Anyway, now I've moved to 4.6 and RL2, and when I attempt to using the format I get the error. My error exists here:
public function FTNContext(contextView:DisplayObjectContainer = null, autoStartup:Boolean = true) { super(contextView, autoStartup); ///<--- Invalid Number of parameters, expecting 0 }
On my MXML page, I am bootstrapping the page in my declaration:
<fx:Declarations> <local:FTNContext </fx:Declarations>
I'm at a loss as to why I am having problems. I plan on returning back to RL1, just to get my project moving. Maybe I modified the original RL code to deal with this, but I doubt it. I rarely screw with code that is working.
Can anyone provide a suggestion as to what I am missing, or a way I can correct this problem. Thank you in advance.
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by Shaun Smith on 01 Apr, 2013 07:17 PM
Hi,
RL2 has a very different setup mechanism to RL1. You no longer need to extend the Context class. Instead you instantiate the standard Context and add configuration to it. The readme on this page should get you up and running:
To use signals you should install the SignalCommandMap extension:...
2 Posted by samac1068 on 01 Apr, 2013 07:19 PM
Thanks a lot Shaun. I figured it was something I didn't take into consideration, like reading.
Support Staff 3 Posted by Shaun Smith on 01 Apr, 2013 07:21 PM
Haha, yes things are quite different.. you might need to do a fair bit of reading.
samac1068 closed this discussion on 01 Apr, 2013 07:22 PM. | http://robotlegs.tenderapp.com/discussions/robotlegs-2/1270-using-signals-with-robotlegs-2 | CC-MAIN-2019-13 | refinedweb | 484 | 58.89 |
A Django app for profiling other Django apps below and a live demo is available at, where the tool is actively profiling my website and blog!
Contents
Requirements
Silk has been tested with:
- Django: 1.5, 1.6
- Python: 2.7, 3.3, 3.4
I left out Django <1.5 due to the change in {% url %} syntax. Python 2.6 is missing collections.Counter. Python 3.0 and 3.1 are not available via Travis and also are missing the u'xyz' syntax for unicode. Workarounds can likely be found for all these if there is any demand. Django 1.7 is currently untested.
Features
Request Inspection
The Silk middleware intercepts and stores requests and responses and stores them in the configured database. These requests can then be filtered and inspecting using Silk’s UI through the request overview:
It records things like:
- Time taken
- Num. queries
- Time spent on queries
- Request/Response headers
- Request/Response bodies
and so on.
Further details on each request are also available by clicking the relevant request:
SQL Inspection
Silk also intercepts SQL queries that are generated by each request. We can get a summary on things like the tables involved, number of joins and execution time:
Before diving into the stack trace to figure out where this request is coming from:
Profiling
Silk can also be used to profile random blocks of code/functions. It provides a decorator and a context manager for this purpose.
For example:
@silk_profile(name='View Blog Post') def post(request, post_id): p = Post.objects.get(pk=post_id) return render_to_response('post.html', { 'post': p })
Whenever a blog post is viewed we get an entry within the Silk UI:
Silk profiling not only provides execution time, but also collects SQL queries executed within the block in the same fashion as with requests:
Decorator
The silk decorator can be applied to both functions and methods
@silk_profile(name='View Blog Post') def post(request, post_id): p = Post.objects.get(pk=post_id) return render_to_response('post.html', { 'post': p }) class MyView(View): @silk_profile(name='View Blog Post') def get(self, request): p = Post.objects.get(pk=post_id) return render_to_response('post.html', { 'post': p })
Context Manager
Using a context manager means we can add additional context to the name which can be useful for narrowing down slowness to particular database records.
def post(request, post_id): with silk_profile(name='View Blog Post #%d' % self.pk): p = Post.objects.get(pk=post_id) return render_to_response('post.html', { 'post': p })
Experimental Features
The below features are still in need of thorough testing and should be considered experimental.
Dynamic Profiling
One of Silk’s more interesting features is dynamic profiling. If for example we wanted to profile a function in a dependency to which we only have read-only access (e.g. system python libraries owned by root) we can add the following to settings.py to apply a decorator at runtime:
SILKY_DYNAMIC_PROFILING = [{ 'module': 'path.to.module', 'function': 'MyClass.bar' }]
which is roughly equivalent to:
class MyClass(object): @silk_profile() def bar(self): pass
The below summarizes the possibilities:
""" Dynamic function decorator """ SILKY_DYNAMIC_PROFILING = [{ 'module': 'path.to.module', 'function': 'foo' }] # ... is roughly equivalent to @silk_profile() def foo(): pass """ Dynamic method decorator """ SILKY_DYNAMIC_PROFILING = [{ 'module': 'path.to.module', 'function': 'MyClass.bar' }] # ... is roughly equivalent to class MyClass(object): @silk_profile() def bar(self): pass """ Dynamic code block profiling """ SILKY_DYNAMIC_PROFILING = [{ 'module': 'path.to.module', 'function': 'foo', # Line numbers are relative to the function as opposed to the file in which it resides 'start_line': 1, 'end_line': 2, 'name': 'Slow Foo' }] # ... is roughly equivalent to def foo(): with silk_profile(name='Slow Foo'): print (1) print (2) print(3) print(4)
Note that dynamic profiling behaves in a similar fashion to that of the python mock framework in that we modify the function in-place e.g:
""" my.module """ from another.module import foo # ...do some stuff foo() # ...do some other stuff
,we would profile foo by dynamically decorating my.module.foo as opposed to another.module.foo:
SILKY_DYNAMIC_PROFILING = [{ 'module': 'my.module', 'function': 'foo' }]
If we were to apply the dynamic profile to the functions source module another.module.foo after it has already been imported, no profiling would be triggered.
Code Generation
Silk currently generates two bits of code per request:
Both are intended for use in replaying the request. The curl command can be used to replay via command-line and the python code can be used within a Django unit test or simply as a standalone script.
Installation
Existing Release
Pip
Silk is on PyPi. Install via pip (into your virtualenv) as follows:
pip install django-silk
Github Tag
Releases of Silk are available on github.
Once downloaded, run:
pip install dist/django-silk-<version>.tar.gz
Then configure Silk in settings.py:
MIDDLEWARE_CLASSES = ( ... 'silk.middleware.SilkyMiddleware', ) INSTALLED_APPS = ( ... 'silk' )
and to your urls.py:
urlpatterns += patterns('', url(r'^silk', include('silk.urls', namespace='silk')))
before running syncdb:
python manage.py syncdb
Silk will automatically begin interception of requests and you can proceed to add profiling if required. The UI can be reached at /silk/
Master
First download the source, unzip and navigate via the terminal to the source directory. Then run:
python package.py mas
You can either install via pip:
pip install dist/django-silk-mas.tar.gz
or run setup.py:
tar -xvf dist/django-silk-mas.tar.gz python dist/django-silk-mas/setup.py
You can then follow the steps in ‘Existing Release’ to include Silk in your Django project.
Roadmap
I would eventually like to use this in a production environment. There are a number of things preventing that right now:
- Effect on performance.
- For every SQL query executed, Silk executes another.
- Questionable stability.
- Space concerns.
- Silk would quickly generate a huge number of database records.
- Silk saves down both the request body and response body for each and every request handled by Django.
- Security risks involved in making the Silk UI available.
- e.g. POST of password forms
- exposure of session cookies
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-silk/0.1.1/ | CC-MAIN-2020-10 | refinedweb | 1,020 | 50.33 |
Swifty Tips ⚡️
Subtle best practises that Swift developers are keeping secret.
. So in this post, I want to talk about the not-very-obvious practises that I am using right now for iOS development.
You are more than welcome to use them, criticize them or improve them.
Let’s begin. 🚀
1- Avoid overusing reference types
You should only use reference types for live objects. What do I mean by “live”? Let’s look at the example below.
struct Car {
let model: String
}
class CarManager {
private(set) var cars: [Car]
func fetchCars()
func registerCar(_ car: Car)
}
🚗 is just a value. It represents data. Like
0. It’s dead. It doesn’t manage anything. So it doesn’t have to live. There is no point defining it as a reference type.
On the other hand;
CarManager needs to be a live object. Because it’s the object that starts a network request, *waits* for the response and stores all fetched cars. You cannot execute any async action on a value type, because, again, they are dead. We expect to have a
CarManager object, which lives in a scope, fetches cars from server and registers new cars like a real manager to a real company.
This topic deserves its own blog post so I will not go any deeper. But I recommend watching this talk from Andy Matuschak or this WWDC talk to understand why this is so important to build bullet proof apps.
2- Never(?) use implicitly unwrapped properties
You should not use implicitly unwrapped properties by default. You can even forget them in most cases. But there may be some special cases in which you need this concept to please the compiler. And it’s important to understand the logic behind it.
Basically, if a property must be nil during initialization, but will be assigned later on to a non-nil value, you can define that property as implicitly unwrapped. Because you will never access it before it’s set so you wouldn’t want compiler to warn you about it being nil.
If you think about view — xib relationship, you can understand it better. Let’s say we have a view with
nameLabel outlet.
class SomeView: UIView {
@IBOutlet let nameLabel: UILabel
}
If you define it like this, compiler will ask you to define an initializer and assign
nameLabel to a non-nil value. Which is perfectly normal because you claimed that
SomeView will always have a
nameLabel. But you cannot do it because the binding will be done behind the scenes in
initWithCoder. You see the point? You are sure that it will not be nil, so there is no need to do a nil-check. But at the same time, you cannot (or should not) populate it.
In this case, you define it as an implicitly unwrapped property. It’s like signing a contract with the compiler:
You: “This will never be nil, so stop warning me about it.”
Compiler: “OK.”
class SomeView: UIView {
@IBOutlet var nameLabel: UILabel!
}
Popular question: Should I use implicitly unwrapping while dequeueing a cell from table view?
Not very popular answer: No. At least crash with a message:
guard let cell = tableView.dequeueCell(...) else {
fatalError("Cannot dequeue cell with identifier \(cellID)")
}
3- Avoid
AppDelegate overuse
AppDelegate is no place to keep your
PersistentStoreCoordinator, global objects, helper functions, managers, etc. It’s just like any class which implements a protocol. Get over it. Leave it alone.
I understand you have important stuff to do in
applicationDidFinishLaunching but it is too easy to get out of control as the project grows. Always try to create separate classes (and files) to manage different kind of responsibilities.
👎 Don’t:
let persistentStoreCoordinator: NSPersistentStoreCoordinator
func rgb(r: CGFloat, g: CGFloat, b: CGFloat) -> UIColor { ... }
func appDidFinishLaunching... {
Firebase.setup("3KDSF-234JDF-234D")
Firebase.logLevel = .verbose
AnotherSDK.start()
AnotherSDK.enableSomething()
AnotherSDK.disableSomething()
AnotherSDK.anotherConfiguration()
persistentStoreCoordinator = ...
return true
}
👍 Do:
func appDidFinishLaunching... {
DependencyManager.configure()
CoreDataStack.setup()
return true
}
#FreeAppDelegate
4- Avoid overusing default parameters
You can set default values to parameters in a function. It’s very convenient because otherwise you end up creating different versions of the same function as below just to add syntax sugar.
func print(_ string: String, options: String?) { ... }
func print(_ string: String) {
print(string, options: nil)
}
With default parameters, it becomes:
func print(_ string: String, options: String? = nil) { ... }
Easy, right? It’s super simple to set a default color for your custom UI component, to provide default options for your
parse function or to assign a default timeout for your network component. But… you should be careful when it comes to dependency injection.
Let’s look at the following example.
class TicketsViewModel {
let service: TicketService
let database: TicketDatabase
init(service: TicketService,
database: TicketDatabase) { ... }
}
Usage in
App target:
let model = TicketsViewModel(
service: LiveTicketService()
database: LiveTicketDatabase()
)
Usage in
Test target:
let model = TicketsViewModel(
service: MockTicketService()
database: MockTicketDatabase()
)
The very reason you have protocols here for service (
TicketService) and database (
TicketDatabase) is to abstract away from any concrete types. This enables you to inject whatever implementation you like in
TicketsViewModel. So if you inject
LiveTicketService as a default parameter into
TicketsViewModel, this would actually make
TicketsViewModel dependent to
LiveTicketService, which is a concrete type. It conflicts with what we are trying to achieve in the first place, right?
Not convinced yet?
Imagine that you have
App and
Test targets.
TicketsViewModel normally will be added to both targets. Then you would add
LiveTicketService implementation into
App target, and
MockTicketService implementation into
Test target. If you create a dependency between
TicketsViewModel and
LiveTicketService, your
Test target wouldn’t compile because it doesn’t (shouldn’t) know about
LiveTicketService!
Aside from this, I think it’s also self documenting and safe by design to inject dependencies manually.
5- Use variadic parameters
Because it’s cool, super easy to implement and powerful.
func sum(_ numbers: Int...) -> Int {
return numbers.reduce(0, +)
}
sum(1, 2) // Returns 3
sum(1, 2, 3) // Returns 6
sum(1, 2, 3, 4) // Returns 10
6- Use nested types
Swift supports inner types so you can (should) nest types wherever it makes sense.
👎 Don’t:
enum PhotoCollectionViewCellStyle {
case default
case photoOnly
case photoAndDescription
}
You will never use this enum outside a
PhotoCollectionViewCell so there is no point putting it in global scope.
👍 Do:
class PhotoCollectionViewCell {
enum Style {
case default
case photoOnly
case photoAndDescription
}
let style: Style = .default
// Implementation...
}
This makes more sense because
Style is a part of
PhotoCollectionViewCell and is 23 characters shorter than
PhotoCollectionViewCellStyle.
7- Go
final by default 🏁
Classes should be
final by default because you generally don’t design them to be extendible. So it’s actually an error not to make them
final. For example, how many times you subclassed your
PhotoCollectionViewCell?
Bonus: You get slightly better compile times.
8- Namespace your constants
Did you know that you can namespace your global constants properly instead of using ugly prefixes like
PFX or
k?
👎 Don’t:
static let kAnimationDuration: TimeInterval = 0.3
static let kLowAlpha = 0.2
static let kAPIKey = "13511-5234-5234-59234"
👍 Do:
enum Constant {
enum UI {
static let animationDuration: TimeInterval = 0.3
static let lowAlpha: CGFloat = 0.2
}
enum Analytics {
static let apiKey = "13511-5234-5234-59234"
}
}
My personal preference is to use only
C instead of
Constant because it’s obvious enough. You can choose whichever you like.
Before:
kAnimationDuration or
kAnalyticsAPIKey
After:
C.UI.animationDuration or
C.Analytics.apiKey
9- Avoid
_ misuse
_ is a placeholder variable which holds unused values. It’s a way of telling “I don’t care about this value” to the compiler so that it wouldn’t complain.
👎 Don’t:
if let _ = name {
print("Name is not nil.")
}
Optional is like a box. You can check if it’s empty just by peeking into it. You don’t have to take everything out if you don’t need anything in it.
👍 Do:
- Nil-check:
if name != nil {
print("Name is not nil.")
}
- Unused return:
_ = manager.removeCar(car) // Returns true if successful.
- Completion blocks:
service.fetchItems { data, error, _ in
// Hey, I don't care about the 3rd parameter to this block.
}
10- Avoid ambiguous method names
This actually applies to any programming language that needs to be understood by humans. People should not put extra effort in understanding what you mean, it is already hard to understand computer language!
For example, check this method call:
driver.driving()
What does it really do? My guesses would be:
- It marks driver as
driving.
- It checks if driver is
drivingand returns
trueif so.
If someone needs to see the implementation to understand what a method does, it means you failed naming it. Especially, if you are working in a team, handing over old projects, you will read more than you write code. So be crystal clear when naming things not to let people suffer understanding your code.
11- Avoid extensive logging
Stop printing every little error or response you get. Seriously. It’s equivalent to not printing at all. Because at some point, you will see your log window flowing with unnecessary information.
👍 Do:
- Use
errorlog level in frameworks you use.
- Use logging frameworks (or implement it yourself) which let you set log levels. Some popular frameworks: XCGLogger, SwiftyBeaver
- Stop using logging as a primary source for debugging. Xcode provides powerful tools to do that. Check this blog post to learn more.
12- Avoid disabling unused code
Stop commenting-out code pieces. If you don’t need it, just remove it! That simple. I have never solved a problem by enabling legacy code. So clean up your mess and make your codebase readable.
What if I told you…
…that you can achieve most of it with automation? See Candost’s post on Using SwiftLint and Danger for Swift Best Practices.
Thanks for scrolling all the way!
Please let me know if you have any other secret practices and help spread the word. ❤️ | https://medium.com/@gokselkoksal/swifty-tips-%EF%B8%8F-8564553ba3ec | CC-MAIN-2017-34 | refinedweb | 1,648 | 58.28 |
I am using Privoxy 3.0.10.0 to filter web pages before they're passed on to the browser.
I can't figure out why this simple regex doesn't trigger a rewrite. Maybe someone more experienced will have an idea:
Here's what it looks like when I hit Firefox's CTRL-U to view the HTML source:
<font color=#FF4AFF>JohnDoe</font>
Here's my regex; I've also added the "i" switch to ignore case, to no avail
s|(<font color=.+?>JohnDoe</font>)|<span class=myclass>$1</span>|g
Thanks for any hint.
<span>
JohnDoe
<span class=myclass>JohnDoe</span>
<span class=myclass><font color=#FF4AFF>JohnDoe</font></span>
This question came from our site for professional and enthusiast programmers.
The regex itself works fine, as this Python example shows:
import re
print re.sub(r"(<font color=.+?>JohnDoe</font>)",
r"<span class=myclass>\1</span>",
"<font color=#FF4AFF>JohnDoe</font>")
# Prints <span class=myclass><font color=#FF4AFF>JohnDoe</font></span>
(assuming Privoxy uses the same regex syntax, barring the \1 vs. $1 difference, but it looks like it does.)
\1
$1
I guess the problem lies elsewhere - try a regex that can't fail, like replacing a with b, to see whether it's having any effect at all.
a
b
Thansk guys. Turns out Privoxy was greedy, and I didn't notice that it was taking much more data than I thought.
Not sure what RE engine you're using, but try changing the $1 to \1 - that's how backreferences are usually referred to in perl, at least.
<span class=myclass>$1<
157 times
active | http://superuser.com/questions/12765/cant-figure-out-why-this-regex-doesnt-apply/12860#12860 | crawl-003 | refinedweb | 272 | 64.3 |
Use a
while(true) loop to have the program repeat commands forever
- Inside of
int main(), add a
while(true)control structure.
- Within the curly braces, add commands.
NOTE: Those shown in the example above display a message and spin the robot clockwise. In the example, the loop makes the displayed message display repeatedly until it runs off of the screen.
NOTE: Use
//notation to include comments that explain what that section of code does.
Code that can be copied and pasted:
#include "robot-config.h" int main() { //Loop to have the robot spin clockwise and display "It is true and the loop continues!" while(true){ Brain.Screen.print("It is true and the loop continues!"); LeftMotor.spin(directionType::fwd); RightMotor.spin(directionType::rev); } }
Or, use a
while() loop to have the program repeat the same commands if a condition is true
- Inside of
int main(), add a
while()control structure.
- Inside of the
while()parentheses, add a condition for the program to check.
NOTE: In the example above, the condition being checked is whether the brain's screen is pressed. In this case, the while loop will continue while the screen is not pressed because the condition is set to false.
- Within the curly braces of the
while(Brain.Screen.pressing()==false)structure, add commands.
NOTE: In the example above, the two commands inside the while loop's curly braces keep both motors stopped when the screen is not pressed. The program stays within that loop unless the brain's screen is pressed. If/when it is, the program breaks out of the loop and continues with the next commands in the program: displaying a message and moving forward for three seconds before stopping.
NOTE: Use
//notation to include comments that explain what the section of code does.
Code that can be copied and pasted:
#include "robot-config.h" int main() { //Loop to have the robot remain stationary until the screen is pressed. while(Brain.Screen.pressing() == false) { LeftMotor.stop(); RightMotor.stop(); } Brain.Screen.print("I've been pressed and I'm moving forward for 3 seconds!"); LeftMotor.spin(directionType::fwd); RightMotor.spin(directionType::fwd); task::sleep(3000); LeftMotor.stop(); RightMotor.stop(); } | https://kb.vex.com/hc/en-us/articles/360035593232-While-Loops-VEX-C- | CC-MAIN-2022-21 | refinedweb | 363 | 66.03 |
Failing to create pod sandbox on OpenShift 3 and 4
Environment
- Red Hat OpenShift Container Platform
- 3.x
- 4.x
Issue
- Getting the following error when trying to restart a pod:
- While installing elasticsearch operator in ocp4.x
Failed create pod sandbox: rpc error code: = Unknown desc = [failed to set up sandbox container.
- Getting
NetworkPlugin cni failed to set up poderror message.
Resolution
Delete the OpenShift SDN pod in error state identified in Diagnostics Steps field:
$ oc delete pod ${podname}
Fix of upstream dns sever resolved the issue
Root Cause
- One of the OpenShift SDN pods in that particular namespace was corrupted. So, there was no network available to run pods.
- From the operator pod , it was not resolving the quay.io and hence the upstream dns server was checked and found issue was there.
Diagnostic Steps
Run the following command to inspect pods state and check the output for OpenShift SND pods in error state:
$ oc get pods --all-namespaces openshift-sdn ovs-wrzr9 1/1 Running 4 94d openshift-sdn ovs-xg2wd 1/1 Running 7 94d openshift-sdn ovs-xtrsr 1/1 Running 11644 94d openshift-sdn ovs-z6jps 1/1 Running 3 94d openshift-sdn ovs-zphdl 1/1 Running 8 94d openshift-sdn ovs-zqtfg 1/1 Running 6 94d
NOTE: the list above shows that pod
ovs-xtrsr had restarted 11644 since creation. That is the one to be recreated.
- For OCP4.3 and Elasticsearch operator issue
# oc rsh certified-operators-866f85886d-5b6h9 sh-4.2$ nslookup quay.io Server: 192.168.100.1 Address: 192.168.100.1#53 Non-authoritative answer: Name: quay.io.abcd.example.com Address: 192.168.100.200 <---upstream dns server ** server can't find quay.io.abcd.example..com: SERVFAIL
- Product(s)
- Red Hat OpenShift Container Platform
- Component
- bind
- elasticsearch
- S. | https://access.redhat.com/solutions/4321791 | CC-MAIN-2021-04 | refinedweb | 304 | 57.27 |
I have a list of excel files and their corresponding sheet number. I need python to go to those sheets and find out the cell location for a particular content. Can someone point out the error in my code? Thanks in advance
import xlrd value = 'Avg.' filename = ('C:/002 Av SW W of 065 St SW 2011-Jul-05.xls', 'C:/003 Avenue SW West of 058 Street SW 2012-Jun-23.xls') sheetnumber = ('505840', '505608') dictionary = dict(zip(filename, sheetnumber)) for item in dictionary: book = xlrd.open_workbook(item) sheet = book.sheet_by_name(dictionary[key]) for row in range(sheet.nrows): for column in range(sheet.ncols): if sheet.cell(row,column).value == value: print row, column
You don’t need to make a dictionary. Iterate over
zip(filename, sheetnumber):
for name, sheet_name in zip(filename, sheetnumber): book = xlrd.open_workbook(name) sheet = book.sheet_by_name(sheet_name) for row in range(sheet.nrows): for column in range(sheet.ncols): if sheet.cell(row,column).value == value: print row, column
Tags: excel, file, pythonpython | https://exceptionshub.com/python-open-specific-excel-file-sheet-to-get-a-specific-content-cell-location.html | CC-MAIN-2021-10 | refinedweb | 170 | 64.07 |
Hi I’m totally new to Rhino, but it might be just what I’m looking for. I tried to install RhinoComputer by following this site:
It seems almost to work, except that in the RhinoCompute.cs it doesn’t seem to know QuadRemeshParameters and I get a “type or namespace could not be found”. I think it is in Rhino.Geometry which I have referenced and use in other contexts.
Also I get a "SubD is inaccessible due to its protection level". This is also in Rhino.Geometry I think.
I'm not really sure what's up and any help is appreciated.
Re: Phanes
Parzival with his half-brother, the infidel Feirefiz. Manuscript Cgm 19, folio 49v, State Library, Munich.
"And they never did find out about it. He was murdered instead. Thus,
Persia remains arabic to this very day, never having come to know what
this man intended to bring to them. And this is very important for
consideration."
Steve
Bradford comments;
Let us examine some important issues when we learn to translate etheric and ethnic insights into new Sun mysteries. One can only be disappointed with most Anthros because they are extremely dense. Not all anthros of course but anthros are one of our only hopes.
But of course when the lumbering New AGE old streams, really Julian the Apostate Streams all surface in all their watered down and some detailed glory while good ole Spiritual Science represents the new Sun Mysteries of Michael, we have enormous conflicts within ourselves as to how to penetrate with living thinking the gold in the new etheric streams and rivers of heredity and humanity that are growing. These intellectual soul stumbling blocks confront us supporting our tendencies to grasp everything in a materialistic paradigm.
Nothing but dense disappointment usually awaits the unimaginative responses from delinquent anthros. We don't want, most of us don't want delinquent anthros, but there are too few who attempt to penetrate the fresh living new etheric/ethnic and twelvefold sun mysteries and enrich them and defend them. Our earlier great Anthros certainly tried to recover and maintain as much rich insight as possible against the amazing, thrashing, gnawing and gashing attacks that ripped through culture in WW I and WW II, and here we sit on the brink of another full-thrust Ahrimanic distortion of the Sun Mysteries with Islam and Persia in their sights.
Nor do they attempt to deriddle the new physics of the Sun with higher Devachan Science mysteries. Another failure that runs through Anthroposophy! Rather the lumbering anthro, with his static thought process does not attempt to grasp the intimate magnificence in Steiner's recovery of the Three fold Sun and the Logos of Love and the Logos of Wisdom in human beings. It cannot be allowed to dry up or be eaten again by an Orwellian attack like that which murdered Julian the Apostate and destroyed rich ancient texts and insights because they weren't Fundie or based on the shrunken model of Creationism. Jihad against the Sun Mysteries murdered Julian the Apostate and today threatens to suffocate Spiritual Science.
Now imagine that this invisible Sun Substance was nearly dried up by the time of Zarathustra/Jesus. So when a lofty BEING suddenly entered the realm of humanity that is the essence of raw Sun Logos of Love and raw Sun Logos of Human wisdom, it must find in humanity that very sustaining Sun substance. It isn't the outer Sun and the physical body of the Sun nor the materialistic physics that everyone is so comfortable with. It isn't that Sun force that we get a sun-tan on the beach with. It is rather the sustaining thing inside each human being that generates the Sun Forces and the forces that Christ sought to recover in humanity. Christ had to find this substance and recover it in humanity in order for humanity not to be swallowed whole by the Ahrimanic intentions. We are threatened again with being swallowed whole. Steiner and Germany were swallowed whole, and Michael students are required to dig deep and connect deeper insights or they fail themselves! Yes, they fail Themselves!
But we can hardly get enough Anthros to think clearly or fluidly or with disciplined thinking on the rich world of the etheric and the current attacks on Islam, surrounded by an Ahrimanized region of Israel and sealed with an Ahrimanized static religion such as the dead Hebrew forces that have crystallized, and the unresolved conflict of how much mighty Persia and Iran were the source point of grasping the Sun Mysteries themselves. Persia and Iran were the heroic Sun Point in human Initiation Wisdom that grasped the depth and sought a dialogue with the mighty Sun Being Himself. What if Persia and Iran knew all that? What if they knew all that?
Well the whole point in current politics and material thinking is to prevent anyone knowing all that the Michael School must wrestle to the ground and see with clarity. Ahrimanic America, like Ahrimanic Germany once intent on the Jews, is now focused on murdering anew the Sun Mysteries and murdering anew Julian the Apostate. Only this time the New Sun Mysteries, in all their rich and living etheric reality, come in the form of Spiritual Science, the New Sun Mysteries. The New Physics Sun Mysteries with few defenders and few interested in translating deep etheric and ethnic issues into the TwelveFoldness required from Christian Rosenkreuz schooling and Spiritual Science.
Anthro students are required to put two and two together and get off their asses and hold the Sun Mysteries alive during this terrific time of Ahrimanic attacks on Michael Intelligence. This do in remembrance of HIM. Now all this and more are living behind the simple issue that Steve brought between Julian and Constantine. It is rather for us what Steiner brought forward about the issues between Constantine and Julian, and today, that must disturb or inspire us. It caught Steve and it catches our eye if we know how to look upon things. Now what part, besides the mighty Three Kings, what part did Persia and Islam play, and where have the Grail Sciences and the Grail disappeared to? They disappeared into the dull caretakers and rich warriors of Spiritual Science Schooling, and it can only be us and friends like us who come close to grasping our world, not in Nasa's terms but in Spiritual Science terms.
To examine Feirefiz, as the half-brother of Parsifal, we come into the intimacies of the question that was raised by Steve. Herzeloyde fell in love with a guy, a knight, who is killed in Baghdad. Parsifal's father is killed in Baghdad. Lots of stuff there including the great emissary Schionatulander and Harou al Raschid and the rage attack and murder of Schionatulander in a fight that spared Parsifal's hide.
Parsifal's father and Feirefiz's father, Gahmuret, loved two different women. We talk a great deal about Herzeloyde but this Gahmuret needed to pay child support by today's standards and was, by our standards of morality, someone who two-timed lonely Herzeloyde. But Gahmuret had fairy blood mixed in his veins. Of course when we say this, we are not so much kidding as dull and stupid Anthros think. Dull and stupid Anthros cannot grasp chemistry and physics with real beings in them. If you don't grasp etheric elemental nature, the temperaments of humans and the unique construction of individuals, or grasp Bio-weaponry and the misuse of elementals and Chemistry, well you merely guffaw at statements that apply to Gahmuret. But Anthros have no room to snicker, they are basically too dull and stupid to even begin to swagger and snicker here.
When we outline the Rhine gold and the Rhine Maidens as we have, we get locked into the problems that Wagner revealed once he spoke of the rivers and streams, the mighty rivers where different etheric waters, different Undines, gnomes, sylphs and salamanders contribute from the high Angelic Worlds; that forces shaped themselves from The Nile, The Ganges, The Yangtze, the Mississippi into different ethnic/etheric nymph and undine capacities, and Wagner happened to pick on Germany. Germany went to piss and it seems Wagner helped it go to piss. But that is not the case; the case is rather that Anthros and Michael Students have failed to grasp the secrets that Wagner/Merlin brought with him from the death of the Celtic Folk Soul into the great Folk Soul of the Consciousness Soul that Germany could have been the bearer and the German Language could have been the bearer of. Anthros don't consider it worth their while to grasp anything but the silliness of Wagner....
Now the point is that we can prove that our pedestrian discussions on racism and our politically correct little mantrams won from the civil rights, and our intense annihilation of the Jews and now the Islamic and Arab streams, open gateways to how different cultures were to handle and blend the mighty Sun Mysteries. And in contrast, how cold and heartless are the attacks on anyone who attempts to bring the Sun Mysteries and antidote materialism against the freezing and drying up into an Ahrimanic Intellectual Planet of the dead!
Steiner has said, if you follow this thread backwards, that indeed most of the New AGE old stuff that has surfaced would have been, in quality and kind, the dead world of the ancient depth of the Three Fold Sun, and that great texts, images and artifacts were utterly destroyed to prevent humanity from arriving at the insight of the ThreeFold Sun, which has been the basis of our research into the Streaming in, the raying in of the TwelveFoldness of the Elohim Christ, into His constellation of disciples.
We will have to explore how Zarathustra prepared his students to grasp this Sun like navigation that is in the Brain mass and how Brain and Spine are Sun and Moon forces. We shall examine this issue, but first and foremost it is important to follow the underground threads that unite the Persian/Arab and Iranian streams of Prester John, and the mythical Shamballa, to the Parsifal destiny and the current aggressive attacks from the Ahrimanized West on the Iranian and Persian fronts. To even hold for a moment in our mind's eye that this ancient culture lay frozen in time because it expected to be included in the mighty sun mysteries and wasn't, reveals a horrific crime of freezing an entire group, Jew or Arab, in time. Freezing Light, slowing light, sub-freezing light and holding Christianity captive to Fundie absolute Bullshit and literally retarding human goodness, human greatness and human intelligence.
For indeed when we study the blending and mixing of the various impulses and say from the 9th and 10th centuries, The Sun Forces that had imprinted themselves into the new etheric forces of disciples and thousands upon thousands who had participated in the Golgotha Mysteries, found these etheric Sun forces in the stream of the historical and real Grail families as new incarnational, moral and Living Sun residues of their participation with the Christ Event, we no longer just dawdle with the Da Vinci Code debacle.
It isn't trying to trace the affairs that Jesus had with Magdalene that is just another red herring. It is fashioning as Steiner indicated that Julian the Apostate was the Sun-Set of the mysteries and Spiritual Science is the Sun-Rise of the mysteries. That Sun-Rise includes the fulfillment of the Persian stream and the recognition that is held back from the world that the Persian Zarathustra fulfilled the mission of the Ahura MazTAO and brought the Sun Mysteries into his higher Consciousness Soul faculties and used the immense Persian Star wisdom to achieve the Christ Encounter of the First Order is a rich research gift that should be explored without reserve. Persia and Iran would and can still arise in an entirely different light once we learn how to change our language and research into the new spiritual science language of the Sun-Rise.
Grail Maiden Respanse de Shoye
"His name is Feirefiz and his story originated during the time of Arthurian legends. He embodies the then-held belief that a child born to an interracial couple would be spotted.
"A knight by the name Gahmuret travels to Baghdad and Africa, the lands of "heathens" or infidels. Through knightly competition he wins the kingdom of Zazamanc and makes Belacane, a Moor, his Queen. Gahmuret soon sneaks away leaving a pregnant Queen Belacane who then gives birth to their son Feirefiz, who is born white with black spots.
"Gahmuret re-marries and has another son, Parsifal. Neither Parsifal nor Feirefiz knows of the other's existence until they meet in battle. They engage in a deadly fight when Parsifal's sword breaks. Feirefiz refuses to continue and reveals his identity which allows the two brothers to learn of their family connection. "
" We may need to speak
about the individual spiritual beings that live in the solid and fluid
elements and so forth. As long as we avoid talking about these beings,
we are talking about a dead science that is not imbued with the Christ.
To speak about them is to speak in a truly Christian sense. We must
imbue all of our scientific activity with the Christ. More than that, we
must also bring the Christ into all of our social efforts, all of our
knowledge; in short, into all aspects of our life. The Mystery of
Golgotha will truly bear fruit only through human strength, human
efforts, and human love for each other. In this sense, anthroposophy in
all its details strives to imbue the world with the Christ."
Rudolf Steiner
Bradford comments;
The good ole Nile, also filled with tales that should also sound like the Rhine Maiden saga of Wagner. The good ole Amazon should also have a tale of the stream of gold or white river dolphins that represents the Amazon as the Rhine represented the German. The Yangtze folk lore should also contain some aspect of the richness of this particular ethnic and etheric group. The Euphrates, polluted now with Depleted Uranium and invaded by white-bread American intellectuality and utter Ahrimanic deceptions, lies and glut, has a misplaced tale of the origin of the Garden of Eden and the sword thrusting Adam and Eve out.... The Tigris and the Euphrates certainly were not what we call the cradle of civilization, but the myth seemed to work, for isn't the Bible itself a collection of tales of the ancient Atlantis and Noah events? Tales wrapped in Sumerian sweetbread so that the general reductionism of the vast picture of human spiritual descent could be outlined in brief? A Brief history of recorded Time? Now a rigid dogma called Fundamentalism, creationism and intelligent design?
The Ganges, certainly we can find hundreds of tales like the Rhine Maiden depicting the mysteries of the Ganges and how this river, not unlike the Imagination Wagner offered of the Rhine, so the Ganges as well is laced with rich, rich spiritual history... But exactly what happens when the Christ Event kicks in in Time and changes the whole paradigm of how humanity has previously understood things? From that Time on, something changed; Wagner picked up on it, but it was branded to the etheric bioregionalism of Germany. But here, in Mark Twain we have the mighty Mississippi and Twain's Faust, called "The Mysterious Stranger" and Twain's American folk hero, Mr. H. Finn. The naive American folk tied to the Mississippi the way Wagner tied Germany to the Rhine. How do we view bioregionalism and etheric geography and areas and places where cultures arose... we examine souls hardly different than the gifted Wagner, who pinned Rhine Gold to the Germans, while Twain pinned America to the mighty Mississippi. But then we have to understand what changes have arisen since the impact of the Christ event on Earth.
Carol wrote:
"Both are none other than one as a result that I (through choice?) took on the presidency of the society - the movement (as such and in a context of rapport) became none other than one with her (the society)."

Jean-Marc writes:
Through choice? Definitely. Probably the toughest choice he ever made.
In Steiner's words, there was "an absolute risk" (un risque absolu!) involved in him becoming president of the AS: the eventuality that the spiritual powers would deny Rudolf Steiner the possibility of being *both* their messenger [who had occult obligation towards the spiritual world and its revelations] and the president of the Anthroposophical Society [with the inherent outer and administrative concerns]. In other words, the former benevolent attitude of the spiritual powers was at stake: would the spiritual world, would the heavens close their gates?
But his *sacrifice* led to the exact opposite outcome: their benevolence became greater than ever before...
One important aspect of this mystery is: Michael's enemies --- the *demons* who had managed until then to prevent Steiner from speaking of facts he had known for years or decades --- were themselves reduced to silence!
In other words, the lectures on *Karmic Relationships* directly result from this sacrifice...

Carol wrote:
"You wrote: 'He represented on Earth the actual source of the reflection of the spiritual stream he mentioned.' I imagine by 'actual' you mean genuine, authentic... (?)"

Jean-Marc writes:
I simply meant that there would be no reflection of the spiritual stream at all [i.e., no anthroposophical movement on Earth at all!] --- if Rudolf Steiner had not initiated it, if he had not brought the initial impelling force. He was the *Prime Mover*, as it were... ;-)

Carol wrote:
"(note, the exceptional phenomenon of a stigmatized individual for the first time is attached to a spiritual impulse which is not Catholic)"

Jean-Marc writes:
As we say in France, I sure would like to be a little mouse (I'm Jerry! :-) in Rome --- because I'm wondering what kind of buzz this event is getting behind closed doors, in the Vatican.
But I'm not so sure it's such *a grand event* within the Anthro world... After all, the crucifixion is --- Death, and the blood pouring from the wounds are [Steiner dixit] an expression of the excess of selfishness, an expression of the excess of egoism. I'm not talking about JvH as a person, of course. Gee, I thought Anthroposophy was mainly concerned with the Resurrection...

Carol wrote:
"This physical malaise of which he mentioned could indicate that in the months leading up to Christmas of that year, Steiner's soul was becoming ever increasingly sensitive to 'spirit dynamics' in their more specific form and that this sensitivity was reflecting itself within his physical organism."

Jean-Marc writes:
One should bear in mind that the First Goetheanum was burned to ashes... And an important occult consequence of this tragedy (it seems to me) was that --- since January 1923 [he wrote this to Marie Steiner] --- Rudolf Steiner's supersensible bodies were no longer completely connected to his physical body...

Carol wrote:
"As for Rudolf Steiner the man, individual, mystic, teacher, founder of an esoteric movement and Society, representing MAYA, I'm confused."

Jean-Marc writes:
I was merely alluding to the fact that, at least in my eyes, the *Prime Mover* of the Anthroposophical Movement could not have been --- simply and solely human.

J-M
---------------------------------------------------------------------------------------------
--- In anthroposophy@yahoogroups.com, "carol" <organicethics@...> wrote:
>
>
> Jean Marc, this might seem silly but I translated the French excerpt
> taking the word and sense, while disregarding any form appropriate to
> the english language:
>
> "Both are none other than one as a result that I ( through choise?) took
> on the presidency of the society- the movement (as such and in a context
> of rapport) became none other than one with her(the society)."
>
> You wrote: "He represented on Earth the actual source of the reflection
> of the spiritual stream he mentioned."
>
> I imagine by 'actual' you mean genuine, authentic... (?)
>
> When you wrote that Steiner was "the incarnation of the movement itself
> as it were; would that also imply, incarnation 'for, in the service' of
> the movement/ impulse?
>
> You wrote: "From the Christmas Conference on, the society was supposed
> to *practice (to do) Anthroposophy* --- instead of merely managing the
> teachings!"
>
> I have the concept of 'express the impulse of' as well, in somewhat of a
> similar quality which for some time appeared within the now fading
> spirit impulse bestowed onto and attached to the Catholic Church. This
> impulse 'expressed' itself quite vividly throughout a period of 'world
> becoming' through it's follower's reflections, actions etc and decisions
> tied to the growth of it's intitution.
>
> (note, the exeptional phenomena of a stigmatized individual for the
> first time is attached to a spiritual impulse which is not Catholic)
>
> And it's funny how earlier today I reflected on an idea which you
> presented in your last post concerning this topic and to which you now
> add on to it with a desciption of "the little Dragons inoculate
> Anthroposophia with their mortal intellectual venom.."
>
> To the original idea in question ("the Body [i.e., the AS as an earthly
> *administrative* body] was utterly unfit (or unwilling) to accommodate
> the living Spirit, and rejected it (!) --- the AS thus being born
> (founded anew) as a sickly retarded child...[even perhaps as a nearly
> stillborn child]. "), you now have this to add, but attached to it is
> now a new question about the 9 months running up to the the Christmas
> Conference.
>
> You wrote..."
>
> My point about Steiner 'being human after all', would allow for us to
> accept that he should have been free to express any physical discomfort
> which he would have experienced as a result of the quality of his soul's
> gaze into the 'spiritual dynamics' which were already in place within
> the Society prior the the 'Christmas Conference event'. This physical
> malaise of which he mentioned could indicate that in the months leading
> up to Christmas of that year, Steiner's soul was becoming ever
> increasingly sensitive to 'spirit dynamics' in their more specific form
> and that this sensitivity was reflecting itself within his physical
> organism.
>
> We know through what Steiner mentioned vis a vis Christmas/Easter times
> in reference to spirit impulses, that his becoming president of the
> Society at the time of the Christmas Conference, in addition to what he
> further explained about it- indicate that this event, seeing as it
> involved HIM being one aspect at the center of it's importance, must
> have initiated in HIM additional supersensible forces and/or a deeper
> sense of responsibility towards humankind's spiritual direction.
>
> And so, this joins the idea which you put forward in a previous post
> which stipulates that there may not have been an outright crime of
> 'poisining' but instead, that Steiner's physical degeneration may have
> been a direct result of him becoming somewhat DIRECTOR of an esoteric
> Society, and as such, he humanly found himself placed in a position
> which allowed him to be more exposed to 'receive' Ahrimanic led attacks-
> of which their force would have been cultivated within and originated
> from within the very souls of members of the Society; he would have had
> to assume the collective force of these sub strata attacks along with
> what was already familiar to him, ex. the antipathic spirit forces
> lurking within the machinations of outward society.
>
> The 9 month period which caracterizes the amount of time which his
> soul/body sustained his renewed earthly 'responsibilities' could well be
> considered significant. I agree.
>
> As for Rudolf Steiner the man, individual, mystic, teacher, founder of
> an esoteric movement and Society, representing MAYA, I'm confused.
>
> Thanks for the interest, Carol.
>
> --- In anthroposophy@yahoogroups.com, "jmn36210" jmnguyen@ wrote:
> >
> > Carol wrote: !"
> >
> ------------------------------------------------------------------------\
> \
> > ------- Let me quote Rudolf Steiner (in French :-) from GA 240 [Karmic
> > Relationships, VI - not available online]: "Les deux ne font qu'un:
> > *du fait que* je suis moi-même devenu le président de la
> > société, le mouvement ne fait plus qu'un avec elle." (Editions
> > Anthroposophiques Romandes, 1986) [Sie sind beide eins: Denn damit,
> > daß ich selber Vorsitzender der Gesellschaft geworden bin, ist die
> > anthroposophische Bewegung eins geworden mit der Anthroposophischen
> > Gesellschaft.] [The two have merged into one: *due to the fact that* I
> > became president of the society, the movement and the society have
> > merged into one.] How does Rudolf Steiner characterize the
> > anthroposophical movement? "...the anthroposophical movement, which
> > represents on Earth the reflection of a spiritual stream..." (August
> 12,
> > 1924). That's my point. According to Steiner himself: until the
> > Christmas Conference, the society was merely *managing* [gérer] the
> > anthroposophical teachings that he, the teacher, brought to the
> society
> > --- from the outside (he wasn't a member of the Anthroposophical
> > Society...). From the Christmas Conference on, the society was
> supposed
> > to *practice (to do) Anthroposophy* --- instead of merely managing the
> > teachings! Steiner was the earthly focal point of a spiritual lens as
> it
> > were through which spiritual light was pouring down over Mankind; he
> was
> > an interface at the center of a *lemniscate* so to speak, the point of
> > contact and conscious communication between the spiritual world, the
> > spiritual beings, the spiritual impulses and intentions, and the
> > physical world with earthly Mankind imprisoned in the Satanic delusion
> > of materialism. He represented on Earth the actual source of the
> > reflection of the spiritual stream he mentioned. Anthroposophical
> > spiritual science is not merely (spiritual) knowledge; it is *living*
> > spiritual wisdom --- and this is the real reason why it is *so often*
> > caricatured by Anthros, who --- intellectually assassinate
> > Anthroposophia! [i.e., the little Dragons inoculate Anthroposophia
> with
> > their mortal intellectual venom instead of quenching their thirst with
> a
> > living *elixir*(Steiner's own word) and curing their moribund psyche
> > with the only true antidote available] Rudolf Steiner was the earthly
> > source, bearer and messenger of the *living Spirit* - the incarnation
> of
> > the movement itself as it were; and this is why the two merged into
> one
> > when he became president of the society --- i.e., when the living
> Spirit
> > was embodied in the Society. Thus the drastic change that was
> expected.
> > But, as I said, I've always felt that Steiner's immediate poisoning
> was
> > extremely symptomatic of a bodily *rejection*... The fact is - that
> > Steiner was not able to see his last lecture, his last address,
> through
> > to completion (28 September 1924) - and that is exactly 9 months [a
> > human gestation period] after the Christmas Conference... Did anything
> > happen exactly 9 months before the Christmas Conference (which is
> > obviously an essential point in time, a landmark event in spiritual
> > history)?..." [Für die
> > Gesellschaft habe ich eigentlich nur zu sagen, daß ich am liebsten
> > nichts mehr mit ihr zu tun haben möchte. Alles, was
> derenVorstände
> > tun, widert mich an.] You see, *sacrifice* has many meanings... Yes,
> > He was human, wonderfully human... But I believe this is a half-truth,
> a
> > partial truth. And, since Steiner insisted several times that
> > half-truths are much more dangerous than complete untruths, let me add
> > another part of the truth, in my own eyes at least: RUDOLF
> > STEINER = MAYA (The sanskrit word maya means *illusion*, as you know.)
> > Jean-Marc
> >
> ------------------------------------------------------------------------\
> \
> > --------------
> > --- In anthroposophy@yahoogroups.com, "carol" organicethics@ wrote:
> > >
> > >
> > > Jean Marc wrote: ..."
> > >
> > > This is very interesting. Apparently Judith von Halle received her
> > > Stigmata near Easter of (2004)? I'm sorry, I have limited memory
> > range.
> > > At any rate, I did register that it occured surrounding the Easter
> > > celebration of likely 2004, perhap 05. This might likely indicate a
> > > social seed/impulse signature being resurected with more spirit
> depth
> > > using the soul forces held by the incarnated Ms Halle.
> > >
> > > Jean Marc continues: ."
> > >
> > > Yes, I believe that Theosophical spiritual development is
> fundamental
> > to
> > > being able to effectively and freely act as a given medium (having
> > > oneself ignited creative spirit forces), who can PERMIT THE PASSAGE
> of
> > > the spirit forces which animate specific social seeds/impulses to
> > > traverse from out of the Higher Devachanic realm into the Earthly
> > > Etheric.
> > >
> > > JM: !
> > >
> > > JM: ."
> > >
> > > That's quite possible. I don't think anyone of us really knows
> exactly
> > > what kind of infiltration and betrayal scenario took place, however,
> > I'm
> > > sure that some people at the time of the crime may have been able to
> > > gather some clues...
> > >
> > > JM: ."
> > >
> > > It's interesting how once Steiner had consciously bound his Karma to
> > the
> > > earthly face of Anthroposophy (Movement + Society) and once a
> > stricking
> > > (fatal) occult attach has played itself out, he witnessed 'the
> > Spiritual
> > > Gates open wide'. I'm wondering if the Spiritual Gates opened wide
> in
> > > the general sense, or specifically for his own reference. I'm
> guessing
> > > that it was the former.
> > >
> > > Nice post, Carol.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --- In anthroposophy@yahoogroups.com, "jmn36210" jmnguyen@ wrote:
> > > >
> > > >
> > > >
> > > > Robert wrote: Jean-Marc writes: (First of all, I'm sorry
> for
> > > > the delay...) Robert, no one (!) is assuming --- "that we have no
> > > > freedom and power *whatsoever* within and upon the
> socio-historical
> > > > process"(sic) ... Basically, I believe you've misinterpreted what
> my
> > > > *frogman* example was supposed to illustrate: from the occult
> > > > perspective of Steiner's revelations regarding the historical
> > becoming
> > > > of Mankind, the effectiveness of all *social* seeds or actions ---
> > > > whether free or unfree! --- endures for three generations, for
> three
> > > > 33-year cycles, i.e., for a century. Period. Rudolf Steiner never
> > > > alludes to the necessity of discriminating between free and unfree
> > > > actions in the lecture (December 26, 1917) --- imo, because it is
> > > > entirely irrelevant from that particular standpoint. In other
> words,
> > > > whether you are "making a pair of shoes" (Steiner's example) as an
> > > > enlightened *PoF* enthusiast, or as a 10-year-old Asian child
> > working
> > > 16
> > > > hours a day for a big Western Corporation making handsome profits
> > > thanks
> > > > to modern slave labor, the occult phenomenon [the *objective
> > > phenomenon*
> > > > I was alluding to in my previous post...] Steiner is drawing our
> > > > attention to is essentially the same: one century is the occult
> > > > lifetime of any [free or unfree, minute or gigantic] *social*
> > impulse,
> > > > as far as its effectiveness is concerned in the historical
> becoming
> > of
> > > > Mankind. Beyond the one century deadline, the initial *momentum*
> of
> > > the
> > > > original impulse or social seed is no longer effective; therefore,
> > the
> > > > social seeds must be sown periodically. Hence perhaps the constant
> > > > return (every century!) of the great spiritual figures I
> mentioned.
> > > > Unfortunately, the lectures are not available online --- but
>. (Cf. his letter to Marie von Sivers from
> > > January
> > > > 9, 1905). Robert wrote: Jean-Marc writes: Of course, but isn't Adolf Hitler ---
> emblematic?
> > > > The fact is that for many years I've been wondering if AH wasn't
> > some
> > > > sort of macrocosmic projection of the "Lesser Guardian of the
> > > Threshold"
> > > > --- i.e., a personification of the karmic accumulated result of
> the
> > > > spiritual evolution of Humanity up to the time of the *Second
> > > Coming*...
> > > > A pathetic caricature which happens to be a true image of our own
> > > > frantic inner little *Führer*, of our own inner little *Guide*,
> > of
> > > > our own (sadly repressed :-) egotism and egoism?... Perhaps an
> acid
> > > test
> > > > for too sleepy people? :-) And, from this standpoint of mine,
> > "Little
> > > > Boy" would be a demonic counter-image of the "Greater Guardian of
> > the
> > > > Threshold"... As I said, just wondering... Jean-Marc
> > > >
> > > >
> > >
> >
> ------------------------------------------------------------------------\
> \
> > \
> > > \
> > > > ------------------
> > > > --- In anthroposophy@yahoogroups.com, Robert Mason
> robertsmason_99@
> > > > wrote:
> > > > >
> > > > > To Jean-Marc, who wrote:
> > > > >
> > > > > >>Whether a frogman falls into a pond --- on
> > > > > his own initiative, in the daylight, or
> > > > > inadvertently, sleepwalking by moonlight ---
> > > > > the frogman's state of consciousness at the
> > > > > moment of impact will have no influence
> > > > > whatsoever on the objective phenomenon: the
> > > > > surface of the water will be disturbed for a
> > > > > certain time [only] according to the laws of
> > > > > physics [fluid mechanics].<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > Well, yes; but the physics of the water is not
> > > > > the same as the "physics" of free human
> > > > > spirits. (But, to get really picky, I note
> > > > > that you are assuming no psycho-kinetic effects
> > > > > on the water from the frogman. Physics
> > > > > nowadays cannot safely make such assumptions on
> > > > > the quantum-mechanical level, and maybe not
> > > > > even on the "macro" level.)
> > > > >
> > > > > Jean-Marc wrote:
> > > > >
> > > > > >>Likewise, it seems that the phenomena Rudolf
> > > > > Steiner is referring to [the social *seeds* of
> > > > > all sorts] comply with a spiritual law that we
> > > > > are usually not aware of --- and that, from
> > > > > this specific perspective, our state of
> > > > > consciousness has no bearing on the phenomena
> > > > > and their duration whatsoever. . . . .<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > >-M wrote:
> > > > >
> > > > > >>In other words, Steiner is not saying that "a
> > > > > pair of shoes" will last 33 years, let alone a
> > > > > whole century :-) (Nor is he implying that
> > > > > philosophical or anthroposophical books will
> > > > > lose all spiritual significance beyond a
> > > > > hundred-year deadline.) He's saying, it seems
> > > > > to me, that the spiritual impulses that bring
> > > > > about the social [read Christic...] *seeds*
> > > > > need to be renewed every 100 years.
> > > > >
> > > > > >>From this perspective --- is it really
> > > > > surprising that [according to Steiner]
> > > > > Christian Rosenkreutz, and the Bodhisattva, and
> > > > > Master Jesus, are all reincarnated --- in every
> > > > > century?<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > OK; I wouldn't argue too much with that
> > > > > thought. The 100-year "law" is effective and
> > > > > is "used" by the Guiding Spirits because large-
> > > > > scale socio-historical evolution is lived-out
> > > > > by mankind almost wholly in an unfree, "dream"
> > > > > consciousness.
> > > > >
> > > > > J-M wrote:
> > > > >
> > > > > >>Assuming that some of you do enjoy
> > > > > "scratching your heads":
> > > > >
> > > > > >>"So we see a Being passing through history
> > > > > for whom a century is a year; evolving in
> > > > > accordance with Sun-laws though one is not
> > > > > aware of it." [GA 161 - January 10, 1915]<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > This "Being" is, as RS explains, the Being of
> > > > > Philosophy. And, I think, this Being evolves
> > > > > in accordance with "laws" (from the "outside").
> > > > > Why? -- Because even She is not a wholly free
> > > > > being; She is a created Being as well. RS
> > > > > perhaps implicitly alludes to this fact in the
> > > > > very next paragraph:
> > > > >
> > > > > "And then only there lies further back another
> > > > > Being still more supersensible."
> > > > >
> > > > > J-M wrote:
> > > > >
> > > > > >>From this standpoint, let's consider the
> > > > > Mystery of Golgotha as the central event in the
> > > > > spiritual / historical evolution of Mankind.
> > > > > What does that mean? It means that Mankind was
> > > > > born again, was born anew --- thanks to the
> > > > > Mystery of Golgotha. Or, perhaps in the strict
> > > > > spiritual sense: *Humanity* [i.e., self-
> > > > > consciousness!] was actually born on the
> > > > > Golgotha!...
> > > > >
> > > > > >>Well, assuming that this new Humanity, this
> > > > > Christ-bearing Humanity, this Sun-bearing
> > > > > Mankind, evolves in accordance with Sun-laws --
> > > > > - then the 20th and 21st centuries would
> > > > > witness the emancipation of Mankind, the birth
> > > > > of its *I*, the birth of the *Ego-body* of the
> > > > > new Humanity...<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > You might be interested: Terry Boardman has
> > > > > written on this theme; for instance:
> > > > >
> > > > > "The 21st century . . . .." (from "The
> > > > > China - America Relationship in the 21st Century
> > > > > and the Spectres of 1776 (2)")
> > > > > <>
> > > > >
> > > > > J-M wrote:
> > > > >
> > > > > >>In this regard, when did this new Humanity
> > > > > reach its 20th year? On its 19th birthday, in
> > > > > the year 1933! [date of birth = 33 AD]
> > > > >
> > > > > >>Yes, this is exactly the time when the
> > > > > *Second Coming* of Christ, when the
> > > > > reappearance of *Christ in the Etheric* was
> > > > > expected to begin . . . .<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > Why 1933? -- Willy Sucher suggested that two
> > > > > 950-year cycles of the precession of the nodes
> > > > > of Saturn were needed; one for expansion and
> > > > > one for contraction of the ethereal body of
> > > > > Christ.
> > > > >
> > > > > Gennady Bondarev tried to approach an
> > > > > understanding of the "Second Coming" through a
> > > > > Goethean perception of metamorphosis. He
> > > > > considers the Seven Deeds of Christ; RS told us
> > > > > of the three pre-earthly deeds; the Incarnation
> > > > > was the fourth. The fifth was the Ascension:
> > > > >
> > > > > "The Mystery of Golgatha was the stage of the
> > > > > Holy Divine metamorphosis through which God, as
> > > > > He passed through them, united with all the
> > > > > kingdoms of nature. The act of this union is
> > > > > given to religious consciousness I the festival
> > > > > of 'Ascension'. . . ."
> > > > >
> > > > > "The process of the Ascension lasted exactly 19
> > > > > centuries -- from 33 to 1933 AD. This is the
> > > > > length of time it took for God to unite fully
> > > > > (though not in the body of Jesus) with the
> > > > > world of the physical, i.e. of spatio-temporal
> > > > > being. (The number of time is 12; the number
> > > > > of life, of metamorphosis in the three-
> > > > > dimensional space is 7. Christ needed 3 years
> > > > > in order to unite with the body of Jesus; there
> > > > > rules in man the principle of the trinity.)
> > > > >
> > > > > "The world has now entered a stage of
> > > > > development that corresponds to Whitsun, the
> > > > > festival when the Holy Spirit descends to those
> > > > > human beings who possess and individual 'I" . .
> > > > > . ."
> > > > >
> > > > > "[The] seventh deed of Christ will allow man to
> > > > > partake in the body of resurrection. . . ."
> > > > >
> > > > > "Anthroposophy as the message of the Holy
> > > > > Spirit, the 'Comforter', the 'Spirit of truth'
> > > > > whom Christ Himself promised to send to us, and
> > > > > kept His promise, allows the human being to
> > > > > approach the experience of Christianity in the
> > > > > spirit of the Whitsun Festival." (from *The
> > > > > Crisis of Civilization*)
> > > > >
> > > > > J-M wrote:
> > > > >
> > > > > >> --- and also exactly the time when Adolf
> > > > > Hitler became chancellor. No wonder that the
> > > > > Sun Demon, according to Steiner, would manifest
> > > > > itself in the year 1933 . . . .<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > >-M wrote:
> > > > >
> > > > > >>Rudolf Steiner also said that shortly after
> > > > > the year 2000, some kind of law, not a law in
> > > > > the strict sense of the word, but something
> > > > > that will have a similar effect, will come from
> > > > > America: its aim will be to ban people --- from
> > > > > thinking *individually*! Anti-Christ in all his
> > > > > glory...<<
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > I have the quote, picked up somewhere on the
> > > > > WWW, probably from Rick Distasi:
> > > > >
> > > > > Steiner said: a beginning achieved in
> > > > > this direction of suppressing all individual
> > > > > thinking into pure materialistic thinking where
> > > > > one does not need to work upon the soul but on
> > > > > the basis of external experiments, and the
> > > > > human being is handled as if he were a
> > > > > machine...
> > > > > ...." (4 Apr. 1916, in:
> > > > > Things of Past and Present in the Spirit of
> > > > > Man, unpubl. typescript)
> > > > >
> > > > > J-M wrote:
> > > > >
> > > > > >>What is globalization? Notwithstanding the
> > > > > globalization of free market economy [and the
> > > > > Anglo- American (occult) intention to
> > > > > economically enslave a major part of the world
> > > > > and its population - which is nothing else but
> > > > > Black Magic on a nationalistic or even perhaps
> > > > > a racist level (the race of the omnipotent
> > > > > Lords and the race of the Slaves that will work
> > > > > to enrich their Lords)] --- it seems to me
> > > > > that, from a spiritual perspective,
> > > > > globalization is the symptom of Mankind
> > > > > becoming aware of itself as an entity, as a
> > > > > spiritual in-dividuality (i.e., the symptom of
> > > > > a [Christic] consciousness of spiritual unity,
> > > > > more or less intinctively rising above and
> > > > > beyond the differentiations from the past, such
> > > > > as peoples and races).
> > > > >
> > > > > Robert writes:
> > > > >
> > > > > OK; I pretty much go along with that, in
> > > > > general. Some kind of "globalization" must
> > > > > come, eventually, and it doesn't have to be a
> > > > > bad thing. It's "bad" when it is in the hands
> > > > > of the transnational mega-corporations and is
> > > > > enforced by the military power of the Shadow
> > > > > Government of the US and her allies, all in the
> > > > > service of the power-occultists. -- But in
> > > > > another sense, in a roundabout way, even this
> > > > > "bad" is good, assuming that the higher, good
> > > > > Gods are ultimately in control and allow the
> > > > > Adversarial Spirits and their minions to
> > > > > exercise power only in accordance within the
> > > > > overriding constraints of a wise and loving
> > > > > karma. "All things work together for good . .
> > > > > . ." said Paul.
> > > > >
> > > > > That thought might be somewhat comforting, but
> > > > > humanly it's kind of hard to feel much comfort
> > > > > if you're personally getting it in the neck
> > > > > from the rapacious, demonic economic
> > > > > oppressors. That roundabout "good" is really
> > > > > doing it the hard way; I'd rather do it the
> > > > > easy way.
> > > > >
> > > > > Robert M
> > > > >
> > > > >
> > > > >
> > > > > __________________________________________________
> > > > > Do You Yahoo!?
> > > > > Tired of spam? Yahoo! Mail has the best spam protection around
> > > > >
> > > > >
> > > >
> > >
> >
> | https://groups.yahoo.com/neo/groups/anthroposophy/conversations/topics/14689?l=1 | CC-MAIN-2015-35 | refinedweb | 6,495 | 56.59 |
python-dev Summary for 2006-08-01 through 2006-08-15
Contents
- Summaries
- Mixing str and unicode dict keys
- Rounding floats to ints
- Assigning to function calls
- PEP 357: Integer clipping and __index__
- OpenSSL and Windows binaries
- Type of range object members
- Distutils version number
- Dict containment and unhashable items
- Returning longs from __hash__()
- instancemethod builtin
- Unicode versions and unicodedata
- Elementtree and Namespaces
- Previous Summaries
- Skipped Threads
- Epilogue
[The HTML version of this Summary is available at]
Summaries
Mixing str and unicode dict keys
Ralf Schmitt noted that in Python head, inserting str and unicode keys to the same dictionary would sometimes raise UnicodeDecodeErrors:
>>> d = {} >>> d[u'm\xe1s'] = 1 >>> d['m\xe1s'] = 1 Traceback (most recent call last): ... UnicodeDecodeError: 'ascii' codec can't decode byte 0xe1 in position 1: ordinal not in range(128)
This error showed up as a result of Armin Rigo's patch to stop dict lookup from hiding exceptions, which meant that the UnicodeDecodeError raised when a str object is compared to a non-ASCII unicode object was no longer silenced. In the end, people agreed that UnicodeDecodeError should not be raised for equality comparisons, and in general, __eq__() methods should not raise exceptions. But comparing str and unicode objects is often a programming error, so in addition to just returning False, equality comparisons on str and non-ASCII unicode now issues a warning with the UnicodeDecodeError message.
Contributing threads:
Rounding floats to ints
Bob Ippolito pointed out a long-standing bug in the struct module where floats were automatically converted to ints. Michael Urman showed a simple case that would provoke an exception if the bug were fixed:
pack('>H', round(value * 32768))
The source of this bug is the expectation that round() returns an int, when it actually returns a float. There was then some discussion about splitting the round functionality into two functions: __builtin__.round() which would round floats to ints, and math.round() which would round floats to floats. There was also some discussion about the optional argument to round() which currently specifies the number of decimal places to round to -- a number of folks felt that it was a mistake to round to decimal places when a float can only truly reflect binary places.
In the end, there were no definite conclusions about the future of round(), but it seemed like the discussion might be resumed on the Python 3000 list.
Contributing threads:
Assigning to function calls
Neal Becker proposed that code by X() += 2 be allowed so that you could call __iadd__ on objects immediately after creation. People pointed out that allowing augmented assignment is misleading when no assignment can occur, and it would be better just to call the method directly, e.g. X().__iadd__(2).
Contributing threads:
PEP 357: Integer clipping and __index__
After some further discussion on the __index__ issue of last fortnight, Travis E. Oliphant proposed a patch for __index__ that introduced three new C API functions:
- PyIndex_Check(obj) -- checks for nb_index
- PyObject* PyNumber_Index(obj) -- calls nb_index if possible or raises a TypeError
- Py_ssize_t PyNumber_AsSsize_t(obj, err) -- converts the object to a Py_ssize_t, raising err on overflow
After a few minor edits, this patch was checked in.
Contributing threads:
- Bad interaction of __index__ and sequence repeat
- __index__ clipping
- Fwd: [Python-checkins] r51236 - in python/trunk: Doc/api/abstract.tex Include/abstract.h Include/object.h Lib/test/test_index.py Misc/NEWS Modules/arraymodule.c Modules/mmapmodule.c Modules/operator.c Objects/abstract.c Objects/classobject.c Objects/
- Fwd: [Python-checkins] r51236 - in python/trunk: Doc/api/abstract.tex Include/abstract.h Include/object.h Lib/test/test_index.py Misc/NEWS Modules/arraymodule.c Modules/mmapmodule.c Modules/operator.c Objects/abstract.c Objects/class
OpenSSL and Windows binaries
Jim Jewett pointed out that a default build of OpenSSL includes the patented IDEA cipher, and asked whether that needed to be kept out of the Windows binary versions. There was some concern about dropping a feature, but Gregory P. Smith pointed out that IDEA isn't directly exposed to any Python user, and suggested that IDEA should never be required by any sane SSL connection. Martin v. Löwis promised to look into making the change.
Update: The change was checked in before 2.5 was released.
Contributing threads:
Type of range object members
Alexander Belopolsky proposed making the members of the range() object use Py_ssize_t instead of C longs. Guido indicated that this was basically wasted effort -- in the long run, the members should be PyObject* so that they can handle Python longs correctly, so converting them to Py_ssize_t would be an intermediate step that wouldn't help in the transition.
There was then some discussion about the int and long types in Python 3000, with Guido suggesting two separate implementations that would be mostly hidden at the Python level.
Contributing thread:
Distutils version number
A user noted that Python 2.4.3 shipped with distutils 2.4.1 and the version number of distutils in the repository was only 2.4.0 and requested that Python 2.5 include the newer distutils. In fact, the newest distutils was already the one in the repository but the version number had not been appropriately bumped. For a short while, the distutils number was automatically generated from the Python one, but Marc-Andre Lemburg volunteered to manually bump it so that it would be easier to use the SVN distutils with a different Python version.
Contributing threads:
Dict containment and unhashable items
tomer filiba suggested that dict.__contain__ should return False instead of raising a TypeError in situations like:
>>> a={1:2, 3:4} >>> [] in a Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: list objects are unhashable
Guido suggested that swallowing the TypeError here would be a mistake as it would also swallow any TypeErrors produced by faulty __hash__() methods.
Contributing threads:
Returning longs from __hash__()
Armin Rigo pointed out that Python 2.5's change that allows id() to return ints or longs would have caused some breakage for custom hash functions like:
def __hash__(self): return id(self)
Though it has long been documented that the result of id() is not suitable as a hash value, code like this is apparently common. So Martin v. Löwis and Armin arranged for PyLong_Type.tp_hash to be called in the code for hash().
Contributing thread:
instancemethod builtin
Nick Coghlan suggested adding an instancemethod() builtin along the lines of staticmethod() and classmethod() which would allow arbitrary callables to act more like functions. In particular, Nick was considering code like:
class C(object): method = some_callable
Currently, if some_callable did not define the __get__() method, C().method would not bind the C instance as the first argument. By introducing instancemethod(), this problem could be solved like:
class C(object): method = instancemethod(some_callable)
There wasn't much of a reaction one way or another, so it looked like the idea would at least temporarily be shelved.
Contributing thread:
Unicode versions and unicodedata
Armin Ronacher noted that Python 2.5 implements Unicode 4.1 but while a ucd_3_2_0 object is available (implementing Unicode 3.2), no ucd_4_1_0 object is available. Martin v. Löwis explained that the ucd_3_2_0 object is only available because IDNA needs it, and that there are no current plans to expose any other Unicode versions (and that ucd_3_2_0 may go away when IDNA no longer needs it).
Contributing thread:
Elementtree and Namespaces
Elements (and attributes) can be associated with a namespace, such as
The xmlns attribute creates a "prefix" (alias) for a namespace, so that you can abbreviate the above as
xml:id
ElementTree treats the prefix as a just an aid to human readers, and creates its own abbreviations that are consistent throughout a document. Some tools (including w3 recommendations for canonicalization) treat the prefix itself as meaningful.
Elementtree may support this in version 1.3, but it wasn't going to be there in time for 2.5, and it wasn't judged important enough to keep etree out of the release.
If you need it sooner, then supports the etree API and does retain prefixes.
Contributing thread:
[Thanks to Jim Jewett for this summary.]
Skipped Threads
- clock_gettime() vs. gettimeofday()?
- Strange memo behavior from cPickle
- internal weakref API should be Py_ssize_t?
- Weekly Python Patch/Bug Summary
- Releasemanager, please approve #1532975
- FW: using globals
- TRUNK FREEZE 2006-07-03, 00:00 UTC for 2.5b3
- segmentation fault in Python 2.5b3 (trunk:51066)
- using globals
- uuid module - byte order issue
- RELEASED Python 2.5 (beta 3)
- TRUNK is UNFROZEN
- 2.5 status
- Python 2.5b3 and AIX 4.3 - It Works
- More tracker demos online
- need an SSH key removed
- BZ2File.writelines should raise more meaningful exceptions
- test_mailbox on Cygwin
- cgi.FieldStorage DOS (sf bug #1112549)
- 2.5b3, commit r46372 regressed PEP 302 machinery (sf not letting me post)
- free(): invalid pointer
- should i put this on the bug tracker ?
- Is this a bug?
- httplib and bad response chunking
- cgi DoS attack
- DRAFT: python-dev summary for 2006-07-01 to 2006-07-15
- SimpleXMLWriter missing from elementtree
- DRAFT: python-dev summary for 2006-07-16 to 2006-07-31
- Is module clearing still necessary? [Re: Is this a bug?]
- PyThreadState_SetAsyncExc bug?
- Errors after running make test
- What is the status of file.readinto?
- Recent logging spew
- [Python-3000] Python 2.5 release schedule (was: threading, part 2)
- test_socketserver failure on cygwin
- ANN: byteplay - a bytecode assembler/disassembler
- Arlington VA sprint on Sept. 23
- IDLE patches - bugfix or not?
- Four issue trackers submitted for Infrastructue Committee's tracker search
Epilogue
This is a summary of traffic on the python-dev mailing list from August 01, 2006 through August 10. | http://www.python.org/dev/summary/2006-08-01_2006-08-15/ | crawl-002 | refinedweb | 1,613 | 52.49 |
How to Import Excel Data Into Python Scripts Using Pandas
Microsoft Excel is the most widely-used spreadsheet software in the world, and for good reason: the user-friendly interface and powerful built-in tools make it simple to work with data.
But if you want to do more advanced data processing, you’ll need to go beyond Excel’s capabilities and start using a scripting/programming language like Python. Rather than manually copying your data into databases, here’s a quick tutorial on how to load your Excel data into Python using Pandas.
Note: If you’ve never used Python before, this tutorial may be a tad difficult. We recommend starting with these websites for learning Python The 5 Best Websites to Learn Python Programming Want to learn Python programming? Here are the best ways to learn Python online, many of which are entirely free. Read More and these basic Python examples to get you started 10 Basic Python Examples That Will Help You Learn Fast This article of basic python examples is for those who already have some programming experience and simply want to transition to Python as quickly as possible. Read More .
What Is Pandas?
Python Data Analysis Library (“Pandas”) is an open-source library for the Python programming language that’s used for data analysis and data manipulation.
Pandas loads data into Python objects known as Dataframes, which store data in rows and columns just like a traditional database. Once a Dataframe is created it can be manipulated using Python, opening up a world of possibilities.
Installing Pandas
Note: You must have Python 2.7 or later to install Pandas.
To begin working with Pandas on your machine you will need to import the Pandas library. If you’re in search of a heavyweight solution you can download the Anaconda Python Distribution, which has Pandas built-in. If you don’t have a use for Anaconda, Pandas is simple to install in your terminal.
Pandas is a PyPI package, which means you can install using PIP for Python via the command line. Modern Mac systems come with PIP. For other Windows, Linux, and older systems it’s easy to learn how to install PIP for Python How to Install Python PIP on Windows, Mac, and Linux Many Python developers rely on a tool called PIP for Python to streamline development. Here's how to install Python PIP. Read More .
Once you’ve opened your terminal, the latest version of Pandas can be installed using the command:
>> pip install pandas
Pandas also requires the NumPy library, let’s also install this on the command line:
>> pip install numpy
You now have Pandas installed and ready to create your first DataFrame!
Prepping the Excel Data
For this example, let’s use a sample data set: an Excel workbook titled Cars.xlsx.
This data set displays the make, model, color, and year of cars entered into the table. The table is displayed as an Excel range. Pandas is smart enough to read the data appropriately.
This workbook is saved to the Desktop directory, here is the file path used:
/Users/grant/Desktop/Cars.xlsx
You will need to know the file path of the workbook to utilize Pandas. Let’s begin by opening up Visual Studio Code to write the script. If you don’t have a text editor, we recommend either Visual Studio Code or Atom Editor Visual Studio Code vs. Atom: Which Text Editor Is Right for You? Looking for a free and open-source code editor? Visual Studio Code and Atom are the two strongest candidates. Read More .
Writing the Python Script
Now that you have your text editor of choice, the real fun begins. We’re going to bring together Python and our Cars workbook to create a Pandas DataFrame.
Importing the Python Libraries
Open your text editor and create a new Python file. Let’s call it Script.py.
In order to work with Pandas in your script, you will need to import it into your code. This is done with one line of code:
import pandas as pd
Here we are loading the Pandas library and attaching it to a variable “pd”. You can use any name you would like, we are using “pd” as short for Pandas.
To work with Excel using Pandas, you need an additional object named ExcelFile. ExcelFile is built into the Pandas ecosystem, so you import directly from Pandas:
from pandas import ExcelFile
Working With the File Path
In order to give Pandas access to your workbook, you need to direct your script to the location of the file. The easiest way to do this is by providing your script with the full path to the workbook.
Recall our path in this example: /Users/grant/Desktop/Cars.xlsx
You will need this file path referenced in your script to extract the data. Rather than referencing the path inside of the Read_Excel function, keep code clean by storing the path in a variable:
Cars_Path = '/Users/grant/Desktop/Cars.xlsx'
You are now ready to extract the data using a Pandas function!
Extract Excel Data Using Pandas.Read_Excel()
With Pandas imported and your path variable set, you can now utilize functions in the Pandas object to accomplish our task.
The function you will need to use is appropriately named Read_Excel. The Read_Excel function takes the file path of an Excel Workbook and returns a DataFrame object with the contents of the Workbook. Pandas codes this function as:
pandas.read_excel(path)
The “path” argument is going to be the path to our Cars.xlsx workbook, and we have already set the path string to the variable Cars_Path.
You’re ready to create the DataFrame object! Let’s put it all together and set the DataFrame object to a variable named “DF”:
DF = pd.read_excel(Cars_Path)
Lastly, you want to view the DataFrame so let’s print the result. Add a print statement to the end of your script, using the DataFrame variable as the argument:
print(DF)
Time to run the script in your terminal!
Running the Python Script
Open your terminal or command line, and navigate to the directory which houses your script. In this case, I have “Script.py” located on the desktop. To execute the script, use the python command followed by the script file:
Python will pull the data from “Cars.xlsx” into your new DataFrame, and print the DataFrame to the terminal!
A Closer Look at the DataFrame Object
At first glance, the DataFrame looks very similar to a regular Excel table. Pandas DataFrames are easy to interpret as a result.
Your headers are labeled at the top of the data set, and Python has filled in the rows with all your information read from the “Cars.xlsx” workbook.
Notice the leftmost column, an index starting at 0 and numbering the columns. Pandas will apply this index to your DataFrame by default, which can be useful in some cases. If you do not want this index generated, you can add an additional argument into your code:
DF = pd.read_excel(Cars_Path, index=False)
Setting the argument “index” to False will remove the index column, leaving you with just your Excel data.
Doing More With Python
Now that you have the ability to read data from Excel worksheets, you can apply Python programming any way you choose. Working with Pandas is a simple way for experienced Python programmers to work with data stored in Excel Workbooks.
The ease with which Python can be used to analyze and manipulate data is one of the many reasons why Python is the programming language of the future 6 Reasons Why Python Is the Programming Language of the Future Want to learn or expand your programming skills? Here's why Python is the best programming language to learn this year. Read More .
Image Credit: Rawpixel/Depositphotos
Related topics: Data Analysis, Microsoft Excel, Python, Scripting.
Affiliate Disclosure: By buying the products we recommend, you help keep the site alive. Read more.
Index=False is not working
Still we can see the
0
1
2
.
.
7 in results | https://www.makeuseof.com/tag/import-excel-data-python-scripts-pandas/ | CC-MAIN-2020-29 | refinedweb | 1,354 | 62.88 |
Oh, looks like now I see why on adults dataset this difference in cont and cats is not so evident. Lets look at every cat feature, each of these can be just 2-3 different values (especially considering that age is cont here) and are represented in the model with only 2-3 float (as embedding size is
def emb_sz_rule(n_cat:int)->int: return min(600, round(1.6 * n_cat**0.56)) where n_cat is caardinality ). And in Rossmann data Store feature for ex. can be one of thousands different values and it is represented with 70+ floats per one storeName/Id
As for your notebook it’s hard for me to judge as I’ve used some of different approach, but everything looks fine. I’m not thogh sure that passing procs as a parameter works as it seems to. I remember that I had to recreate procs one by one, but I don’t really remember why. Maybe it was a bug I had to overcome, or maybe I just couldnot find the right way | https://forums.fast.ai/t/feature-importance-in-deep-learning/42026/54 | CC-MAIN-2020-05 | refinedweb | 177 | 67.79 |
Programming
Need E-Books - JSP-Servlet
Need E-Books Please, I need an e-book tutorial on PHP.
Please kindly send it to me.
Thanks
Programming
Thanks
import java.io.FileOutputStream;
import java.io.PrintStream;
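On their own these two imports do nothing; a minimal, self-contained sketch of how they are typically combined follows. The class, method, and file names are invented for the example and are not from the original post:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class PrintStreamDemo {
    // Wrap a FileOutputStream in a PrintStream to get the print()/println()
    // conveniences when writing text to a file, then read the line back.
    static String roundTrip() {
        try {
            Path tmp = Files.createTempFile("demo", ".txt");
            try (PrintStream out = new PrintStream(new FileOutputStream(tmp.toFile()))) {
                out.println("Hello from PrintStream");
            }
            return Files.readAllLines(tmp).get(0);
        } catch (IOException e) {
            return "error: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints "Hello from PrintStream"
    }
}
```

The try-with-resources block ensures the stream is closed even if the write fails.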
EJB Books
EJB Books
Professional EJB Books
Written... the genius is in the details - more so than with most programming topics
details-more so than with most programming topics
Java Script Programming Books
Java Script
Programming Books
... tools such as HTML have been joined by true programming
languages-including JavaScript.Now don't let the word "programming" scare you. For many
Programming in JDBC and JSP - JSP-Servlet
Programming in JDBC and JSP Write a program using JDBC and JSP... :
Books List
S.No | Student Name | ...
Thanks
Java XML Books
A number of Java programming books are on the market. This presents potential Java book... Programming, published by Wrox, is one of the few XML books I have found which...
Java Programming
Java Programming Hi,
What is Java Programming? How I can learn Java Programming in one month?
Thanks
Java Reference Books
.
Java network programming books.... In this review, I'll examine a crop of books that want to be your Java network programming...
Java Reference
Books
Free PHP Books
Free PHP Books
PHP
5 Power Programming
In this book, PHP 5's co-creator and two... it is at now, and the reasons for programming with it. Towards the end of the chapter
thanks - JSP-Servlet
thanks thanks sir i am getting an output...Once again thanks for help
thanks - JSP-Servlet
thanks thanks sir its working
jsp programming
converting a pdf file to png format using pdftron JSP programming Hello,
I am trouble in converting a pdf file to png format using pdftron.
plz guide me, n
Java Beans Books
Java Beans Books
... be used in graphical programming environments, like Borland's JBuilder, or IBM's... writing any Java code
- in fact, without doing any programming at all
object oriented programming protocol
object oriented programming protocol What is object oriented programming protocol? Is it different from objective c protocol programming??
Thanks
Web Sphere Books
Programming Books
MC Press is proud to offer the widest selection...;
Enterprise
Java Programming With IBM WebSphere Books...
Web Sphere Books
error at programming
){
System.out.println(e);
}
}
}
For cookies,
Visit Here
Thanks
What is attribute oriented programming?
What is attribute oriented programming? Hi,
What is attribute oriented programming?
Thanks
thanks - Java Beginners
Top Programming Languages of 2013
Top Programming Languages of 2013 Hi,
What will be the Top Programming Languages of 2013?
Thanks
Hi,
I thing in the year 2013 top programming language will be:
C#
.NET
Java
C Programming Language
Java programming
Free JSP download Books
Free JSP download Books
Java Servlets and JSP
free download books... you can learn Java web programming using this book's fresh approach, real-world
Java Socket Programming
Java Socket Programming Hi,
What is Java Socket Programming? Can anyone give the examples of Java Socket Programming?
Thanks
Hi,
Please see the tutorial:Overview of Networking through JAVA
Thanks
Java Certification Books
Java Certification Books
... programming skills and provides the IT industry wth a standard to use when... readers for the CJPE by teaching them sound Java programming skills and covering
java programming problem - JDBC
java programming problem Hi,
Request you to provide the source code in Java for the following programming problem :
upload .csv file data... or raichallp@gmail.com
Thanks & Regards
Raichal
Free JSP Books
Free JSP Books
... is required, but the reader is assumed to be familiar with the Java programming... programming with JSP pages and servlets, the design and implementation of tag libraries
Free JSP Books
Free JSP Books
Download the following JSP books... when your application requires a lot of real programming
to accomplish its help
How to learn programming free?
How to learn programming free? Is there any tutorial for learning Java absolutely free?
Thanks
Hi,
There are many tutorials on Java programming on RoseIndia.net which you can learn free of cost.
These are:
Hi
Black Berry Programming - MobileApplications
programming knowledge in j2me please provide some help thanks in advance
programming error - Java Beginners
programming error how can we statically implement linked list???????? Hi Friend,
Please visit the following links: | http://www.roseindia.net/tutorialhelp/comment/92061 | CC-MAIN-2014-52 | refinedweb | 698 | 57.98 |
Hi All,
Assume you have a case that you need to perform a condition check and then add string based on that condition.
Usually we'll (I did) do nested if else conditions. But recently I have came across a code this will avoid multiple if else statements.
Example:
With Nested IF conditions
if (app.documents.length == 1){ var inddDoc = app.documents.item(0); } else{ if (app.documents.length == 0){ alert("Please open a document"); } else{ alert("More than one document is opened\rPlease close other documents"); } }
Effective way (according to me)
if (app.documents.length == 1){ var inddDoc = app.documents.item(0); } else{ alert(app.documents.length == 0?"Please open a document":"More than one document is opened\rPlease close other documents"); }
Your sugesstions are welcome, and please do share the similar kind of techniques if anyone have.
Green4ever
Message was edited by: Green4ever
The (condition) ? (if true) : (if false) construction is quite powerful, but it sometimes makes it easy to write unmanageable code. If, for example, you start with two conditions, you can use it as you show here. If you have to add a third option, you can even expand it:
(condition¹) ? (true¹) : ((condition²) ? (true²) : (false));
and so on (yeah I find myself doing this at times). If your scripts are meant to be read by someone else, Clarity Rules! And "someone else" may very well be you -- six months later.
Note that nested if/else also tend to get confusing after the third or fourth or so. If you are performing actions based on the number of items, and it's either 1 or 2 (or 0 or 1), you can safely use an if/else. For anything more, you should use a switch:
switch (app.documents.length)
{
case 0: alert ("none"); break;
case 1: okayGoAhead (); break;
default: alert ("two or more"); break;
}
That way, you can easily add 'case 2', 'case 3' etc., or remove 'case 0' if your function can work with it after all.
The SDK way is like this:
do{
if(condition1 == false)
break;
if(condition2 == false)
break;
if(condition3 == false)
break;
if(condition4 == false)
break;
// all conditions passed
doYourStuff()
}while(false)
Harbs
To me switch/case is usually the way to go when you have to process various conditions. The code is cleaner and more flexible, I think.
In very specific cases, however, you can directly select a choice—or an action—through a litteral Array.
For example:
alert( ["No document!", "OK (1 document)", "More than one document."][Math.min(2,app.documents.length)] );
or:
[myFunc0, myFunc1, myFuncMore][Math.min(2,app.documents.length)]();
@+
Marc
Hi All,
It is quite interesting and useful to read all the replies.
Jongware: Yeah, I agree that switch case is more friendly when more than 3 conditions are there. Thanks for your reply.
Harbs: Thanks for sharing your new idea. But I think SDK way is achieved by the simple if loop as follows
if((condition1 == true)&&(condition2 == true)&&(condition3 == true)&&(condition4 == true)){ // all conditions passed doYourStuff() }
In what way that SDK method is useful. Can you please give an example. Am I making some sense? If not I apologize.
Marc: Selecting a choice from an array is awesome. Really a great idea though someone may already knew.
And here is the simple example to show the implementation of what I said,
function getDate_Time() { var dTime = new Date(); var hours = dTime.getHours(); var minute = dTime.getMinutes(); var period = ((hours > 12)?"PM":"AM"); hours = ((hours > 12) ? hours - 12 : hours) var today = new Date(); var dd = today.getDate(); var mm = today.getMonth()+1; //January is 0! var yyyy = today.getFullYear(); if(dd<10){ dd='0'+dd } if(mm<10){ mm='0'+mm } var time_Day = mm+'/'+dd+'/'+yyyy +" @ "+hours + ":" + minute + " " + period return time_Day; }
Thanks,
Green4ever
You can shorten it more:
function getDate_Time() { var dTime = new Date(), hours = dTime.getHours(), minute = dTime.getMinutes(), period = ((hours > 12)?"PM":"AM"), dd = dTime.getDate(), mm = dTime.getMonth()+1, yyyy = dTime.getFullYear(); hours = ((hours > 12) ? hours - 12 : hours); dd<10?dd='0'+dd:0; mm<10?mm='0'+mm:0; var time_Day = mm+'/'+dd+'/'+yyyy+" @ "+hours+":"+minute+" "+period; return time_Day; }
--
Marijan (tomaxxi)
And even shorter:
// ... hours > 12 && (hours-=12); dd < 10 && (dd='0'+dd); mm < 10 && (mm='0'+mm); // ...
@+
Marc
If you have a lot of conditions and some of them could be complex, do/while/false is the flattest and easiest-to-read method. You find this code pattern a LOT in the InDesign SDK.
Harbs
Hi Marijan (tomaxxi),
I justed posted that example from one of my older scrap. Thanks for making it much simpler.
Hi Marc,
I'll give a try on which you posted. Thanks.
Hi Harbs,
I am very much eager to learn to SDK. Is there any free debugger available for SDK development. Can you suggest me from where I should begin?
Thanks,
Green4ever
Until I started writing plugins I could not understand why do / while code was used. It really is the best method when you have conditions that rely on the success previous conditions.
P.
Umm, what language are we talking about here? It looks like ExtendScript.
If so, then when Harbs says:
If you have a lot of conditions and some of them could be complex, do/while/false is the flattest and easiest-to-read method. You find this code pattern a LOT in the InDesign SDK.
I really don't think so at all!
It's a pattern that is not seen much outside of C++ and of the SDK, is I really wouldn't suggest it in JavaScript.
Instead, I would just chain the ifs and cheat the indentation, like this:
if (app.documents.length == 1){ var inddDoc = app.documents.item(0); } else if (app.documents.length == 0){ alert("Please open a document"); } else { alert("More than one document is opened\rPlease close other documents"); }
You can have as many "else if" blocks as you like, before the final else.
It's easy to read, makes sense, and is a very-established pattern.
Also, of course, you should be using === not ==, unless you have a very good reason.
Sorry, I left out one of the strongest reasons to avoid do { if / break / if / break ... / } while (false).
It's easy to forget a 'break' and have your code go completely awry. It's much nicer when the language's syntax conventions encourage you to do the right thing.
Horses for courses.
If I have some conditions that cause a need to abort, I'll use if(condition)return; for each one. (no do/while/false)
If I need to do something different for each case, I'll usually use a switch.
switch (app.documents.length){
case 0:
alert("Please open a document");
break;
case 1:
var inddDoc = app.documents.item(0);
break;
default:
alert("More than one document is opened\rPlease close other documents");
break;
}
If things are complex and I have just a few conditions I'll use if/else.
If things are really complex, I'll use do/while/false.
For me, the most important aspect of code is readability. Whatever fits the situation...
Harbs
Hi All,
I'd like to share another simple but useful tip here. I said "useful" because JavaScript doesn't have string.format as an inbuilt function.
If you want to prepend zeros (zero padding) to your calculations use the following simple method,
var myValue1 = 1; var myValue2 = 25; myValue1 = ("000"+myValue1).slice(-3); myValue2 = ("000"+myValue2).slice(-3); alert("myValue1 = "+myValue1+"\rmyValue2 = "+myValue2);
Is there any other better way to achieve this.
Green4ever
Green4ever:
var myValue1 = 1; myValue1 = ("000"+myValue1).slice(-3);
Is there any other better way to achieve this.
This code is not good for several reasons!
Most importantly, it breaks for numbers bigger than 3 digits! That's horrible.
Even if it was unlikely to happen in the original application, when you advertise it like this it's going to be repurposed and someone is going to have a nasty surprise some day! Not good! It's rare that this sort of code needs to run in a tight loop where perfect optimization and performance are more important than correctness. Take the time to do it right!
Secondarily, you are using a single variable (myValue1) to hold two different kinds of objects, first an Number and later a String. That is usually not a good idea. It's not terrible, but it does tend to lead to confusion about what type the variable is at any given time, and string functions don't work well on numbers and vice versa. Much better to use two different variables.
I'd probably take the first answer from -leading-zeros-in-javascript
which suggests:
function pad(num, size) { var s = num+""; while (s.length < size) s = "0" + s; return s; }
There was a discussion on the forum a while back about this.
slice() is probably the best way to go.
You can generalize that with the following function (originally posted by Peter Kahrel):
alert("myValue1 = " + pad(myValue1,3)+"\rmyValue2 = "+ pad(myValue2,3)); function pad (num, len) { return ("0000000"+num).slice(-len); }
Harbs
Personally, I like this answer from that discussion:
function zfill(num, len) {return (Array(len).join("0") + num).slice(-len);}
I agree that ithinc's answer with zfill() appears the most elegant. My inclination is to avoid the low-voted answers that look good because oftentimes they have subtle problems. This one looks OK to me, though.
Of course, it's fairly disappointing to me that none of them properly handle negative numbers, or decimals, or NaN. Bah!
One probably doesn't care, but zfill() fails with big large numbers and small roundings, but pad() works ok:
[04:03:49.850] pad(Number.MAX_VALUE,5)
[04:03:49.852] "1.7976931348623157e+308"
[04:03:54.122] zfill(Number.MAX_VALUE,5)
[04:03:54.124] "e+308"
If you try hard enough, maybe you can find a case where zfill() works and pad() fails? Or at least where it is faster?
@Harbs – a bit dangerous.
One have to make sure that:
len !< num.toString().length;
Otherwise we possibly get truncated numbers on return.
Uwe
Hi John,
I agree that, the code I posted will only work for non negative 3 digit number. certainly it has its limitations. Need to learn the way, how to implement. Sorry for posting the example
with less clarity. I just showed a way to do that technique. Thanks for you suggestion.
Now, I Just made few change in the code that you posted, I hope it will now handle the negative numbers.
function pad(num, size) { if (num<0){var negative = true;} var s = (Math.abs(num))+""; while (s.length < size) s = "0" + s; if (negative){ s = "-"+s; } return s; }
Suggestions welcome!.....
Also It is applicable only for integer values.
Green4ever
Message was edited by: Green4ever
Good point.
Of course this is all pretty silly unless you are writing a library for public consuption.
For personal use, I'd just go with enough zeros as necessary and use a simple slice()...
North America
Europe, Middle East and Africa
Asia Pacific
South America | http://forums.adobe.com/message/4270953 | CC-MAIN-2013-20 | refinedweb | 1,854 | 67.65 |
Problem with an external assembly, what am I doing wrong?
I pushed to Mercurial on bitbucket, and it appears to be working with appharbor, but it's not finding the AjaxControlToolkit...
c:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3245: Could not resolve this reference. Could not locate the assembly "AjaxControlToolkit". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. Site.Master.designer.cs(40,27): error CS0400: The type or namespace name 'AjaxControlToolkit' could not be found in the global namespace (are you missing an assembly reference?) Done building project "Sunset.csproj" -- FAILED.
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by rune on 02 Jun, 2011 03:11 AM
Hi,
You may need to set the "Copy Local" property to "true" on your reference to AjaxControlToolkit - otherwise it may not be copied to the output directory.
This issues sometimes also occur if you have the assembly in a local "bin" folder.
This mean that it's available when VS tries to build the solution, but not when Msbuild builds it. A solution in that case would be to put the dll in another folder apart from your source code (a /lib folder for instance) and make sure your references uses that dll.
Let me know if this helps.
Best,
Rune
rune closed this discussion on 02 Jun, 2011 03:11 AM.
caderoux re-opened this discussion on 02 Jun, 2011 03:22 AM
2 Posted by caderoux on 02 Jun, 2011 03:22 AM
OK, I did the lib thing - and pushed the binary to Mercurial, too. Seems to get me at least a build.
caderoux closed this discussion on 02 Jun, 2011 03:23 AM.
rune re-opened this discussion on 02 Jun, 2011 03:31 AM
Support Staff 3 Posted by rune on 02 Jun, 2011 03:31 AM
Great, glad you got this to work.
Best,
Rune
rune closed this discussion on 02 Jun, 2011 03:31 AM. | http://support.appharbor.com/discussions/problems/615-problem-with-an-external-assembly-what-am-i-doing-wrong | CC-MAIN-2014-42 | refinedweb | 365 | 72.16 |
django-sortedm2m 0.9.5
Drop-in replacement for django's many to many field with sorted relations.
sortedm2m is a drop-in replacement for django’s own ManyToManyField. The provided SortedManyToManyField behaves like the original one but remembers the order of added relations.
Usecases
Imagine that you have a gallery model and a photo model. Usually you want a relation between these models so you can add multiple photos to one gallery but also want to be able to have the same photo on many galleries.
This is where you usually can use many to many relation. The downside is that django’s default implementation doesn’t provide a way to order the photos in the gallery. So you only have a random ordering which is not suitable in most cases.
You can work around this limitation by using the SortedManyToManyField provided by this package as drop in replacement for django’s ManyToManyField.
Usage
Use SortedManyToManyField like ManyToManyField in your models:
from django.db import models from sortedm2m.fields import SortedManyToManyField class Photo(models.Model): name = models.CharField(max_length=50) image = models.ImageField(upload_to='...') class Gallery(models.Model): name = models.CharField(max_length=50) photos = SortedManyToManyField(Photo)
If you use the relation in your code like the following, it will remember the order in which you have added photos to the gallery.
gallery = Gallery.objects.create(name='Photos ordered by name') for photo in Photo.objects.order_by('name'): gallery.photos.add(photo)
SortedManyToManyField
You can use the following arguments to modify the default behavior:
sorted
Default: True
You can set the sorted to False which will force the SortedManyToManyField in behaving like Django’s original ManyToManyField. No ordering will be performed on relation nor will the intermediate table have a database field for storing ordering information.
sort_value_field_name
Default: 'sort_value'
Specifies how the field is called in the intermediate database table by which the relationship is ordered. You can change its name if you have a legacy database that you need to integrate into your application. values.
Admin
SortedManyToManyField provides a custom widget which can be used to sort the selected items. It renders a list of checkboxes that can be sorted by drag’n’drop.
To use the widget in the admin you need to add sortedm2m to your INSTALLED_APPS settings, like:
INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.admin', 'sortedm2m', '...', )
Otherwise it will not find the css and js files needed to sort by drag’n’drop.
Finally, make sure not to have the model listed in any filter_horizontal or filter_vertical tuples inside of your ModelAdmin definitions.
If you did it right, you’ll wind up with something like this:
It’s also possible to use the SortedManyToManyField with admin’s raw_id_fields option in the ModelAdmin definition. Add the name of the SortedManyToManyField to this list to get a simple text input field. The order in which the ids are entered into the input box is used to sort the items of the sorted m2m relation.
Example:
from django.contrib import admin class GalleryAdmin(admin.ModelAdmin): raw_id_fields = ('photos',)
Contribute
You can find the latest development version on github. Get there and fork it, file bugs or send me nice wishes..
0.6.0
- Python 3 support!
- Better widget. Thanks to Mike Knoop for the initial patch.
0.5.0
- Django 1.5 support. Thanks to Antti Kaihola for the patches.
- Dropping Django 1.3 support. Please use django-sortedm2m<0.5 if you need to use Django 1.3.
- Adding support for a sort_value_field_name argument in SortedManyToManyField. Thanks to Trey Hunner for the idea.
0.4.0
- Django 1.4 support. Thanks to Flavio Curella for the patch.
- south support is only enabled if south is actually in your INSTALLED_APPS setting. Thanks to tcmb for the report and Florian Ilgenfritz for the patch.
0.3.3
- South support (via monkeypatching, but anyway… it’s there!). Thanks to Chris Church for the patch. South migrations won’t pick up a changed sorted argument though.
0.3.2
- Use already included jQuery version in global scope and don’t override with django’s version. Thank you to Hendrik van der Linde for reporting this issue.
0.3.1
- Fixed packaging error.
0.3.0
- Heavy internal refactorings. These were necessary to solve a problem with SortedManyToManyField and a reference to 'self'.
0.2.5
- Forgot to exclude debug print/console.log statements from code. Sorry.
0.2.4
- Fixing problems with SortedCheckboxSelectMultiple widget, especially in admin where a “create and add another item” popup is available.
0.2.3
- Fixing issue with primary keys instead of model instances for .add() and .remove() methods in SortedRelatedManager.
0.2.2
- Fixing validation error for SortedCheckboxSelectMultiple. It caused errors if only one value was passed.
0.2.1
- Removed unnecessary reference of jquery ui css file in SortedCheckboxSelectMultiple. Thanks to Klaas van Schelven and Yuwei Yu for the hint.
0.2.0
- Added a widget for use in admin.
- Downloads (All Versions):
- 689 downloads in the last day
- 2580 downloads in the last week
- 9234.9.5.xml | https://pypi.python.org/pypi/django-sortedm2m/ | CC-MAIN-2015-18 | refinedweb | 858 | 51.24 |
Tutorials -
- Ubuntu 14.04 or later – get Ubuntu
--compiling.
- In Ubuntu SDK, press
Ctrl+Nto create a new project
- Select the Projects > Ubuntu > App with Simple UI template and click Choose…
- Give the project CurrencyConverter as a Name. You can leave the Create in: field as the default and then click Next.
- You can optionally set up a revision control system such as Bazaar in the final step, but that’s outside the scope of this tutorial. Click on Finish.
- Replace the Column component and all of its children, and replace them with the Page as shown below, and then save it with
Ctrl+S:
import QtQuick 2.4 import Ubuntu.Components 1.3 /*! \brief MainView with a Label and Button elements. */") } }
Try to run it now to see the results:
- Inside Ubuntu SDK, press the Ctrl+R key combination. It is a shortcut to the Build > Run menu entry
Or alternatively, from the terminal:
- Open a terminal with
Ctrl+Alt
property: namespace gesmes='';" +"declare default element namespace '';" query: "/gesmes:Envelope/Cube/Cube/Cube" onStatusChanged: { if (status === XmlListModel.Ready) { for (var i = 0; i < count; i++) currencies.append({"currency": get(i).currency, "rate": parseFloat(get(i).rate)}) } } XmlRole { name: "currency"; query: "@currency/string()" } XmlRole { name: "rate"; query: "@rate/string()" } }
The relevant properties are
source, to indicate the URL where the data will
be fetched from;
query, to specify an absolute
XPath query to use as the
base query for creating model items from the
XmlRoles below; and
namespaceDeclarations as the namespace declarations to be used in the XPath
queries.
The
onStatusChanged signal handler demonstrates another combination of
versatile features: the signal and handler system together with JavaScript.
Each QML property has got a
<property>Changed signal and its corresponding
on
<property>Changed signal handler. In this case, the
StatusChanged signal
will be emitted to notify of any changes of the status property, and we define
a handler to append all the currency/rate items to the
currencies ListModel
once
ratesFetcher has finished loading the data.
In summary,
ratesFetcher will be populated with currency/rate items, which
will then be appended to
currencies.
It is worth mentioning that in most cases we’d be able to use a single
XmlListModel as the data source, but in our case we use it as an intermediate
container. We need to modify the data to add the EUR currency, and we put the
result in the
currencies ListModel.
Notice how network access happens transparently so that you as a developer don’t have to even think about it!
Around line 66, let’s add an ActivityIndicator component to show activity while the rates are being fetched:
ActivityIndicator { objectName: "activityIndicator" anchors.right: parent.right running: ratesFetcher.status === XmlListModel.Loading }
We anchor it to the right of its parent (
root) and it will show activity
until the rates data has been fetched.
And finally, around line 32 (above and outside of the
Page), we add the
convert JavaScript function that will perform the actual currency conversions:
function convert(from, fromRateIndex, toRateIndex) { var fromRate = currencies.getRate(fromRateIndex); if (from.length <= 0 || fromRate <= 0.0) return ""; return currencies.getRate(toRateIndex) * (parseFloat(from) / fromRate); }
Choosing currencies
At this point we’ve added all the backend code and we move on to user interaction. We’ll start off with creating a new Component, a reusable block that is created by combining other components and objects.
Let’s first append two import statements at the top of the file, underneath the other import statements:
import Ubuntu.Components.ListItems 0.1 import Ubuntu.Components.Popups 1.3
And then add the following code around line 79:
Component { id: currencySelector Popover { Column { anchors { top: parent.top left: parent.left right: parent.right } height: pageLayout.height Header { id: header text: i18n.tr("Select currency") } ListView { clip: true width: parent.width height: parent.height - header.height model: currencies delegate: Standard { objectName: "popoverCurrencySelector" text: currency onClicked: { caller.currencyIndex = index caller.input.update() hide() } } } } } }
At this point, if you run the app, you will not yet see any visible changes, so don’t worry if all you see is an empty rectangle.
What we’ve done is to create the currency selector, based on a Popover and a standard Qt Quick ListView. The ListView will display the data from the
currencies ListMode. Notice how the Column object wraps the Header and the list view to arrange them vertically, and how each item in the list view will be a Standard list item component.
The popover will show the selection of currencies. Upon selection, the popover
will be hidden (see
onClicked signal) and the caller’s data is updated. We
assume that the caller has
currencyIndex and
input properties, and that
input is an item with an
update() function.
Arranging_2<<page_5<<
If you want to study the Component Showcase code:
- Start Ubuntu SDK by pressing the Ubuntu button in the Launcher. That will bring up the Dash.
- Start typing
ubuntu sdkmland click on Open.
- To run the code, you can select the Tools > External > Qt Quick > Qt Quick 2 Preview (qmlscene) menu entry.
Alternatively, if you only want to run the Component Showcase:
- Open the Dash
- Type
toolkit galleryand double click on the “Ubuntu Toolkit Gallery” result that appears to run it
Reference
- Code for this tutorial (use
bzr branch lp:ubuntu-sdk-tutorialsto! | https://docs.ubuntu.com/phone/en/apps/qml/tutorials-building-your-first-qml-app.html | CC-MAIN-2018-47 | refinedweb | 886 | 55.84 |
ReSharper C++ 2020.1 EAP: Rearrange Code, Code Completion, and UE4 Naming
With this EAP build, the well-known Rearrange code and Complete Statement features from ReSharper for .NET have finally come to ReSharper C++! Read on for details about these features, as well as other highlights of this build:
- Rearrange code: an easy way to move your code.
- Complete Statement: one shortcut instead of many routine actions.
- Completion in macro definitions: code completion now works in macros.
- UE4 naming conventions: rules for console variables and log categories.
Download the new ReSharper C++ EAP build from our website, or via the Toolbox App.
DOWNLOAD RESHARPER C++ 2020.1 EAP
Rearrange code
ReSharper С++ now allows you to quickly rearrange expressions, statements, and other elements in your code. To rearrange code, press Ctrl+Shift+Alt over the code element or selection that you want to move, then press any arrow key. If you invoke this command without first selecting something, the movable element is selected automatically.
The Move up and Move down commands are pretty straightforward – they can move_0<<
You can also move opening or closing braces up or down to expand or shrink the current compound statement, type, or namespace:
There are also):
Complete Statement
The Complete Statement feature inserts required syntax elements and puts the caret in a position where you can start typing the next statement.
With ReSharper C++, you can now use the Ctrl+Shift+Enter shortcut instead of having to perform lots of other small actions. For example, this shortcut will automatically insert braces and a semicolon, and then put the caret where you can proceed to write the body:
Completion in macro definitions
It is hard to believe that code completion had remained unsupported in macro definitions. Until now! Starting with 2020.1, ReSharper C++ now provides context-aware code completion in macros to help you be even more productive:
UE4 naming conventions
We introduced support for Unreal Engine naming conventions a year ago in the 2019.1 release. But if there is something to improve, we are always ready to do it. In 2020.1, we’ve upgraded our naming rules with two small entries: ReSharper C++ now works better with the names of console variables and log categories.
Those are all the major improvements in this update. The full list of changes, including bug fixes and minor improvements, can be found in our issue tracker.
DOWNLOAD RESHARPER C++ 2020.1 EAP
Your ReSharper C++ team
JetBrains
The Drive to Develop
5 Responses to ReSharper C++ 2020.1 EAP: Rearrange Code, Code Completion, and UE4 Naming
Mike Diack says:March 26, 2020
Now that LLVM 10.0.0 final is out (with builds at: ), is it possible that Resharper could use it soon? (I appreciate it may now be too late to use it in 2020.1, given that I guess you are at feature freeze). Even if not, surely you can update Resharper to use clang-tidy 9.0.1 – currently you are using 9.0.0, which dates back to last Summer, 9.0.1 came out in Dec 2019.?
Thanks!
Igor Akhmetov says:March 26, 2020
Hello Mike,
Clang-tidy 10 will be bundled in EAP 7. To be fair, 9.0 was released at the end of last September. We do not usually update the bundled binary with minor LLVM releases, since the introduced fixes usually do not have to do with clang-tidy. But if you’re missing something specific from a minor release, do let us know.
Rick says:April 18, 2020
It’s all nice, but somehow it’s also all so hollow and overwhelming. How many keyboard shortcuts are there to remember? 100? 1000? Its the same with all modern software, Resharper is no exception. When I read about updates, I skim the text and think “Yeah, nice, but I know I’m never going to use it, because I can’t remember it all”
Anastasia Kazakova says:April 20, 2020
You can limit it all to just Alt+Enter or Ctrl+Shift+A to find all available actions!
Bart says:May 27, 2020
My most missed IntelliJ feature has arrived 🙂 Yeeey 🙂 | https://blog.jetbrains.com/rscpp/2020/03/26/resharper-cpp-2020-1-eap6/?replytocom=5354 | CC-MAIN-2021-10 | refinedweb | 696 | 63.7 |
Republished from the original blog post by Matthew Demyttenaere on the CoScale Container Monitoring blog
OpenShift provides a basic monitoring layer to watch the health and resource usage of your containers, but you may have a need for more detailed application performance metrics. Like with all container platforms, the challenge is how you can provide performance visibility across the entire stack without overloading your containers with intrusive monitoring agents or without manual labor to push metrics to a centralized service. Ideally, you would like to have one agent that automatically watches all containers running on the host, and scales out its monitoring as needed. And you would also like that one agent to look inside your containers and talk to your orchestrator to understand service relationships.
Introducing CoScale, a Red Hat Certified Technology Partner for monitoring OpenShift. CoScale integrates with Red Hat OpenShift to gather OpenShift-specific metrics and events. Together with Docker, Kubernetes, and in-container monitoring capabilities, CoScale can provide a detailed view into the inner workings of OpenShift, the containers managed, and the services inside the containers.
In this post, we'll go into the details of how to set up CoScale specifically for monitoring OpenShift.
Creating a Monitoring Agent
To start monitoring your OpenShift cluster with CoScale, you first have to add an agent to it. Our agent runs as a
DaemonSet with one containerized agent per node in your cluster. Starting the agent is as simple as copying and pasting the YAML configuration, which is automatically generated in the CoScale UI when you select to install CoScale on OpenShift. Prior to this, you will need to set up the CoScale service account with the correct administrative and security permissions. This is documented as part of our install process. Here is an example configuration of the agent:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: coscale-agent
  name: coscale-agent
spec:
  template:
    metadata:
      labels:
        name: coscale-agent
    spec:
      serviceAccount: coscale
      hostNetwork: true
      containers:
      - image: coscale/coscale-agent
        imagePullPolicy: Always
        name: coscale-agent
        securityContext:
          privileged: true
        env:
        - name: APP_ID
          value: "01234567-1234-5678-9876-123456789101"
        - name: ACCESS_TOKEN
          value: "01234567-1234-5678-9876-123456789101"
        - name: TEMPLATE_ID
          value: "12"
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
        - name: hostroot
          mountPath: /host
          readOnly: true
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
      - hostPath:
          path: /
        name: hostroot
Once you execute this command on one of the master nodes of your Kubernetes cluster, the agent will automatically run on each node in your cluster, and a new agent will automatically be started on each newly added node. That’s it! No additional steps are required. The agent is installed and it will start gathering metrics about your OpenShift cluster. After a couple of minutes, new dashboards will appear in your CoScale application. This is one of the strong points of CoScale: you don’t have to create dashboards manually — default dashboards are automatically created for you, depending on the technologies recognized.
OpenShift Dashboards
Let’s show off these new dashboards that have automatically been created for you. Specifically, we are interested in the OpenShift dashboards, as they show you the underlying metrics for OpenShift. CoScale provides information at both a server-centric and a service-centric level, meaning you can filter to see the containers running on a node in your cluster, or you can filter at the Replica Set/Service/Daemon Set/Stateful Set level.
The first dashboard that you will see is the OpenShift Cluster Overview. As the name implies, this dashboard gives you a broad overview of the entire OpenShift cluster. This is not a standalone dashboard with isolated information, but an interactive dashboard with clickable widgets. Clicking on specific parts of the widgets will allow you to drill down and view more detailed information. This is a very powerful workflow, allowing you to start from a high-level overview and then drill in as needed.
One of those detailed dashboards is the Replica Sets Overview. Within this dashboard you can see an overview of the important events for your replicas. In the example below, you can clearly see one important event, where one of the Replica Sets in your cluster has fewer replicas than you specified (desired vs. actual pods). When such important events occur, we allow you to click on the event to drill down and see detailed information about what is causing the missing pods.
Another interesting visualization on this dashboard is the Container topology widget, which shows you clearly how many Pods are running in each of the replica sets on your cluster. We also allow you to filter by namespace so you only see the replication controllers that are relevant to you. If anything starts to pique your interest here, you can drill down for more detailed information. For example, you can click on an individual container to see more details about its resource usage, or click on the green circle of the replica set for an aggregated view for all containers of a service.
To get a more detailed view of a service you can also click on the name inside the Replication controller overview widget. This will bring you to another dashboard containing metrics specific to the image that you have selected. Below you can see an example of this.
There are some special things to note here:
- The CoScale application has automatically selected the right replication controller and namespace for the service that was selected on the previous dashboard. It is always possible to change these simply by clicking one of the two dropdowns.
- The CoScale event system allows you to view changes in our environment. In this case, you can see that at a certain time the number of pods was scaled from 3 to 5 and this is clearly visible in the "Container Counts" widget.
- CoScale automatically keeps track of the containers running for a certain service, including start time, stop time, image name, tag, and exit code. You can see two new containers starting around the same time as the scale up.
We also have a CPU graph that shows you the CPU usage for this container group. These metrics come from the underlying Kubernetes service that manages the containers for Kubernetes.
Besides CPU, CoScale also gathers storage, memory, network, and filesystem usage metrics.
It is also possible to see this information from a server level, all the way down to a single container, which lets you quickly identify services or containers that are not performing as expected.
To manage all the metrics coming from your containers, CoScale also provides automatic anomaly detection on all incoming metrics. This produces alerts on your container metrics without the need to monitor them all with static alerts or by having to constantly look at dashboards, which is especially challenging in these dynamic container environments. As a result, you can detect issues more proactively and solve them faster. In the example below, we show an anomaly where a specific container started using more CPU resources than usual.
What is Going On Inside Your Containers?
Up till now, we have just discussed the metrics and events coming from OpenShift and your containers. To go even deeper, you can also set up the CoScale in-container monitoring service to see what is happening inside the containers. With the agent plugins, you can gather application metrics from your web services (NGINX, Apache, Tomcat, etc.), your databases (MySQL, PostgreSQL, Redis, ElasticSearch, etc.) and many more. To start gathering these detailed metrics, check out previous blog posts on the CoScale blog on in-container monitoring and monitoring based on Docker labels. We also support Prometheus endpoints for monitoring your images.
Conclusion
CoScale’s OpenShift monitoring capabilities allow you to get detailed performance and application insights from a Red Hat OpenShift environment. The easy-to-install agent via
DaemonSet auto scales together with your environment. Our automatically created dashboards let you drill into any aspect of your cluster, individual containers, or services, to monitor your entire environment. On top of that, our anomaly detection will automatically alert you when performance issues occur.
Do you want to know more? Then check out our OpenShift Commons webinar recording on Proactive Performance Management of OpenShift.
FTW(3) BSD Programmer's Manual FTW(3)
ftw, nftw - traverse (walk) a file tree
#include <ftw.h>

int ftw(const char *path, int (*fn)(const char *, const struct stat *, int), int maxfds);

int nftw(const char *path, int (*fn)(const char *, const struct stat *, int, struct FTW *), int maxfds, int flags);
These functions are provided for compatibility with legacy code. New code should use the fts(3) functions. The walk of a directory being descended processes the directory before the directory's contents. The maxfds argument specifies the maximum number of file descriptors to keep open while traversing the tree. It has no effect in this implementation. The nftw() function has an additional flags argument with the following possible values: FTW_PHYS Physical walk: don't follow symbolic links. The current directory will be restored to its original value before nftw() returns.
The ftw() and nftw() functions may fail and set errno for any of the errors specified for the library functions they call. In addition, they may fail as follows: [EINVAL] The maxfds argument is less than 1 or greater than OPEN_MAX.
chdir(2), close(2), open(2), stat(2), fts(3), malloc(3), opendir(3), readdir(3)
The ftw() and nftw() functions conform to IEEE Std 1003.1-2001 ("POSIX").
The maxfds argument is currently ignored.

MirOS BSD #10-current May 20,
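Since the page above is terse, here is a minimal sketch of a typical nftw() caller. The helper name count_regular_files is my own invention for illustration, not part of any API:

```c
#define _XOPEN_SOURCE 500   /* expose nftw() in glibc */
#include <ftw.h>
#include <sys/stat.h>

static int nfiles;

/* Callback invoked once for each object in the tree. */
static int tally(const char *path, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf)
{
    (void)path; (void)sb; (void)ftwbuf;
    if (typeflag == FTW_F)          /* a regular file */
        nfiles++;
    return 0;                       /* returning non-zero stops the walk */
}

/* Count regular files under root; FTW_PHYS avoids following symlinks. */
int count_regular_files(const char *root)
{
    nfiles = 0;
    if (nftw(root, tally, 8, FTW_PHYS) == -1)   /* keep up to 8 fds open */
        return -1;
    return nfiles;
}
```

Note that the running count lives in a static variable, so this is not thread-safe — a limitation of the ftw interface itself, and one more reason the page recommends fts(3) for new code.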
None of the demos that I've seen for Draft-js (by Facebook, built on React) show how to clear the input field after submit. For example, see this code pen linked to from awesome-draft-js, where the value you submit remains in the input field after submit. There's also no function in the API that seems designed to do it. What I've done to achieve that is to create a new empty state on button submission, like this:
onSubmit() {
  this.setState({
    editorState: EditorState.createEmpty(),
  });
}
this.state = {
  editorState: EditorState.createEmpty(),
};
It is NOT recommended to use EditorState.createEmpty() to clear the state of the editor -- you should only use createEmpty on initialization.
The proper way to reset content of the editor:
import { EditorState, ContentState } from 'draft-js';

const editorState = EditorState.push(
  this.state.editorState,
  ContentState.createFromText('')
);
this.setState({ editorState });
Does anyone know if the new HumanIK retargeting system is fully script accessible? I’ve been trying to find out if there’s a way to batch transfer one character’s animations to another character. The major roadblock I’ve run into so far is that there doesn’t seem to be a HumanIK equivalent to the old ‘retarget’ mel command. I tried turning on Echo All Commands and hitting ‘Apply Retargeting’ but no useful command showed up in the script editor, and I can find nothing in the documentation about HumanIK.
Any help would be appreciated,
-Mike
I didn’t see any new commands for HIK, but did you look at others/characterPipeTool.mel?
Ah, thanks! Based on that script it looks like their function changeCharacterInput is a good starting point for me to look at. Too bad there’s no documentation for it. :/
hi Mike,
We are looking at batching the retargeting process as well. Did you find anything in the others folder?
Did the proc. changeCharacterInput help you out?
Thanks,
Marc
Never mind, I found it :-)
Cheers,
Hey, I am having the same problem, how did you figure it out?
I am trying to script the retargeting, but I can’t seem to figure out how to change the Retarget Input in the character pipe tab in the HumanIK UI.
I have found and am inside the characterPipeTool.mel, and I know these lines have something to do with it:
“global proc changeCharacterInput(string $pCharacter, string $pNewCharacterSrc,string $layerName,string $controlRigLayerName)”
{
please help
this is the only place on the magical internet that comes close to addressing this issue/question.
can somebody please help out, I know u guys know whats up
Sorry, hadn’t checked this board in awhile. After digging deeply into this problem, I was eventually able to write a complex, batchable animation transfer system for our game. It’s been a few months, but here’s the relevant code IIRC (in Python):
import maya.cmds as mc
import maya.mel as mel

mel.eval("displayHIKPipeUI;")  # show the HIK window
mc.optionMenu("gCharPipeCharList", e=True, value=RigName)  # sets the source character
mel.eval("changeCurrentCharacter(\"gCharPipeCharList\" )")
mc.optionMenu("gCharPipeCharInputList", e=True, sl=2)  # sets the 'retarget input' option menu; had to be done via index rather than name
mel.eval("changeCharacterInputFromUI;")  # applies the animation to the target character
mc.deleteUI("hikRetargetWindow")  # delete the HIK window
In mel, the retarget input change line is:
optionMenu -e -sl 2 "gCharPipeCharInputList"
Basically, to make it work, I had to do a little hackery. First I load in both HIK rigs (not in the code above). Then I show the HIK window and have the script change the retargeting input option menu directly. Note that I had to use the index of the character to change it to (in this case 2), not the name. After that I apply the retargeting and delete the window. A little goofy, but it works well.
Hope this helps,
Mike | http://area.autodesk.com/forum/autodesk-maya/mel/is-humanik-accessible-to-mel/page-last/ | crawl-003 | refinedweb | 500 | 65.73 |
08 April 2009 22:18 [Source: ICIS news]
SAO PAULO (ICIS news)--Hexion Specialty Chemicals will increase by threefold its investment in a new resin plant in southern Brazil.
The increase will bring Hexion's total investment in the plant to Brazilian reais (R) 110m ($50m) from R32.5m, according to the company.
The decision was based on a positive market scenario for the Brazilian state and proximity to buyers, which should reduce freight and transportation costs, company sources said.
The 450,000 tonne/year unit will be built in
As a result of the plant,
Hexion currently operates three other units in
($1 = R 2.22)
Exporter::Proxy - Simplified symbol export & proxy dispatch.
package My::Module;

use Exporter::Proxy qw( foo Bar );

# at this point users of My::Module will get
# *My::Module::foo and *My::Module::Bar
# installed.
#
# My::Module also gets an 'exports' method
# that lists the exported items; array refs
# are exported as copies by value.

my @exported = My::Module->exports;

my $object   = My::Module->construct;
my $exported = $object->exports;

package Some::Other;

use My::Module qw( foo );   # only exports foo
use My::Module qw( Bar );   # only exports Bar
use My::Module qw( bar );   # croaks, 'bar' is not exported.

# caller can specify the items to export by
# name -- not type. foo might be used as a
# subroutine, Bar as an array, or foo may
# be overloaded with &foo, %foo, @foo, $foo.

first { $value eq $_ } @Bar     # $value ~~ @Bar
    or croak "Invalid '$value'";

delete $foo{ somekey }
    or croak "Oops: foo is missing 'somekey'";

my $bletch = $foo || 'oops, no $foo';

# if the caller does not want to export
# anything from the module when it is used,
# ":noexport" does just that.

use My::Module qw( :noexport );

# there are times when it is easier to use
# a dispatcher for things like service classes
# than to pollute the caller's namespace with
# all of the available methods.

use Exporter::Proxy qw( dispatch=do_something );

# at this point 'do_something' is installed in
# My::Module. it splices out the second
# argument, uses My::Module->can( $name ) to
# check if the module has the service available
# and then dispatches to it via goto.
#
# My::Module->exports will include the dispatcher;
# in the last example it will have only the
# dispatcher since no other names were included.

# now modules use-ing this one look like:

use My::Module;

my $object = My::Module->construct;

$object->do_something( foo => @foo_args );

my @test_these = $object->exports;
my $test_ref   = $object->exports;

# @test_these will be qw( do_something )
# $test_ref will be an arrayref of a
# copy of the exported values (i.e.,
# modifying $test_ref does not affect
# the exported items.
This installs 'import' and 'exports' subroutines into the caller's namespace. The 'import' does the usual deed: exporting symbols by name; 'exports' simplifies introspection by listing the exported symbols (useful for testing).
The optional "dispather=name" argument is used to install a dispatcher. This allows the module to offer a variety of services without polluting the caller's namespace with too many of them. All it does is check for $module->can( $name ) and goto &$handler if the module can handle the request.
The arguments to this are the symbol names to export with an optional "dispatch=<name>" for installing the dispatcher.
The import extracts the exported symbols, adding the dispatcher's name to the list if necessary, and then installs import, exports, and the dispatcher as necessary.
With no arguments the import uses the original exports list, pushing all of the symbols into the caller's space.
The optional argument ':noexport' short-circuits export of any symbols to the caller's space.
Other than ':noexport' any arguments with leading colons are silently ignored by import.
Anything without a leading colon is assumed to be a name, and is checked against the exports list. If it is on the list then the caller's $name symbol is aliased to the source module's.
Note that this is not a copy-by-value into the caller's space, it is aliasing via the symbol table.
i.e.,
my $dest = qualify_to_ref $name, $caller; my $src = qualify_to_ref $name, $source; *$dest = *$src;
Callers modifying their copy of the item will be modifying a global copy.
Aside: Once read-only references are available then they will be an option.
Mainly for testing, calling:
$module->exports;
or
$object->exports
returns an array[ref] copy of the exported names.
When exporting a large number of symbols is problematic, a dispatcher can be installed instead. This splices off the second argument, checks that the module can perform the name, and does a goto.
Calls to the dispatcher look like:
$module->$dispatcher( $name => @name_argz );
The dispatcher splices $name off the stack, checks that $module->can( $name ) (or $object can), croaks if it cannot or does a goto &$handler.
Note that the dispatcher can only be exported once: the last dispatch=name will be the only one installed.
For example:
package Query::Services;

use Exporter::Proxy qw( dispatch=query );

sub lookup           { ... }
sub modify           { ... }
sub insert_returning { ... }
allows the caller to:
use Query::Services;

# caller now can 'query', which can dispatch
# calls to lookup, modify, and insert_returning.

__PACKAGE__->query( modify => $sql, @argz );

$object->query( lookup => @lookup_argz );
A more general use of this is combining a number of service classes with a single 'dispatcher' class that uses the others. In this case various separate My::Query::* modules help break up what would otherwise be a monstrosity into manageable chunks. They can use fairly short names that are obvious in context because the names only propagate up to My::Query.
My::Query can even use "if" to limit the number of services available (e.g., only packages that already have an 'IsSafe' method have the modify calls available.
package My::Query::Handle;

use Exporter::Proxy qw
(
    connect
    prepare
    disconnect
    fetch
    non_fetch
    insert_returning
);

# implementations...

package My::Query::Lookup;

use Exporter::Proxy qw( lookup single_value );

...

package My::Query::Modify;

use Exporter::Proxy qw( insert insert_returning update );

...

# all this needs is to install a dispatcher
# and pull in the modules that implement the
# methods it dispatches into.

package My::Query;

use Exporter::Proxy qw( dispatch=query );

use My::Query::Handle;
use My::Query::Lookup;
use if $::can_modify, 'My::Query::Modify';

__END__

# the object class use-ing My::Query gets a
# "query" method without having its namespace
# polluted with "insert", "modify", etc.

use My::Query;

...

$object->query( lookup => $sql, @valz );
The exports method provides a simple technique for baseline testing of modules: check that they can be used and actually can do what they've claimed to export.
Say your tests are standardized as '00-Module-Name-Here.t'.
use Test::More;
use File::Basename;

# whatever your naming convention is,
# munge it into a package name.

my $madness = basename $0, '.t';

$madness =~ s/^ \d+ - //x;
$madness =~ s/-/::/g;

use_ok $madness;

my @methodz = ( qw( import exports ), $madness->exports );

ok $madness->can( $_ ), "$madness can '$_'" for @methodz;

done_testing;

__END__
Symlink this to whatever modules you need testing and "prove t/00*.t" will give a quick, standard first pass as to whether they compile and are minimally usable.
Steven Lembark <lembark@wrkhors.com>
Copyright (C) 2009 Workhorse Computing. This code is released under the same terms as Perl 5.10.0, or any later version of Perl, itself. | http://search.cpan.org/~lembark/Exporter-Proxy-0.06/lib/Exporter/Proxy.pm | CC-MAIN-2015-18 | refinedweb | 1,095 | 53.21 |
In Relation To Nicklas Karlsson
Contexts in JSR-299 are so important that they actually were incorporated in the name after a few attempts, so let's have a look at what the contexts and scopes really are. We'll be using Weld as the reference when we want to get into implementation details, but the core stuff is implementation agnostic and follows the rules of the specification. This blog posting is sort of a "scopes and contexts for dummies" with some simplifications; for a more formal approach you might want to Read The Fine Specification.
Contexts
A context is like a labeled shelf. There are shelves named @RequestScoped, @SessionScoped etc. When you use a bean of a particular scope, a bean instance is fetched from that shelf (or one is created and placed there if it wasn't already there) so that you are guaranteed to get the same instance the next time you reference it (within the same lifecycle).
As a curiosity can be mentioned that there can actually be many shelves for a particular scope (as long as only one is active at any given time). 5 minutes of fame and honor to the one that can find a use case for that. I think it has something to do with child activities that once were a part of the spec.
Scopes
The scope is the link between a bean and a context. Different contexts/scopes have different lifecycles, and that determines for how long your stuff should be kept on the shelf. The shelf is automatically wiped clean when the scope hits its end-of-life (e.g. the contents of the session context are destroyed when the session expires). There is no cheating: in Seam you could place almost any bean in any scope, but in CDI a bean of a particular scope always ends up on the corresponding shelf. A bean has one and only one scope.
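To make the shelf metaphor concrete, here is a toy sketch in plain Java. This is purely illustrative — it is not Weld code, and all the names are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A "shelf": hands back the same instance for as long as the scope
// lives, then is wiped clean when the scope hits its end-of-life.
class ToyContext {
    private final Map<String, Object> shelf = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> T get(String name, Supplier<T> factory) {
        // create-and-store on first use, same instance afterwards
        return (T) shelf.computeIfAbsent(name, k -> factory.get());
    }

    void destroy() {
        shelf.clear();   // end of lifecycle: the shelf is wiped
    }
}
```

Within one lifecycle every lookup returns the same instance; after destroy() a fresh instance is created — exactly the behavior described above for the request and session shelves.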
Proxies
Proxies are wrappers. There are different kinds of proxies, there are placeholder proxies used at injection points for normal scoped beans that look up the real instance before invoking methods on it and there are proxies that wrap interceptors and stuff around the real instance of the bean class.
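The placeholder-proxy idea — resolve the real instance on every invocation — can be sketched with java.lang.reflect.Proxy. Again, this is only an illustration of the principle, not Weld's actual proxy implementation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

interface Greeter {
    String pong();
}

class ProxyDemo {
    // Build a client proxy that looks up the current instance
    // from the (hypothetical) context on every method call.
    static Greeter clientProxy(Supplier<Greeter> lookup) {
        InvocationHandler handler = (proxy, method, args) ->
                method.invoke(lookup.get(), args);   // resolve, then delegate
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
    }
}
```

Because the lookup happens on every call, the same injected proxy keeps working even when the context starts handing out a different instance — which is what makes cross-scope injection safe.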
Behind the scenes
So what really happens when you have something like this
@Model public class Foo { @Inject Bar bar; public String getPong() { return bar.pong(); } }
and
@SessionScoped public class Bar implements Serializable { public String pong() { return "pong"; }; }
is that Weld at boot time creates the beans for both Foo and Bar. Foo is a @Named @RequestScoped bean (@Model stereotype) and Bar is a @SessionScoped one. There are no actual instances created at this time but there is a proxy injected at
@Inject Bar bar;
which has the ability to look up the Bar instance when needed. At this point both the request context and session contexts are empty. Now for the magic. When you in a xhtml page do something like
#{foo.pong}
the EL-resolver kicks in and comes to the conclusion that we need the named (EL-available) bean
foo as a starting point. The BeanManager will resolve the bean and notice that Foo is @RequestScoped so it will ask the RequestContext for an instance. Since the request context is empty, an instance is generated, using the
Foo-bean as template. This instance is placed in the request context in case someone needs it during the same request.
The getPong() method on that instance is called, which hits the proxy for Bar. Since Bar is @SessionScoped, the BeanManager consults the session context for an instance. Since there is none there, one is created and stuck in the session context for later reference. This instance is the one that has its
pong() method invoked.
When the request is over, the request context is destroyed and any destructors of Foo are called but since the session context is still alive (the session didn't terminate), the instance for Bar stays alive.
Now another request from the same webapp comes along. The request scope is once again empty and we have a new instance for foo but this time bar is already found in the session context and we get the same instance of Bar. Just as we should since we have a new request (fresh request context) but same 'ol session (old session context). In this way we can mix injection between contexts - we can inject @RequestScoped beans in @SessionScoped ones and vice versa and the proxies will make sure the correct instances are hit (or created) in the contexts.
Context not active
All contexts are not active all the time. Getting a
context not active exception from an @ApplicationScoped bean is a bit tricky, but the request context is only available when there is an active HTTP request, and the session context is only available when there is an active HTTP session. It means that accessing beans in those contexts is a no-go from e.g. MDBs. Why is this?
Weld-specific stuff ahead, using the request context as an example: There is only one request context. Really. All 5000 concurrent users of your webapp share the same request context instance. Yup, the actual context data is stored in a ThreadLocal<BeanStore> field, which means that (lucky for us) each user's thread has its own data in the field.
BeanStore is an interface and the particular implementation of it that is stuck in the ThreadLocal field of the request context is wrapping a HTTP servlet request. This means that adding stuff to that bean store actually places it in the request and clearing the bean store removes all keys from the request that belong to that bean store. The request attributes names are filtered through a naming scheme so that attributes that start with the @RequestScoped class name are considered belonging to that bean store.
What this means in practice is that the servlet request has to be there in order to work as backing storage for the bean store in the request context. The active-flag of the context is just there to indicate that at this point, Weld has populated that context with a true bean store.
So actually, you can't put stuff directly on the @RequestScoped shelf. You are only allowed to put things in a box (the servlet request backed bean store) Weld has placed on the shelf. The placing of the box is coupled to the activation of the context and the removal of the box is coupled to the deactivation. These activations and deactivations are handled by servlet request listeners that watch for the creation and destruction of the request.
When we're talking request scope, the context is short-lived and a new box is placed on the shelf when the request begins and it is cleared out when it ends but how about session scope? The story is very similar with the exception that the stuff belonging to the session bean store is not destroyed when the request ends. When the next request comes in, the session context is re-populated with a bean store that wraps the HTTP session. Since the previous request also used the HTTP session as backing storage, stuff placed in the session context then lives on in session attributes starting with the @SessionScoped class name. As you notice, for both request and session context, naming plays an important role. There are other things floating around the HTTP request and session attributes and we don't want to hit anything not belonging to us when we e.g. clear out a bean store.
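Roughly, the single-shared-context-with-per-thread-data design described above looks like this (an illustrative toy; Weld's real BeanStore wraps the servlet request and filters attribute names):

```java
import java.util.Map;

// One shared context object; the per-user data lives in a ThreadLocal,
// so each request-handling thread sees only its own "box".
class ToyRequestContext {
    private final ThreadLocal<Map<String, Object>> beanStore = new ThreadLocal<>();

    void activate(Map<String, Object> backing) { beanStore.set(backing); }  // request begins
    void deactivate()                          { beanStore.remove(); }      // request ends
    boolean isActive()                         { return beanStore.get() != null; }

    Object get(String name) {
        Map<String, Object> store = beanStore.get();
        if (store == null)
            throw new IllegalStateException("context not active");
        return store.get(name);
    }
}
```

This also shows where the "context not active" exception comes from: outside an active request there simply is no box on the shelf to read from.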
It is important to realize that these
limitations are not a quirk of Weld's implementation. Sure, there could be other ways of implementing the contexts that would make them appear available all the time, but they could per definition not be semantically correct. You could also implement stuff without writing directly to e.g. the underlying HTTP session, but when it comes to session replication, writing through is a Good Thing.
As a curiosity can be mentioned that because the specification says the session context must survive session invalidation (which is not a Good Thing when you have a bean store directly backed by the HTTP session and try to access it after invalidation), there is a sort of buffering built into the session context that loads on init and caches stuff in case the session should go AWOL at some point.
Honey, we need to have a conversation
The conversation context is actually like a named sub-session-context. The @RequestScoped shelf can have only one box where stuff is stored in but on the @ConversationScoped shelf there can be many boxes. The labels on the boxes is the conversation ID. Conversations can be either transient (the default) and in this case it behaves much like the request context and only lives during one HTTP request. This is not very useful, therefore conversations can be promoted to non-transient (long-running). This is achieved by calling the begin() or begin(String) methods on the Conversation bean that's available for injection. When the request ends, the Conversation Manager decides the fate of the conversation context by looking at the transient flag of the Conversation. If it is transient, the conversation context is destroyed, if it's non-transient it's left alone and the generated (or assigned) conversation ID is passed along as a HTTP parameter to the next request.
The Conversation Manager is also the one which determines if we should start with an empty, transient conversation and conversation context or if we should resume an existing one when a request comes in. The
cid parameter is looked for in the incoming HTTP request. No parameter present means a transient conversation, but if there is a value for cid, say 1, the conversation manager picks down the box labeled
1 from its storage and loads it into the conversation context. Well, actually it wraps the HTTP session with a conversation bean store suited to match that conversation id (again, naming plays an important role) and pre-populates the Conversation instance with the cid and transient flagged to false. We are free to switch conversations, promoting and demoting them, and the HTTP session will mirror those contexts through its attributes, suited to match the names of the conversations.
If you think you can get cute and do some URL-rewriting by modify the passed-along cid-parameter feel free. Since your own session is used as backing storage, the only thing you can do is switch to another conversation of your own (or bomb out because the cid was not known). There is no way you can access conversational data from another user, even the specification says you can't cross the HTTP session boundary.
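The many-labeled-boxes behavior of the conversation shelf can be modeled in a few lines (again a toy model, not Weld's conversation manager):

```java
import java.util.HashMap;
import java.util.Map;

// The conversation "shelf" holds many labeled boxes, one per cid.
// Transient conversations lose their box when the request ends;
// long-running ones keep it for the next request to resume.
class ToyConversationContext {
    private final Map<String, Map<String, Object>> boxes = new HashMap<>();
    private String currentCid;

    void resume(String cid) {                       // request came in with ?cid=...
        currentCid = cid;
        boxes.computeIfAbsent(cid, k -> new HashMap<>());
    }

    Map<String, Object> current() {
        return boxes.get(currentCid);
    }

    void endRequest(boolean longRunning) {          // the manager checks the transient flag
        if (!longRunning)
            boxes.remove(currentCid);               // transient: destroy the box
        currentCid = null;
    }
}
```

A long-running conversation's box survives between requests and is found again via its cid, while a transient one is gone the moment the request ends.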
As a curiosity could be mentioned that it's harder and harder with modern browsers to actually get new sessions since tabbed browsings etc have brought along the side-effect that the session and cookies are shared among browser instances. Earlier it was usually enough to start a new browser but nowadays you usually have to use incognito/private browsing to test the session boundaries with the same brand of browser.
Conclusion
Hopefully you are now confused at a higher level. You might want to check out the specification on how to write custom contexts and scope types, or have a look at the Weld source code in the org.jboss.weld.context package and at how BeanManagerImpl does the resolving.
In this shortish part, we'll add some interfaces to our application so external users can read the current greetings. Expanding the interfaces so greetings can be added is left as an exercise for the reader.
JAX-RS
REST is hip (and now in EE 6 as JAX-RS) so let's throw it in. Add
package com.acme.greetings;

import java.util.List;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path(value = "/RESTExport")
public class RESTExporter {

    @Inject
    GreetingServer greetingServer;

    @GET
    @Path("/greetings")
    @Produces("application/xml")
    public List<Greeting> getGreetings() {
        return greetingServer.getGreetings();
    }
}
That fetches the current greetings (notice the injection) from the server and presents them in XML format. To hook up the JAX-RS implementation, RESTEasy, add the following to web.xml
<context-param>
  <param-name>resteasy.scan</param-name>
  <param-value>true</param-value>
</context-param>
<context-param>
  <param-name>resteasy.servlet.mapping.prefix</param-name>
  <param-value>/resteasy</param-value>
</context-param>
and the @XMLRootElement annotation to the top of our Greeting class
@Entity @Audited @XmlRootElement public class Greeting
Your greetings should now be available from /Greetings/resteasy/RESTExport/greetings.
JAX-WS
Adding a Web Service is even simpler. Add
package com.acme.greetings;

import java.util.List;

import javax.inject.Inject;
import javax.jws.WebService;

@WebService
public class WebServiceExporter {

    @Inject
    GreetingServer greetingServer;

    public List<Greeting> getGreetings() {
        return greetingServer.getGreetings();
    }
}
That does a similar job to our RESTExporter. Then hook it up in web.xml
<servlet>
  <servlet-name>WSExport</servlet-name>
  <servlet-class>com.acme.greetings.WebServiceExporter</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>WSExport</servlet-name>
  <url-pattern>/WebServiceExport/*</url-pattern>
</servlet-mapping>
Hmm. Wonder if you can make it auto-register? Anyway, the WSDL should be viewable from /Greetings/WebServiceExport?wsdl
Conclusion
This was a short one, partly because setting things up is really straightforward and doesn't require us to do that many workarounds. Hopefully, once Aslak finishes the Arquillian DBUnit integration (I already heard rumors of JSFUnit integration) I can be back with a more thorough article on testing all parts of the application.
My wife has delivered our daughter and JBoss has delivered a snapshot of the AS 6 that can do most of the stuff we set out to do, so let's move on!
The good, the bad and our daily workarounds
So, since we're living on the edge here, head over to JBoss Hudson and grab the latest successful build (#1750 or later). Install the envers.jar in the server and change the http listener port like we did at the end of part II. Remember to update your JBOSS_HOME environment variable if you use one.
There is a slight regression with the auto-registration of the faces servlet, so you'll have to register the FacesServlet in web.xml yourself for now. The ZipException is gone, though, so clear out your faces-config.xml from the merged ICEfaces stuff and delete the icefaces directory from your local repository so you can pick up fresh copies of the original jars.
There is also another regression that has something to do with redirects and the ICEfaces PushRenderer (perhaps). If you go straight to the /Greetings context root, the PushRenderer is not that pushy; you'll have to use the full /Greetings/faces/greetings.xhtml for now.
Look mama, no EARs
We're going EJB, so let's do some refactoring while we're at it. Remove the GreetingBean and create the following classes
package com.acme.greetings;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

import javax.annotation.PostConstruct;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Stateful;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.inject.Named;

import org.icefaces.application.PushRenderer;

@Stateful
@ApplicationScoped
@Named
public class GreetingServer implements Serializable {

    private static final long serialVersionUID = 1L;

    List<Greeting> greetings = new ArrayList<Greeting>();

    @Inject
    @Added
    Event<Greeting> greetingAddedEvent;

    @Inject
    GreetingArchiver greetingArchiver;

    @Inject
    GreetingEcho greetingEcho;

    @PostConstruct
    public void init() {
        greetings = greetingArchiver.loadGreetings();
    }

    @Lock(LockType.WRITE)
    public void addGreeting(Greeting greeting) {
        greetings.add(greeting);
        PushRenderer.render("greetings");
        greetingAddedEvent.fire(greeting);
        Future<Boolean> result = greetingEcho.echo(greeting);
    }

    @Lock(LockType.READ)
    public List<Greeting> getGreetings() {
        return greetings;
    }
}
and
package com.acme.greetings;

import javax.annotation.PostConstruct;
import javax.ejb.Stateful;
import javax.enterprise.inject.Model;
import javax.inject.Inject;

import org.icefaces.application.PushRenderer;

import com.icesoft.faces.context.effects.Appear;
import com.icesoft.faces.context.effects.Effect;

@Stateful
@Model
public class GreetingClient {

    Greeting greeting = new Greeting();

    @Inject
    GreetingServer greetingServer;

    @PostConstruct
    public void init() {
        PushRenderer.addCurrentSession("greetings");
    }

    public Effect getAppear() {
        return new Appear();
    }

    public Greeting getGreeting() {
        return greeting;
    }

    public void setGreeting(Greeting greeting) {
        this.greeting = greeting;
    }

    public void addGreeting() {
        greetingServer.addGreeting(greeting);
    }
}
The GreetingServer keeps the greetings, and the client has the server injected and interacts with it (we've thrown in some annotations for concurrency control of the greeting list). You might notice we also have a new injection
@Inject GreetingEcho greetingEcho;
which is called like
Future<Boolean> result = greetingEcho.echo(greeting);
This is an asynchronous call to GreetingEcho that looks like
package com.acme.greetings;

import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class GreetingEcho {

    @Asynchronous
    public Future<Boolean> echo(Greeting greeting) {
        System.out.println(String.format("Got a new greeting: %s", greeting.getText()));
        return new AsyncResult<Boolean>(true);
    }
}
This is a kind of funny EE 6 construct, since the container exits the @Asynchronous method immediately, returning a handle to the task. Well, good luck trying to cancel that output before it finishes. This is kind of JMS-light. And yes, it's obviously used just for show-off here.
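As a plain-SE analogy (not the container's actual plumbing), the Future semantics can be sketched with an ExecutorService: the caller gets the handle back right away and only blocks if and when it asks for the result.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {

    // Submit a slow task and get a Future handle back immediately,
    // mirroring what the EJB container does for an @Asynchronous method.
    static boolean demo() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Boolean> result = pool.submit(() -> {
            Thread.sleep(100); // simulate the echo work
            return Boolean.TRUE;
        });
        boolean value = result.get(); // blocks until the task completes
        pool.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The AsyncResult the bean returns is just a pre-completed carrier for the value; the container swaps in its own Future for the caller.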
You'll also notice that our DB bean has gotten a new look
package com.acme.greetings;

import java.util.Date;
import java.util.List;

import javax.ejb.Stateless;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.persistence.EntityManager;

@Stateless
public class GreetingArchiver {

    @Inject
    @GreetingDB
    EntityManager db;

    // method signature reconstructed; the original listing was damaged
    public void saveGreeting(@Observes @Added Greeting greeting) {
        db.persist(greeting);
    }

    // loadQuery and timestampParam were lost from the original listing
    public List<Greeting> loadGreetings() {
        Date tenMinutesAgo = new Date();
        tenMinutesAgo.setTime(tenMinutesAgo.getTime() - 10 * 60 * 1000);
        return db.createQuery(loadQuery).setParameter(timestampParam, tenMinutesAgo).getResultList();
    }
}
The persistence has been much simplified thanks to CMT.
And finally the view to use them
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:ice="http://www.icesoft.com/icefaces/component">
<h:body>
  <h:form>
    <!-- attributes reconstructed from the beans above; the original markup was damaged -->
    <ice:inputText id="greeting" value="#{greetingClient.greeting.text}" effect="#{greetingClient.appear}"/>
    <h:message for="greeting"/>
    <h:commandButton value="Add" action="#{greetingClient.addGreeting}"/>
    <h:dataTable value="#{greetingServer.greetings}" var="g">
      <h:column>
        <h:outputText value="#{g.text}"/>
      </h:column>
    </h:dataTable>
  </h:form>
</h:body>
</html>
JMS and JavaMail
Speaking of JMS, let's add a MDB
package com.acme.greetings;

import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/GreetingTopic"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic") })
public class GreetingMailer implements MessageListener {

    @Resource(mappedName = "java:/Mail")
    Session mailSession;

    @Override
    public void onMessage(Message message) {
        TextMessage textMessage = (TextMessage) message;
        try {
            mail(textMessage.getText());
        } catch (Exception e) {
            // Forgive me, for I have sinned
            System.out.println(String.format("No mailing: %s", e.getMessage()));
        }
    }

    private void mail(String greeting) throws MessagingException {
        MimeMessage message = new MimeMessage(mailSession);
        message.setFrom(new InternetAddress("greetingmaster@nowhere.org"));
        message.addRecipient(javax.mail.Message.RecipientType.TO, new InternetAddress("you@yourplace.com"));
        message.setSubject("New greeting has arrived!");
        message.setText(greeting);
        Transport.send(message);
    }
}
This is a message driven bean that listens to the topic
/topic/GreetingTopic and when a message is received, it grabs the default Mail session from
the application server and sends off a notification; configure it in mail-service.xml in the deploy directory. You should also add the topic to the
deploy/hornetq/hornetq-jms.xml like
<topic name="GreetingTopic">
  <entry name="/topic/GreetingTopic"/>
</topic>
There are probably better ways of deploying topics but I'm not writing a JBoss AS tutorial here. This MDB isn't going to see any action unless someone actually sends it messages. Let's add a dispatcher:
package com.acme.greetings;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.event.Observes;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

@Stateless
public class JMSDispatcher {

    @Resource(mappedName = "/ConnectionFactory")
    ConnectionFactory connectionFactory;

    @Resource(mappedName = "/topic/GreetingTopic")
    Topic topic;

    public void broadcast(@Observes @Added Greeting greeting) throws JMSException {
        Session session = null;
        MessageProducer producer = null;
        Connection connection = null;
        try {
            connection = connectionFactory.createConnection();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage(greeting.getText());
            producer.send(message);
        } finally {
            if (producer != null) {
                producer.close();
            }
            if (session != null) {
                session.close();
            }
            if (connection != null) {
                connection.close();
            }
        }
    }
}
This bean observes the same added greeting as the GreetingArchiver and sends the greeting off to our topic. There are probably better ways of writing this but I'm not writing a JMS tutorial (either). Hmm, I forget what kind of tutorial I'm actually writing.
Something for the future
This snapshot doesn't have everything yet; I'd like to have @Schedule-d greetings so we don't get lonely. This doesn't actually work yet, but let's add it and keep upgrading our snapshots, so perhaps one day we'll get lucky.
package com.acme.greetings;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.inject.Inject;

@Singleton
public class ArbiBean {

    static final List<String> comments = new ArrayList<String>() {
        private static final long serialVersionUID = 1L;
        {
            add("Tried the new JRebel? I heard it can read your thoughts?");
            add("Where can I find the specs?");
            add("Shouldn't you be using RichFaces 4 instead?");
            add("How is the Seam 3 performance compared to Seam 2?");
            add("PIGBOMB!");
        }
    };

    @Inject
    GreetingServer greetingServer;

    // minute and hour default to "0", so they must be wildcarded
    // for a truly every-15-seconds schedule
    @Schedule(second = "*/15", minute = "*", hour = "*")
    public void comment() {
        greetingServer.addGreeting(new Greeting(getRandomComment()));
    }

    private String getRandomComment() {
        // the original seeded Random with the list size and called the
        // unbounded nextInt(), which would throw IndexOutOfBoundsException
        return comments.get(new Random().nextInt(comments.size()));
    }
}
This one should start up and send comments to the server every 15 seconds. But like I said, not yet.
Conclusion
This ends part IV. In part V we'll add JAX-RS and JAX-WS to share our greetings with the world. If you haven't noticed it by now, developing against snapshots and alpha releases requires a fair amount of workarounds. But as a developer you can choose - do you want to sit around and wait for the final product, or do you want to join the ride and get first-hand knowledge of things to come (filing JIRAs as you encounter issues)?
In this third part we'll be hooking up JPA 2 with the static metamodel generator, Bean Validation and Envers.
JPA
Let's get persistent. When we're talking persistence, we need a persistence.xml, so let's make a folder META-INF in src/main/resources and create one there
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                 http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
  <!-- unit name assumed; the original listing was damaged -->
  <persistence-unit name="GreetingPU">
    <jta-data-source>java:/DefaultDS</jta-data-source>
    <properties>
      <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    </properties>
  </persistence-unit>
</persistence>
We're using the standard HSQL DefaultDS that comes with JBoss AS 6. If you want to be really hip, google around for @DataSourceDefinition, which is a new kid on the block in EE 6 (I haven't tried whether AS 6 supports it yet, though)
Next, let's expand our model from Strings to a Greeting entity. Create a
package com.acme.greetings;

import static javax.persistence.TemporalType.TIMESTAMP;

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Entity
public class Greeting {

    @Id
    @GeneratedValue
    int id;

    String text;

    @Temporal(TIMESTAMP)
    Date created = new Date();

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }
}
change the GreetingBean to
package com.acme.greetings;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.inject.Named;

import org.icefaces.application.PushRenderer;

@ApplicationScoped
@Named
public class GreetingBean implements Serializable {

    Greeting greeting = new Greeting();
    List<Greeting> greetings = new ArrayList<Greeting>();

    @Inject
    @Added
    Event<Greeting> greetingAddedEvent;

    @Inject
    GreetingArchiver greetingArchiver;

    @PostConstruct
    public void init() {
        greetings = greetingArchiver.loadGreetings();
    }

    public void addGreeting() {
        greetings.add(greeting);
        greetingAddedEvent.fire(greeting);
        greeting = new Greeting();
        PushRenderer.render("greetings");
    }

    public Greeting getGreeting() {
        return greeting;
    }

    public void setGreeting(Greeting greeting) {
        this.greeting = greeting;
    }

    public List<Greeting> getGreetings() {
        PushRenderer.addCurrentSession("greetings");
        return greetings;
    }

    public void setGreetings(List<Greeting> greetings) {
        this.greetings = greetings;
    }
}
We have also injected an event that is fired when comments are added:
@Inject @Added Event<Greeting> greetingAddedEvent;
so we need a qualifier called Added (the annotation body was lost from the original listing; this is the standard shape of a CDI qualifier):

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD })
public @interface Added {
}
and greetings.xhtml to
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html">
<h:body>
  <h:form>
    <!-- attributes reconstructed from the backing bean; the original markup was damaged -->
    <h:inputText value="#{greetingBean.greeting.text}"/>
    <h:commandButton value="Add" action="#{greetingBean.addGreeting}"/>
    <h:dataTable value="#{greetingBean.greetings}" var="g">
      <h:column>
        <h:outputText value="#{g.text}"/>
      </h:column>
    </h:dataTable>
  </h:form>
</h:body>
</html>
(we changed the order of the table and the input fields as it was getting annoying to have the field and button move down as we add comments)
Of course, since we are firing events, it would be nice if someone is actually listening. Let's create a GreetingArchiver:
package com.acme.greetings;

import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

@ApplicationScoped
public class GreetingArchiver {

    @PersistenceContext
    EntityManager db;

    @Inject
    UserTransaction userTransaction;

    // reconstructed: persists observed greetings inside a bean-managed
    // transaction (the original listing was damaged)
    public void saveGreeting(@Observes @Added Greeting greeting) {
        try {
            userTransaction.begin();
            db.persist(greeting);
            userTransaction.commit();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public List<Greeting> loadGreetings() {
        return db.createQuery("from Greeting").getResultList();
    }
}
That observes @Added Greetings and stores them to the database. Notice also the loadGreetings() method that GreetingBean calls in its @PostConstruct to populate itself with old comments. Well, with create-drop in our persistence.xml there won't be much to load, but let's fix that later.
JPA 2
That's all nice, but we're trying to be livin' on the edge here, so let's bring in JPA 2 typesafe queries, and with those we'd better have a static metamodel generator, otherwise maintaining the attributes will quickly become a burden. There is Eclipse integration available (google around), but if you're doing automated Maven-based builds you're going to need this anyway. Since both Eclipse and Maven are involved in building, be prepared for some chicken-and-egg project cleaning and refreshing in Eclipse from time to time when adding new entities. Anyway, open up pom.xml and add some plugin repositories:
<pluginRepositories>
  <pluginRepository>
    <id>jfrog</id>
    <url></url>
  </pluginRepository>
  <pluginRepository>
    <id>maven plugins</id>
    <url></url>
  </pluginRepository>
</pluginRepositories>
The maven-compiler-plugin will need an argument to not process annotations automagically once we slap jpamodelgen on the classpath
<configuration>
  <source>1.6</source>
  <target>1.6</target>
  <compilerArgument>-proc:none</compilerArgument>
</configuration>
and the job should be taken over by our new build-plugins:
<plugin>
  <groupId>org.bsc.maven</groupId>
  <artifactId>maven-processor-plugin</artifactId>
  <version>1.3.5</version>
  <executions>
    <execution>
      <id>process</id>
      <goals>
        <goal>process</goal>
      </goals>
      <phase>generate-sources</phase>
      <configuration>
        <outputDirectory>target/metamodel</outputDirectory>
      </configuration>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-jpamodelgen</artifactId>
      <version>1.0.0.Final</version>
    </dependency>
  </dependencies>
</plugin>
Dan Allen thinks this is a lot of configuration for this task, I'll have to remember to ask if he ever got his simple, elegant solution to place the artifacts in usable places ;-)
Run mvn eclipse:eclipse to have target/metamodel added to the Eclipse build path and do the project-level Maven -> Update Project Configuration.
Run the maven build and you should see the Greeting_ class appear in target/metamodel and in the WAR structure. Now let's bring it into use:
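For reference, the generated Greeting_ looks roughly like this — a sketch, as the exact output depends on the generator version:

```java
package com.acme.greetings;

import java.util.Date;

import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;

// One static attribute per persistent field of Greeting; the provider
// populates these at runtime so queries can reference fields typesafely.
@StaticMetamodel(Greeting.class)
public abstract class Greeting_ {
    public static volatile SingularAttribute<Greeting, Integer> id;
    public static volatile SingularAttribute<Greeting, String> text;
    public static volatile SingularAttribute<Greeting, Date> created;
}
```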
First we add EntityManager/EntityManagerFactory producers (the recommended CDI way of wrapping them)
package com.acme.greetings;

import javax.enterprise.inject.Produces;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceUnit;

public class DBFactory {

    @Produces
    @GreetingDB
    @PersistenceContext
    EntityManager entityManager;

    @Produces
    @GreetingDB
    @PersistenceUnit
    EntityManagerFactory entityManagerFactory;
}
We also need a qualifier for that (the annotation body was lost from the original listing; this is the standard shape of a CDI qualifier):

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD })
public @interface GreetingDB {
}
Finally, let's modify our GreetingArchiver:
package com.acme.greetings;

import java.util.Date;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.transaction.UserTransaction;

@ApplicationScoped
public class GreetingArchiver {

    @Inject
    @GreetingDB
    EntityManager db;

    @Inject
    UserTransaction userTransaction;

    // reconstructed: persists observed greetings inside a bean-managed
    // transaction (the original listing was damaged)
    public void saveGreeting(@Observes @Added Greeting greeting) {
        try {
            userTransaction.begin();
            db.persist(greeting);
            userTransaction.commit();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // loadQuery and timestampParam were lost from the original listing
    public List<Greeting> loadGreetings() {
        Date tenMinutesAgo = new Date();
        tenMinutesAgo.setTime(tenMinutesAgo.getTime() - 10 * 60 * 1000);
        return db.createQuery(loadQuery).setParameter(timestampParam, tenMinutesAgo).getResultList();
    }
}
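The loadQuery used above can be built in a typesafe way with the JPA 2 criteria API and the generated metamodel. A sketch — the variable and parameter names here are my own choices, since the original construction was lost:

```java
// Build "greetings created after :created" with the criteria API,
// referencing the field through the generated Greeting_ metamodel.
CriteriaBuilder cb = db.getCriteriaBuilder();
CriteriaQuery<Greeting> loadQuery = cb.createQuery(Greeting.class);
Root<Greeting> greeting = loadQuery.from(Greeting.class);
ParameterExpression<Date> timestampParam = cb.parameter(Date.class, "created");
loadQuery.select(greeting)
         .where(cb.greaterThan(greeting.get(Greeting_.created), timestampParam));
```

A typo in Greeting_.created would now be a compile error instead of a runtime query-parsing surprise.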
Bean Validation
Adding Bean Validation is a breeze, just stick the annotations on the entity fields in Greeting:
@Size(min = 1, max = 50) String text;
and attach a message to the input field in greetings.xhtml:
<ice:inputText id="greeting" value="#{greetingBean.greeting.text}" effect="#{greetingBean.appear}"/>
<h:message for="greeting"/>
I tried placing a @NotNull on text, but it still failed on submit because the value came in as an empty string (this might be the designed behavior), so I used min = 1 instead.
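You can also exercise the constraints outside JSF with the programmatic Validator API — a sketch, assuming a Bean Validation provider such as Hibernate Validator is on the classpath:

```java
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class GreetingValidationCheck {

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        Greeting g = new Greeting();
        g.setText(""); // empty string violates @Size(min = 1)

        // One violation expected for the text field
        Set<ConstraintViolation<Greeting>> violations = validator.validate(g);
        for (ConstraintViolation<Greeting> v : violations) {
            System.out.println(v.getPropertyPath() + ": " + v.getMessage());
        }
    }
}
```

Note that @Size does not reject null values (that's what @NotNull is for), which matches the empty-string behavior observed above.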
Envers
If you would like to have auditing on your entities, you need to add the Envers dep to pom.xml
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-envers</artifactId>
  <version>3.5.1-Final</version>
  <scope>provided</scope>
</dependency>
I cheated a little and marked it provided, as it was pulling in a lot of deps. I downloaded the envers.jar and dropped it in the server common lib with its other Hibernate buddy jars. After that we can stick an annotation on the entity
@Entity @Audited public class Greeting
Last but not least, to enjoy automatic data auditing we need to add the Envers listeners to persistence.xml
<!-- reconstructed: the standard Envers 3.5 listener registration; the original listing was damaged -->
<property name="hibernate.ejb.event.post-insert"
          value="org.hibernate.ejb.event.EJB3PostInsertEventListener,org.hibernate.envers.event.AuditEventListener"/>
<property name="hibernate.ejb.event.post-update"
          value="org.hibernate.ejb.event.EJB3PostUpdateEventListener,org.hibernate.envers.event.AuditEventListener"/>
<property name="hibernate.ejb.event.post-delete"
          value="org.hibernate.ejb.event.EJB3PostDeleteEventListener,org.hibernate.envers.event.AuditEventListener"/>
<property name="hibernate.ejb.event.pre-collection-update"
          value="org.hibernate.envers.event.AuditEventListener"/>
<property name="hibernate.ejb.event.pre-collection-remove"
          value="org.hibernate.envers.event.AuditEventListener"/>
<property name="hibernate.ejb.event.post-collection-recreate"
          value="org.hibernate.envers.event.AuditEventListener"/>
This concludes part III; next time we'll be looking at EJBs and MDBs in more detail. And perhaps abandoning our WAR-only utopia for now. Guess I'll have to learn about the maven-ejb-plugin. Hints for good tutorials accepted.
PS. Have you checked the size of the WAR file after you've taken all these technologies into use? Around 320k, and of that ~85% is the ICEfaces libs (the only external deps)
This is part II of my series on how to set up a Java EE 6 application and stuff in as many technologies as can ever fit into a simple application. And then some.
ICEfaces
Now let's pull in ICEfaces. The guys at RedHat/JBoss have a RichFaces fetish, so it's only fair that the competition gets some attention, too. And it increases my chances of getting a t-shirt in the ICEfaces blogging competition. Oh yeah, and it's a nice framework in general.
Now, since we're living on the edge, let's pull in their 2.0 alpha 3 version. Unfortunately, the new component set didn't make A3, so it's a bit vanilla at this stage. Anyway, let's add their snapshots repository to our pom.xml
<repositories>
  <repository>
    <id>ICEfaces snapshots</id>
    <url></url>
  </repository>
</repositories>
and add a version property
<icefaces.version>2.0-A3</icefaces.version>
and add the deps
<dependency>
  <groupId>org.icefaces</groupId>
  <artifactId>icefaces</artifactId>
  <version>${icefaces.version}</version>
  <exclusions>
    <exclusion>
      <groupId>javax.faces</groupId>
      <artifactId>jsf-impl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.icefaces</groupId>
  <artifactId>icepush</artifactId>
  <version>${icefaces.version}</version>
</dependency>
Notice how we exclude the jsf-impl? We're using an appserver, for crying out loud! No need to throw stuff like that in; it's already provided.
Next, let's change our GreetingBean to actually do something more useful
package com.acme.greetings;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

import org.icefaces.application.PushRenderer;

@ApplicationScoped
@Named
public class GreetingBean implements Serializable {

    String greeting;
    List<String> greetings = new ArrayList<String>();

    public void addGreeting() {
        greetings.add(greeting);
        greeting = "";
        PushRenderer.render("greetings");
    }

    public String getGreeting() {
        return greeting;
    }

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }

    public List<String> getGreetings() {
        PushRenderer.addCurrentSession("greetings");
        return greetings;
    }

    public void setGreetings(List<String> greetings) {
        this.greetings = greetings;
    }
}
We've changed the GreetingBean to be @ApplicationScoped, that is, common for everyone, so we have a sort of chatroom. In getGreetings() we register the PushRenderer for our session, and in addGreeting() we trigger a render to all the currently active clients. This mechanism allows DOM changes to be pushed to the clients without any need for active polling or browser refresh. Neat, huh?
Registering the PushRenderer sessions in a getter like this is a bit of a hack that stems from the fact that we have this application-scoped Grand Unified Bean that is initialized only once (for the first user triggering it) while the sessions need to be registered for each client on their FacesContext (I actually got this wrong the first time). No worries, we will fix this in the next part where we refactor it into GreetingServer and GreetingClient which have more appropriate scopes. Writing applications is a constant flow of refactorings and improvements, remember?
Let's for now forget about such trivialities as concurrent access to the list etc (we're moving it to a concurrency-controlled @Singleton later on). We're in a happy place...
Fixing the broken test is left as an exercise for the reader.
Now let's change our greetings.xhtml to use the new backing bean
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html">
<h:body>
  <h:form>
    <!-- attributes reconstructed from the backing bean; the original markup was damaged -->
    <h:dataTable value="#{greetingBean.greetings}" var="g">
      <h:column>
        <h:outputText value="#{g}"/>
      </h:column>
    </h:dataTable>
    <h:inputText value="#{greetingBean.greeting}"/>
    <h:commandButton value="Add" action="#{greetingBean.addGreeting}"/>
  </h:form>
</h:body>
</html>
Repack. Redeploy. Re...what...the...$CURSE$ is a ZipException and what does it want from me? Oops. Known bug in M3, let's work around it
When the going gets tough, the tough get going. Locate icefaces-2.0-A3.jar in your local repository. Extract its faces-config.xml, replace your own faces-config.xml with it (yours was empty anyway) and finally rip out the entire faces-config.xml from the icefaces jar (picture the heart scene from Indiana Jones and the Temple of Doom).
I could have just said "move faces-config.xml from the icefaces jar to the application WEB-INF" but that sounded a lot more cool.
Repack. Redeploy. Rejoice. Business as usual.
Point your browser at and add some text. Now comes the nice part -
open another browser (another brand), or use the incognito mode (you know, the mode where we can surf... gifts to our loved ones without leaving traces in the browsing history), or the IE "New Session" option to open another session to the application. This is important because browsers commonly share sessions and cookies between tabs, so we want to make sure there's no cheating involved here. Tap in some comment in the other session and it should become visible immediately in both. Ta-daa...
ICEfaces 2
Now let's bring in some ice components. Wait, didn't I just say they slipped from alpha 3? Yes, the new ones, but you can still use the components from the 1.8 series by adding a compatibility lib. Let's add the deps to pom.xml
<dependency>
  <groupId>org.icefaces</groupId>
  <artifactId>icefaces-compat</artifactId>
  <version>${icefaces.version}</version>
</dependency>
<dependency>
  <groupId>org.icefaces</groupId>
  <artifactId>icefaces-comps-compat</artifactId>
  <version>${icefaces.version}</version>
</dependency>
And yes, to avoid the dreaded ZipException when deploying the application, you have to move the contents of the faces-config.xml files from the jars in your local repository into your application's own faces-config.xml before assembling and deploying the application.
Next, let's modify greetings.xhtml to add the namespace
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:ice="http://www.icesoft.com/icefaces/component">
and let's change the input field to the iced version that supports effects
<ice:inputText value="#{greetingBean.greeting}" effect="#{greetingBean.appear}"/>
and finally we add an effect-producer to GreetingBean
@Produces
@RequestScoped
public Effect getAppear() {
    return new Appear();
}
Bling-bling!
Some URL cleaning
As I mentioned before, the URL isn't that user-friendly, let's see what we can do about that. Let's start with the http listener port.
We have two options - read the documentation or search all xml files that contain the string 8080. Let's go with the second option, as I know that's how you usually do stuff anyway. bindings-jboss-beans.xml in server\default\conf\bindingservice.beans\META-INF sounds promising. s/8080/80.
Next - the welcome file. Edit web.xml and add
<welcome-file-list>
  <welcome-file>faces/welcome.xhtml</welcome-file>
</welcome-file-list>
(Hey! It's actually working! I was just getting to the part where I would complain about having to add a redirecting index.html)
Next - the web context root. Sure, it could be fixed by changing the name of the war, since they are in sync if not otherwise stated, but let's state otherwise. There is no standard way of doing this(?), so let's do the vendor-specific thingie instead. Add a jboss-web.xml next to web.xml containing
<jboss-web>
  <context-root>Greetings</context-root>
</jboss-web>
Now we have gone from
to
A note on testing
As you expand your application to bring in new dependencies (such as ICEfaces), always keep in mind that testing those classes also means you have to provide the dependencies in the Arquillian deployment. So be sure to add the dependent jars; the view of the world the Arquillian test archives have is what you pack into them manually (apart from what the appserver already provides).
This concludes part II; in the next part I'll get to JPA 2 and the static metamodel generator configuration.
The goal of this blog post is to walk you through a Java EE 6 application from a simple, static web page until we have a full blown stack that consists of the stuff in the list below. I'm calling this stack Summer because after a long, hard winter Spring may be nice but boy, wait until Summer kicks in ;-)
- CDI (Weld)
- JSF 2 (facelets, ICEfaces 2)
- JPA 2 (Hibernate, Envers)
- EJB 3.1 (no-local-view, asynchronous, singletons, scheduling)
- Bean Validation (Hibernate Validator)
- JMS (MDB)
- JAX-RS (RESTEasy)
- JAX-WS
- Arquillian (incontainer-AS6)
We will pack all this in a single WAR. Just because we can (spoiler: in part IV). Notice that apart from the component and testing frameworks, they are all standards? That's a lot of stuff. Fortunately, the appserver already provides most of it, so your app will still be reasonably small.
As for the environment I'm using
- Eclipse (Galileo SR2)
- JBoss 6.0 M3
- Maven 3 (beta1)
- Sun JDK 6
- m2eclipse 0.10
This will not be your typical blog post where everything goes well - we will hit bugs. There will be curses, blood, guts and drama, and we will do workarounds and rewrites as we move along. Pretty much what your average day as a software developer probably looks like.
I'm also no expert in the technologies I use here so there are probably things that could be done better. Consider this more of a write-down
of my experiences in EE6-land that will probably mirror what others are going through. I will also not point you to links or additional information,
I assume that if I say
RESTEasy, you can google up more information if you are interested.
And I almost forgot: don't panic.
In the beginning: Project setup
So, let's start things off - go and download the stuff mentioned in the environment if you don't already have it. I'm not going to insult your intelligence by walking you through that (remind me to insult it later). Besides, it's pretty straightforward.
Let's make a new Maven project (File -> New -> Project... -> Maven -> Maven Project).
We skip the archetype selection and just make a simple project with group id
com.acme, artifact id
Greetings of version
1.0.0-SNAPSHOT
packed as a WAR. Finish the wizard and you should have a nice, perfect project. It will never be this perfect again, as our next step is adding code to it.
Maven tip of the day for Windows users: google up on how to change the path to your local repo, as it might be somewhere under Documents And Settings, which has two effects: the classpath gets huge and there can be problems due to the spaces. Change it to something like c:\java\m2repo
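This is done in your Maven settings.xml (typically under your user's .m2 directory; the path below is just an example):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <localRepository>c:\java\m2repo</localRepository>
</settings>
```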
The first thing we notice is that m2eclipse has J2SE-1.4 as the default. How 2002. Besides, that would make using annotations impossible, so let's change that. Edit the pom.xml and throw in
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.1</version>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>
Save and right-click the project root and go Maven -> Update Project Configuration. Aah, that's better
JSF
Let's wake up JSF. We create a folder WEB-INF in src/main/webapp and throw in a web.xml, because no web app is complete without one (enforced by the maven-war-plugin). OK, actually this can be configured in the plugin, but let's keep the web.xml since we'll need it later.
<?xml version="1.0" encoding="ISO-8859-1"?>
<!-- reconstructed: the schema declaration was lost from the original listing -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
</web-app>
and an empty faces-config.xml next to it
<faces-config
and in webapp we add a greeting.xhtml like
<:outputText </h:body> </html>
Will it blend? I mean, will it deploy? Do a
mvn clean package and you should have a Greetings-1.0.0-SNAPSHOT in your projects target directory.
Throw it into the AS server/default/deploy directory and start up the server and go to
The url is not pretty, but the server port, web context root, welcome files and JSF mappings can all be tuned later, let's focus on technologies and dependencies for now. But wait - at which point did we define the JSF servlet and mappings in web.xml? We didn't. It's automagic for JSF-enabled applications.
EJB and CDI
Next step is bringing in some backing beans, let's outsource our greeting. We make a stateless EJB and use it in CDI
package com.acme.greetings;
@Stateful @Model public class GreetingBean { public String getGreeting() { return "Hello world"; } }
The @Stateful defines a stateful session EJB (3.1 since it's a POJO) and the @Model is a CDI stereotype that is @RequestScoped and @Named
(which means the lifecycle is bound to a single HTTP request and it has a name that can be referenced in EL and defaults to
greetingBean
in this case). But we have a problem - the annotations don't resolve to anything. So we need to pick them up from somewhere(tm). Fortunately
we can have all the APIs picked up for us by adding the following to our pom.xml
<dependencies> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-javaee-6.0</artifactId> <version>1.0.0.Beta4</version> <type>pom</type> <scope>provided</scope> </dependency> </dependencies>
Sun Java API artifacts are a bit amusing since getting hold of them can be a bit tricky. First they publish them in the JSR:s and then they treat them like they're top secret. Fortunately Glassfish and now JBoss have started making them available in their repositories (although under their own artifact names, but still)...
We also need to make sure we have set up the JBoss repositories for this according to.
Have a look at what happened in the projects
Maven Dependencies. Good. Now close it and back away. It's getting hairy in there so better trust
Maven to keep track of the deps from now on.
The imports should now be available in our bean so we import
import javax.ejb.Stateful; import javax.enterprise.inject.Model;
and EL-hook the bean up with
<h:body> <h:outputText </h:body>
in greetings.xhtml.
Just as no web application is complete without web.xml, no CDI application is complete without beans.xml. Let's add it to WEB-INF
<?xml version="1.0" encoding="ISO-8859-1"?> <beans xmlns="" xmlns:
Package and redeploy. We get a warning about encoding when compiling so lets add this to our pom.xml
<properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties>
Back to SUCCESS! No. Wait. Huge stack trace hits you for 300 points of damage. Let's back up on our EJB, there are still some issues with 3.1 style EJBs in WAR-only-packaging on AS 6 M3. Remove the @Stateful annotation and it becomes a normal CDI managed POJO. Repackage. Redploy. Recoyce.
Testing
Testing is hip nowadays so let's bring in Arquillian. Arquillian is the latest and greatest in EE testing (embedded or incontainer).
Start using it now. In a year or so when everone else catch up you can go
I've been using it since Alpha. Add the following property to pom.xml:
<arquillian.version>1.0.0.Alpha2</arquillian.version>
and these deps
<dependency> <groupId>org.jboss.arquillian</groupId> <artifactId>arquillian-junit</artifactId> <version>${arquillian.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.8.1</version> <scope>test</scope> </dependency>
and this profile
<profiles> <profile> <id>jbossas-local-60</id> <dependencies> <dependency> <groupId>org.jboss.arquillian.container</groupId> <artifactId>arquillian-jbossas-local-60</artifactId> <version>1.0.0.Alpha2</version> </dependency> <dependency> <groupId>org.jboss.jbossas</groupId> <artifactId>jboss-server-manager</artifactId> <version>1.0.3.GA</version> </dependency> <dependency> <groupId>org.jboss.jbossas</groupId> <artifactId>jboss-as-client</artifactId> <version>6.0.0.20100429-M3</version> <type>pom</type> </dependency> </dependencies> </profile> </profiles>
Maven will probably now download the entire internet for you.
Let's write our first test and place it in the test source folder:
package com.acme.greetings.test; import javax.inject.Inject; import org.jboss.arquillian.api.Deployment; import org.jboss.arquillian.junit.Arquillian; import org.jboss.shrinkwrap.api.ArchivePaths; import org.jboss.shrinkwrap.api.ShrinkWrap; import org.jboss.shrinkwrap.api.spec.JavaArchive; import org.jboss.shrinkwrap.impl.base.asset.ByteArrayAsset; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import com.acme.greetings.GreetingBean; @RunWith(Arquillian.class) public class GreetingTest { @Inject GreetingBean greetingBean; @Deployment public static JavaArchive createTestArchive() { return ShrinkWrap.create("test.jar", JavaArchive.class).addClass( GreetingBean.class).addManifestResource( new ByteArrayAsset("<beans/>".getBytes()), ArchivePaths.create("beans.xml")); } @Test public void testInjection() { Assert.assertEquals("Hello World", greetingBean.getGreeting()); } }
and then we try it out with
mvn test -Pjbossas-local-60. If we have the AS running we can save some time, otherwise the manager will start it
automagically. Setting the JBOSS_HOME env helps. What happens here is we use Shrinkwrap to create a deployment which consist of our GreetingBean
and an empty beans.xml file (for CDI) and the bean is then injected for use in our tests.
This concludes Part I. In part II we will set up ICEfaces and expand our application and in part III we'll set up JPA. Part IV is for MDB and EJB and part V for adding JAX-RS and JAX-WS for importing and exporting stuff. | http://in.relation.to/nicklas-karlsson/ | CC-MAIN-2016-30 | refinedweb | 7,715 | 50.53 |
have a project that has a few NSTableDataSource subclasses, and an
NSOutlineViewDataSource, which is in an NSDrawer. I assign attributes
to each of these views, some of which refer to each other, and their
respective Views. Most of the time, I am getting SIGSEGVs. Sometimes,
everything works fine, and I am able to get through a test session with
my program. If I take all the View and DataSource code out, it all
seems to work fine. Have any of you heard of something like this?
Here is how I define my DataSources:
class ProfileViewDataSource( NSObject, NSOutlineViewDataSource ):
class DownloadViewDataSource( NSObject, NSTableDataSource,
Job.JobListener, Job.JobSchedulerListener ):
class DLQueueViewDataSource( NSObject, NSTableDataSource ):
Here is the standard init( ) that I am using for these classes:
def init( self ):
self = super( ProfileViewDataSource, self ).init( )
#init code..
return self
Thank you for your time,
Sean
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/pyobjc/mailman/pyobjc-dev/?viewmonth=200308&viewday=16 | CC-MAIN-2016-40 | refinedweb | 182 | 65.93 |
import "cuelang.org/go/pkg/encoding/hex"
Decode returns the bytes represented by the hexadecimal string s.
Decode expects that src contains only hexadecimal characters and that src has even length. If the input is malformed, Decode returns the bytes decoded before the error.
DecodedLen returns the length of a decoding of x source bytes. Specifically, it returns x / 2.
Dump returns a string that contains a hex dump of the given data. The format of the hex dump matches the output of `hexdump -C` on the command line.
Encode returns the hexadecimal encoding of src.
EncodedLen returns the length of an encoding of n source bytes. Specifically, it returns n * 2.
Package hex imports 3 packages (graph) and is imported by 1 packages. Updated 2020-09-18. Refresh now. Tools for package owners. | https://godoc.org/cuelang.org/go/pkg/encoding/hex | CC-MAIN-2020-40 | refinedweb | 134 | 60.72 |
things my company does on a monthly basis is hold technology meetings to allow the consultants to get together and discuss the latest technologies. Having just come back from PDC there is definitely two technologies that stick out: LINQ and Atlas. In this month's meeting I plan on sharing what I have learned about LINQ. As I dove into some of the new featuresets in the technology preview I began to come to the conclusion that a lot of what I love about javascript is being added to .NET. Think I'm crazy? Well here is just a few examples.
Inferred types C# gets a new construct called var. No, it does not stand for variant for those of you familiar with VB. It means that the type of the variable is decided based off of the value that is set.
For examplevar i = 5; //an int is assumedvar s = "a"; //a string is assumed
While javascript only has a few inherent types (Number, Date, String, and Object) this has always been the way variables are assigned a type, in fact, the syntax is exactly the same as C#.
Dynamic IdentifiersA javascript object that has a property can be accessed in two ways.
myobject.FirstName;ormyobject["FirstName"];
The later way you may note is a string (and could be a variable as well). This type of functionality allows for many interesting and powerful use cases that is not easily accomplished in .NET.
Well the equivalent in VB.NET is now
myobject.("FirstName") 'Note the .(
Extension MethodsFor those of you who don't know, javascript is more of an emulation language than a real language. It can emulate many things, including objects and inheritance (even multiple inheritance). To define an object in javascript you would code the following.
function customer(ID) //Constructor{ this.id = ID; this.FirstName = ''; this.LastName = '';}
To instantiate the object you would write
var oCustomer = new customer(1); //note the inferred type
Lets say that this object is in a script libary that is outside of your control. Lets also say that you feel there should be another method on this object to get the full name. You could write your own script file containing the following code.
customer.prototype.GetFullName = function(){ return this.FirstName + ' ' + this.LastName;}
By importing this script library into your page you could now invoke a method that extends the original object (hence Extension method).
oCustomer.GetFullName();
You may be wondering what this has to do with .NET. Well most of the magic that LINQ brings to the framework is accomplished through something called an Extension Method. What They have done is allow for any object to be extended by simply importing a namespace that contains an extension method. (Hopefully you see the similarity to JS). Most of their samples involve extending the base IEnumerable type which most collections already implement. Lets say we have the following C# code.
using System; //Needed to make LINQ workusing System.Query; //Needed to make LINQ work
namespace CountEx{ public static class MySample { public static int MyCount(this System.Collections.IEnumerable source) { int i = 100; foreach (object item in source) i++; return i; } }}
If I have a program that contains any collection (i.e. ArrayList) I can now extend the object by simply importing (using) the CountEx namespace.
using System; //Needed to make LINQ workusing System.Query; //Needed to make LINQ workusing CountEx; //Import Extended Method(s)
static void Main(){ ArrayList oArr = new ArrayList(); Console.WriteLine(oArr.MyCount());}
Note: When working with the samples and incorrect documentation included, it took me some time to distinguish the difference between IEnumerable and IEnumerable<T>. This is mainly because these two interfaces reside in different namespaces. If you are planning on playing with the LINQ preview make sure you undersand the difference between these interfaces (i.e. IEnumerable is used in "old" collection objects whereas IEnumerable<T> is used in the new collections).
These are just a few similarities that I have seen and have time to mention. Object Initializers have similarities with JSON, the ASP like <% %> syntax has similarities with the javascript eval, and things like closures will now be possible in .NET. One thing to note is that the .NET implementation does not carry the bad baggage that javascript does. All the stuff is type checked and will generate meaningful compile time errors (eventually).
I should mention that I have been a VB programmer from the start. When I started experimenting in javascript I was amazed at what I could do, only I did not have formal terms to call them. I found it really interesting to listen to Anders Hejlsberg at PDC explain that javascript has its origins in languages like LISP/Scheme and not really java, and that these types of constructs have been there for some time.
The .NET framework is just now starting to adopt these methodologies. I find it interesting that the same time MS is focusing on this for the .NET framework they are also working on Atlas and making the support for Javascript better in VS.NET. | https://www.dnnsoftware.com/community-blog/cid/136410 | CC-MAIN-2021-17 | refinedweb | 846 | 64.41 |
OK, so i do have a perfectly good working code but im trying a different way and its bugging me why its not working. any help would be much appreciated. the program suppose to list all the available directories and then ask the user which directory he's searching for, takes the input and search for the directory. if the directory exist it will find it and list all the files in it and the path to it. if it can't find the directory it will throw a message or exception saying so.
Now it lists all the directories just fine and asks the user input but it wont go past that and thats where i'm stucked......
here's the code:
import java.io.File; import java.util.Scanner; import javax.swing.filechooser.*; /** */ public class MyMain { public static void main(String[] args) { //String dir; String keyWord = ""; Scanner keyboard = new Scanner(System.in); File[] dir = File.listRoots(); //System.out.println("Please enter the key to root directory from the list below: " /* FileSystemView.getFileSystemView().getHomeDirectory() + " " + FileSystemView.getFileSystemView().getRoots()[0] + " " + System.getProperty("user.dir") + File.listRoots() */; System.out.println("Available root directories in filesystem are : "); for(int i=0 ; i < dir.length ; i++) { System.out.println(dir[i]); } keyWord = keyboard.nextLine(); System.out.println(" Please enter the directory to scan: "); String path = keyboard.nextLine(); while(path != null){ if( rootDirectories.exists() == false){ System.out.println("Error: That directory does not exist. "); break; } if (rootDirectories.canRead() = false){ System.out.println("Error, cannot read that directory. "); break; } System.out.println("Directory is Valid. "); //test output } keyboard.close(); }
I know its incorrect but can someone give me an idea on how to correct it. thanks | https://www.daniweb.com/programming/software-development/threads/501142/java-finding-the-directories | CC-MAIN-2018-30 | refinedweb | 280 | 53.78 |
Sometimes it is not the bug itself; but how you report them that gets the smile. Take for example this test case for a bug in JDeveloper provided by fellow web service team member Alan Davis.
Create a new java file called Class1 and paste in the following text.
package project1 public class Class1 { public void method() { /** */ class Foo extends ArrayList { /* * /| |\ * | |_| | * | |_| | And for my next trick, * | . . | not only will I make the * \ @ / rabbit disappear, but * =========== I will make the hat * |_ _| vanish too... * | | * | | * |______| */ }; doIt(); // Comment } void doIt() {} }
Then accept the import assistance on the ArrayList import. Ta-daa. For weeks Alan was wondering why we kept on deleting his code... now we know it was just a magic trick. Fortunately you need a very specific combination of code structures and comments to exercise this bug so you shouldn't see this using 11 day to day.
2 comments:
Gerard, you need to give Keimpe Bronkhorst some of the credit for identifying the simplified test case (without the wildlife). Alan
Noted, Keimpe does a Stirling job keeping the java model honest for us. | http://kingsfleet.blogspot.com/2009/03/jdeveloper-code-editor-rabbit-in-hat.html | CC-MAIN-2019-13 | refinedweb | 186 | 71.44 |
DESCRIPTION
These messages are classified as follows (listed in increasing order of
desperation):
accept() on closed socket %s
(W closed) You tried to do an accept on a closed socket. Did you
forget to check the return value of your socket() call? See
"accept" in perlfunc.
Allocation too large: %lx
(X) You can't allocate more than 64K on an MS-DOS machine.
'%c' allowed only after types %s
(F) The modifiers '!', '<' and '>' are allowed in pack() or
unpack() only after certain types. See "pack" in perlfunc.
Ambiguous call resolved as CORE::%s(), qualify as such or use &
(W ambiguous) A subroutine you have declared has the same name as a
Perl keyword, and you have used the name without qualification for
calling one or the other. Perl decided to call the builtin because the
subroutine is not imported. To force interpretation as a subroutine
call, either put an ampersand before the subroutine name, or qualify
the name with its package.
Ambiguous range in transliteration operator
(F) You wrote something like tr/a-z-0// which doesn't mean anything at
all. To include a - character in a transliteration, put it either
first or last.
Ambiguous use of %s resolved as %s
(W ambiguous) You said something that may not be interpreted the
way you thought. Normally it's pretty easy to disambiguate it by
supplying a missing quote, operator, parenthesis pair or
declaration.
'|' and '<' may not both be specified on command line
(F) An error peculiar to VMS. Perl does its own command line
redirection, and found that STDIN was a pipe, and that you also
tried to redirect STDIN using '<'. Only one STDIN stream to a
customer, please.
'|' and '>' may not both be specified on command line
(F) An error peculiar to VMS. Perl does its own command line
redirection, and found that STDOUT was a pipe, and that you also tried
to redirect STDOUT using '>'. Only one STDOUT stream to a customer,
please.
Applying %s to %s will act on scalar(%s)
(W misc) The pattern match (//), substitution (s///), and
transliteration (tr///) operators work on scalar values. If you apply
one of them to an array or a hash, it will convert the array or hash
to a scalar value (the length of an array, or the population info of a
hash) and then work on that scalar value. This is probably not what
you meant to do. See "grep" in perlfunc and "map" in perlfunc for
alternatives.
Args must match #! line
(F) The setuid emulator requires that the arguments Perl was
invoked with match the arguments specified on the #! line. Since
some systems impose a one-argument limit on the #! line, try
combining switches; for example, turn "-w -U" into "-wU".
Arg too short for msgsnd
(F) msgsnd() requires a string at least as long as sizeof(long).
%s argument is not a HASH or ARRAY element or a subroutine
(F) The argument to exists() must be a hash or array element or a
or a hash or array slice, such as:
@foo[$bar, $baz, $xyzzy]
@{$ref->[12]}{"susie", "queue"}
%s argument is not a subroutine name
(F) The argument to exists() for "exists &sub" must be a subroutine
name, and not a subroutine call. "exists &sub()" will generate
this error.
Argument "%s" isn't numeric%s
(W numeric) The indicated string was fed as an argument to an
operator that expected a numeric value instead. If you're
fortunate the message will identify which operator was so
unfortunate.
Argument list not closed for PerlIO layer "%s"
(W layer) When pushing a layer with arguments onto the Perl I/O system
you forgot the ) that closes the argument list. Perl stopped parsing
the layer list at this point and did not attempt to push this layer.
Array @%s missing the @ in argument %d of %s()
(D deprecated) Really old Perl let you omit the @ on array names in
some spots. This is now heavily deprecated.
A thread exited while %d threads were running
(W threads) When using threaded Perl, a thread (not necessarily
the main thread) exited while there were still other threads
running. Usually it's a good idea to first collect the return
values of the created threads by joining them, and only then exit
from the main thread. See threads.
Attempt to access disallowed key '%s' in a restricted hash
(F) The failing code has attempted to get or set a key which is not
in the current set of allowed keys of a restricted hash.
Attempt to bless into a reference
(F) The CLASSNAME argument to the bless() operator is expected to be
the name of the package to bless the resulting object into. You've
supplied instead a reference to something: perhaps you wrote
bless $self, $proto;
when you intended
bless $self, ref($proto) || $proto;
If you actually want to bless into the stringified version of the
reference supplied, you need to stringify it yourself, for example
by:
bless $self, "$proto";
Attempt to delete disallowed key '%s' from a restricted hash
(F) The failing code attempted to delete from a restricted hash a
key which is not in its key set.
Attempt to delete readonly key '%s' from a restricted hash
(F) The failing code attempted to delete a key whose value has been
declared readonly from a restricted hash.
Attempt to free non-arena SV: 0x%lx
(P internal) All SV objects are supposed to be allocated from
arenas that will be garbage collected on exit. An SV was
discovered to be outside any of those arenas.
Attempt to free nonexistent shared string
(P internal) Perl maintains a reference counted internal table of
strings to optimize the storage and access of hash keys and other
strings. This indicates someone tried to decrement the reference
count of a string that can no longer be found in the table.
Attempt to free temp prematurely
(W debugging) Mortalized values are supposed to be freed by the
free_tmps() routine. This indicates that something else is freeing the
SV before the free_tmps() routine gets a chance, which means that the
free_tmps() routine will be freeing an unreferenced scalar when it
does try to free it.
Attempt to free unreferenced glob pointers
(P internal) The reference counts got screwed up on symbol aliases.
Attempt to join self
(F) You tried to join a thread from within itself, which is an
impossible task. You may be joining the wrong thread, or you may
need to move the join() to some other thread.
Attempt to pack pointer to temporary value
(W pack) You tried to pass a temporary value (like the result of a
function, or a computed expression) to the "p" pack() template. This
means the result contains a pointer to a location that could become
invalid anytime, even before the end of the current statement. Use
literals or global values as arguments to the "p" pack() template to
avoid this warning.
Attempt to set length of freed array
(W misc) You tried to set the length of an array which has been freed.
You can do this by storing a reference to the scalar representing the
last index of an array and later assigning through that reference. For
example:
$r = do {my @a; \$#a};
$$r = 503
Attempt to use reference as lvalue in substr
(W substr) You supplied a reference as the first argument to
substr() used as an lvalue, which is pretty strange. Perhaps you
forgot to dereference it first. See "substr" in perlfunc.
Bad arg length for %s, is %d, should be %s
(F) You passed a buffer of the wrong size to one of msgctl(), semctl()
or shmctl().
Bad evalled substitution pattern
(F) You've used the "/e" switch to evaluate the replacement for a
substitution, but perl found a syntax error in the code to
evaluate, most likely an unexpected right brace '}'.
Bad filehandle: %s
(F) A symbol was passed to something wanting a filehandle, but the
symbol has no filehandle associated with it. Perhaps you didn't do
an open(), or did it in another package.
Bad free() ignored
(S malloc) An internal routine called free() on something that had
never been malloc()ed in the first place. Mandatory, but can be
disabled by setting environment variable PERL_BADFREE to 0.
Bad hash
(P) One of the internal hash routines was passed a null HV pointer.
Badly placed ()'s
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
Bad name after %s::
(F) You started to name a symbol by using a package prefix, and
then didn't finish the symbol. In particular, you can't
interpolate outside of quotes, so
$var = 'myvar';
$sym = mypack::$var;
is not the same as
$var = 'myvar';
$sym = "mypack::$var";
Bad symbol for dirhandle
(P) An internal request asked to add a dirhandle entry to something
that wasn't a symbol table entry.
Bareword found in conditional
(W bareword) The compiler found a bareword where it expected a
conditional, which often indicates that an || or && was parsed as part
of the last argument of the previous construct, for example:
open FOO || die;
Bareword "%s" not allowed while "strict subs" in use
(F) With "strict subs" in use, a bareword is only allowed as a
subroutine identifier, in curly brackets or to the left of the "=>"
symbol. Perhaps you need to predeclare a subroutine?
Bareword "%s" refers to nonexistent package
(W bareword) You used a qualified bareword of the form Foo::, but the
compiler saw no other uses of that namespace before that point.
Perhaps you need to predeclare a package?
\1 better written as $1
(W syntax) Outside of patterns, backreferences live on as
variables. The use of backslashes is grandfathered on the right-hand
side of a substitution, but stylistically it's better to use the
variable form because other Perl programmers will expect it, and it
works better if there are more than 9 backreferences.
binmode() on closed filehandle %s
(W unopened) You tried binmode() on a filehandle that was never
opened. Check your control flow and number of arguments.
Bit vector size > 32 non-portable
(W portable) Using bit vector sizes larger than 32 is non-portable.
Bizarre copy of %s in %s
(P) Perl detected an attempt to copy an internal value that is not
copyable.
Buffer overflow in prime_env_iter: %s
(W internal) A warning peculiar to VMS. While Perl was preparing
to iterate over %ENV, it encountered a logical name or symbol
definition which was too long, so it was truncated to the string
shown.
Callback called exit
(F) A subroutine invoked from an external package via call_sv()
exited by calling exit.
%s() called too early to check prototype
(W prototype) You've called a function that has a prototype before the
parser saw a definition or declaration for it, and Perl could not
check that the call conforms to the prototype. You need to either add
an early prototype declaration for the subroutine in question, or move
the subroutine definition ahead of the call to get proper prototype
checking. Alternatively, if you are certain that you're calling the
function correctly, you may put an ampersand before the name to avoid
the warning.
Cannot compress integer in pack
(F) An argument to pack("w",...) was too large to compress. The
BER compressed integer format can only be used with positive
integers, and you attempted to compress Infinity or a very large
number (> 1e308). See "pack" in perlfunc.
Cannot compress negative numbers in pack
(F) An argument to pack("w",...) was negative. The BER compressed
integer format can only be used with positive integers. See "pack"
in perlfunc.
Cannot convert a reference to %s to typeglob
(F) You manipulated Perl's symbol table directly, stored a reference
in it, then tried to access that symbol via conventional Perl syntax.
The access triggers Perl to autovivify that typeglob, but there is no
legal conversion from that type of reference to a typeglob.
Cannot copy to %s in %s
(P) Perl detected an attempt to copy a value to an internal type
that cannot be directly assigned to.
Can't "break" in a loop topicalizer
(F) You called "break", but you're in a "foreach" block rather than
a "given" block. You probably meant to use "next" or "last".
Can't "break" outside a given block
(F) You called "break", but you're not inside a "given" block.
Can't call method "%s" in empty package "%s"
(F) You called a method correctly, and it correctly indicated a
package functioning as a class, but that package doesn't have
ANYTHING defined in it, let alone methods. See perlobj.
Can't chdir to %s
(F) You called "perl -x/foo/bar", but "/foo/bar" is not a directory
that you can chdir to, possibly because it doesn't exist.
Can't coerce %s to integer in %s
(F) Certain types of SVs, in particular real symbol table entries
(typeglobs), can't be forced to stop being what they are. So you
can't say things like:
*foo += 1;
Can't coerce %s to string in %s
(F) Certain types of SVs, in particular real symbol table entries
(typeglobs), can't be forced to stop being what they are.
Can't "continue" outside a when block
(F) You called "continue", but you're not inside a "when" or
"default" block.
Can't create pipe mailbox
(P) An error peculiar to VMS. The process is suffering from
exhausted quotas or other plumbing problems.
Can't declare class for non-scalar %s in "%s"
(F) Currently, only scalar variables can be declared with a
specific class qualifier in a "my", "our" or "state" declaration.
The semantics may be extended for other types of variables in
future.
Can't declare %s in "%s"
(F) Only scalar, array, and hash variables may be declared as "my",
"our" or "state" variables. They must have ordinary identifiers as
names.
Can't do inplace edit: %s is not a regular file
(S inplace) You tried to use the -i switch on a special file, such
as a file in /dev, or a FIFO. The file was ignored.
Can't do inplace edit on %s: %s
(S inplace) The creation of the new file failed for the indicated
reason.
Can't do inplace edit without backup
(F) You're on a system such as MS-DOS that gets confused if you try
reading from a deleted (but still opened) file. You have to say
-i.bak, or some such.
Can't do {n,m} with n > m in regex; marked by <-- HERE in m/%s/
(F) Minima must be less than or equal to maxima. If you really want
your regexp to match something 0 times, just put {0}. The <-- HERE
shows in the regular expression about where the problem was
discovered. See perlre.
Can't do setegid!
(P) The setegid() call failed for some reason in the setuid
emulator of suidperl.
Can't do seteuid!
(P) The setuid emulator of suidperl failed for some reason.
Can't do waitpid with flags
(F) This machine doesn't have either waitpid() or wait4(), so only
waitpid() without flags is emulated.
Can't emulate -%s on #! line
(F) The #! line specifies a switch that doesn't make sense at this
point. For example, it'd be kind of silly to put a -x on the #!
line.
Can't %s %s-endian %ss on this platform
(F) Your platform's byte-order is neither big-endian nor little-
endian, or it has a very strange pointer size. Packing and
unpacking big- or little-endian floating point values and pointers
may not be possible. See "pack" in perlfunc.
Can't exec "%s": %s
(W exec) A system(), exec() or piped open call could not execute the
named program for the indicated reason. Typical reasons include: the
permissions were wrong on the file, the file wasn't found in
$ENV{PATH}, or the executable in question was compiled for another
architecture.
Can't find an opnumber for "%s"
(F) A string of a form "CORE::word" was given to prototype(), but
there is no builtin with the name "word".
Can't find %s character property "%s"
(F) You used "\p{}" or "\P{}" but the character property by that
name could not be found. Maybe you misspelled the name of the
property (remember that the names of character properties consist
only of alphanumeric characters), or maybe you forgot the "Is" or
"In" prefix?
Can't find label %s
(F) You said to goto a label that isn't mentioned anywhere that
it's possible for us to go to. See "goto" in perlfunc.
Can't find %s on PATH
(F) You used the -S switch, but the script to execute could not be
found in the PATH.
Can't find %s on PATH, '.' not in PATH
(F) You used the -S switch, but the script to execute could not be
found in the PATH, or at least not with the correct permissions.
The script exists in the current directory, but PATH prohibits running
it.
Can't find string terminator %s anywhere before EOF
(F) Perl strings can stretch over multiple lines. This message means
that the closing delimiter was omitted. If you're getting this error
from a here-document, you may have included unseen whitespace before
or after your closing tag, or a line terminator of a different type
than expected. A good programmer's editor will have a way to help you
find these characters.
Can't find Unicode property definition "%s"
(F) You may have tried to use "\p" which means a Unicode property
(for example "\p{Lu}" is all uppercase letters). If you did mean
to use a Unicode property, see perlunicode for the list of known
properties. If you didn't mean to use a Unicode property, escape
the "\p", either by "\\p" (just the "\p") or by "\Q\p" (the rest of
the string, until possible "\E").
Can't fork
(F) A fatal error occurred while trying to fork while opening a
pipeline.
Can't get filespec - stale stat buffer?
(S) A warning peculiar to VMS. This arises because of the
difference between access checks under VMS and under the Unix model
Perl assumes. Under VMS, access checks are done by filename,
rather than by bits in the stat buffer, so that ACLs and other
protections can be taken into account. Unfortunately, Perl assumes
that the stat buffer contains all the necessary information, and
passes it, instead of the filespec, to the access checking routine.
It will try to retrieve the filespec using the device name and FID
present in the stat buffer, but this works only if you haven't made
a subsequent call to the CRTL.
Can't get SYSGEN parameter value for MAXBUF
(P) An error peculiar to VMS. Perl asked $GETSYI how big you want
your mailbox buffers to be, and didn't get an answer.
Can't "goto" into the middle of a foreach loop
(F) A "goto" statement was executed to jump into the middle of a
foreach loop. You can't get there from here. See "goto" in
perlfunc.
Can't "goto" out of a pseudo block
(F) A "goto" statement was executed to jump out of what might look
like a block, except that it isn't a proper block. This usually
occurs if you tried to jump out of a sort() block or subroutine,
which is a no-no. See "goto" in perlfunc.
Can't goto subroutine from a sort sub (or similar callback)
(F) The "goto subroutine" call can't be used to jump out of the
comparison sub for a sort(), or from a similar callback (such as the
reduce function in List::Util).
Can't goto subroutine outside a subroutine
(F) The deeply magical "goto subroutine" call can only replace one
subroutine call for another. It can't manufacture one out of whole
cloth. In general you should be calling it out of only an AUTOLOAD
routine anyway. See "goto" in perlfunc.
Can't ignore signal CHLD, forcing to default
(W signal) Perl has detected that it is being run with the SIGCHLD
signal (sometimes known as SIGCLD) disabled. Since disabling this
signal will interfere with proper determination of exit status of
child processes, Perl has reset the signal to its default value.
Can't "last" outside a loop block
(F) A "last" statement was executed to break out of the current
block, except that there isn't a current block. Note that an "if" or
"else" block doesn't count as a "loopish" block, nor does a block
given to sort(), map() or grep(). You can usually double the curlies
to get the same effect, because the inner curlies will be considered a
block that loops once. See "last" in perlfunc.
Can't linearize anonymous symbol table
(F) Perl tried to calculate the method resolution order (MRO) of a
package, but failed because the package stash has no name.
Can't load '%s' for module %s
(F) The module you tried to load failed to load a dynamic extension;
the underlying error from the dynamic loader is reported along with
this message.
Can't localize lexical variable %s
(F) You used local on a variable name that was previously declared
as a lexical variable using "my" or "state". This is not allowed.
If you want to localize a package variable of the same name,
qualify it with the package name.
Can't localize through a reference
(F) You said something like "local $$ref", which Perl can't
currently handle, because when it goes to restore the old value of
whatever $ref pointed to after the scope of the local() is
finished, it can't be sure that $ref will still be a reference.
Can't locate %s
(F) You said to do (or require, or use) a file that couldn't be found.
Perl looks for the file in all the locations specified by @INC, unless
the file name included the full path to the file. Perhaps you need to
set the PERL5LIB environment variable to say where the extra library
is, or maybe the script needs to add the library name to @INC, or
maybe you just misspelled the name of the file. See "require" in
perlfunc and lib.
Can't locate object method "%s" via package "%s"
(F) You called a method correctly, and it correctly indicated a
package functioning as a class, but that package doesn't define
that particular method, nor does any of its base classes. See
perlobj.
Can't locate package %s for @%s::ISA
(W syntax) The @ISA array contained the name of another package
that doesn't seem to exist.
Can't locate PerlIO%s
(F) You tried to use in open() a PerlIO layer that does not exist,
e.g. open(FH, ">:nosuchlayer", "somefile").
Can't make list assignment to \%ENV on this system
(F) List assignment to %ENV is not supported on some systems, notably
VMS.
Can't modify non-lvalue subroutine call
(F) Subroutines meant to be used in lvalue context should be
declared as such, see "Lvalue subroutines" in perlsub.
Can't msgrcv to read-only var
(F) The target of a msgrcv must be modifiable to be used as a
receive buffer.
Can't "next" outside a loop block
(F) A "next" statement was executed to reiterate the current block,
but there isn't a current block. Note that an "if" or "else" block
doesn't count as a "loopish" block, nor does a block given to sort(),
map() or grep(). See "next" in perlfunc.
Can't open %s: %s
(S inplace) The implicit opening of a file through use of the "<>"
filehandle, either implicitly under the "-n" or "-p" command-line
switches, or explicitly, failed for the indicated reason. Usually
this is because you don't have read permission for a file which you
named on the command line.
Can't open a reference
(W io) You tried to open a scalar reference for reading or writing,
using the 3-arg open() syntax :
open FH, '>', $ref;
but your version of perl is compiled without perlio, and this form
of open is not supported.
Can't open perl script %s: %s
(F) The script you specified can't be opened for the indicated
reason.
If you're debugging a script that uses #!, and normally relies on
the shell's $PATH search, the -S option causes perl to do that
search, so you don't have to type the path or "`which
$scriptname`".
Can't "redo" outside a loop block
(F) A "redo" statement was executed to restart the current block,
but there isn't a current block. See "redo" in perlfunc.
Can't remove %s: %s, skipping file
(S inplace) You requested an inplace edit without creating a backup
file. Perl was unable to remove the original file to replace it
with the modified file. The file was left unmodified.
Can't rename %s to %s: %s, skipping file
(S inplace) The rename done by the -i switch failed for some
reason, probably because you don't have write permission to the
directory.
Can't reopen input pipe (name: %s) in binary mode
(P) An error peculiar to VMS. Perl thought stdin was a pipe, and
tried to reopen it to accept binary data. Alas, it failed.
Can't return %s from lvalue subroutine
(F) Perl detected an attempt to return illegal lvalues (such as
temporary or readonly values) from a subroutine used as an lvalue.
This is not allowed.
Can't return outside a subroutine
(F) The return statement was executed in mainline code, that is,
where there was no subroutine call to return out of. See perlsub.
Can't return %s to lvalue scalar context
(F) You tried to return a complete array or hash from an lvalue
subroutine, but you called the subroutine in a way that made Perl
think you meant to return only one value. You probably meant to
write parentheses around the call to the subroutine, which tell
Perl that the call should be in list context.
Can't upgrade %s (%d) to %d
(P) The internal sv_upgrade routine adds "members" to an SV, making
it into a more specialized kind of SV. The top several SV types
are so specialized, however, that they cannot be interconverted.
This message indicates that such a conversion was attempted.
Can't use anonymous symbol table for method lookup
(F) The internal routine that does method lookup was handed a
symbol table that doesn't have a name. Symbol tables can become
anonymous for example by undefining stashes: "undef
%Some::Package::".
Can't use %! because Errno.pm is not available
(F) The first time the %! hash is used, perl automatically loads
the Errno.pm module. The Errno module is expected to tie the %!
hash to provide symbolic names for $! errno values.
Can't use both '<' and '>' after type '%c' in %s
(F) A type cannot be forced to have both big-endian and little-
endian byte-order at the same time, so this combination of
modifiers is not allowed. See "pack" in perlfunc.
Can't use %s for loop variable
(F) Only a simple scalar variable may be used as a loop variable on
a foreach.
Can't use global %s in "%s"
(F) You tried to declare a magical variable as a lexical variable.
This is not allowed, because magic can be tied to only one
namespace.
Can't use '%c' in a group with different byte-order in %s
(F) You attempted to force a different byte-order on a type that is
already inside a group with a byte-order modifier. For example you
cannot force little-endianness on a type that is inside a big-
endian group.
Can't use "my %s" in sort comparison
(F) The global variables $a and $b are reserved for sort
comparisons. You mentioned $a or $b in the same line as the <=> or
cmp operator, and the variable had earlier been declared as a
lexical variable. Either qualify the sort variable with the package
name, or rename the lexical variable.
Can't use %s ref as %s ref
(F) You've mixed up your reference types. You have to dereference
a reference of the type needed. You can use the ref() function to
test the type of the reference, if need be.
Can't use string ("%s") as %s ref while "strict refs" in use
(F) Only hard references are allowed by "strict refs". Symbolic
references are disallowed. See perlref.
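A short sketch of the distinction (the hash and variable names here
are illustrative):

```perl
use strict;
use warnings;

our %config = (mode => "fast");

my $hard = \%config;             # hard reference: fine under strict refs
print $hard->{mode}, "\n";       # fast
print ref($hard), "\n";          # HASH - ref() reports the reference type

my $name = "config";
# print ${$name}{mode};          # symbolic reference - would die with:
#   Can't use string ("config") as a HASH ref while "strict refs" in use
```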
Can't use subscript on %s
(F) The compiler tried to interpret a bracketed expression as a
subscript. But to the left of the brackets was an expression that
didn't look like a hash or array reference, or anything else
subscriptable.
Can't use \%c to mean $%c in expression
(W syntax) In an ordinary expression, backslash is a unary operator
that creates a reference to its argument.
Can't x= to read-only value
(F) You tried to repeat a constant value (often the undefined
value) with an assignment operator, which implies modifying the
value itself. Perhaps you need to copy the value to a temporary,
and repeat that.
Character in 'C' format wrapped in pack
(W pack) You said
pack("C", $x)
where $x is either less than 0 or more than 255; the "C" format is
only capable of representing values in the interval [0, 255], so
Perl behaved as if you meant
pack("C", $x & 255)
Character in 'W' format wrapped in pack
(W pack) You said
pack("U0W", $x)
where $x is either less than 0 or more than 255. However, "U0"-mode
expects all values to fall in the interval [0, 255], so Perl
behaved as if you meant:
pack("U0W", $x & 255)
Character in 'c' format wrapped in pack
(W pack) You said
pack("c", $x)
where $x is either less than -128 or more than 127; the "c" format
is only capable of representing values in the interval [-128, 127],
so Perl behaved as if you meant
pack("c", $x & 255)
Character in '%c' format wrapped in unpack
(W unpack) You tried something like
unpack("H", "\x{2a1}")
where the format expects to process a byte (a character with a
value below 256), but a higher value was provided instead. Perl
uses the value modulus 256 instead, as if you had provided:
unpack("H", "\x{a1}")
Character(s) in '%c' format wrapped in pack
(W pack) You tried something like
pack("u", "\x{1f3}b")
where the format expects to process a sequence of bytes (characters
with a value below 256), but some of the characters had a higher
value. Perl uses the character values modulus 256 instead, as if
you had provided:
pack("u", "\x{f3}b")
Character(s) in '%c' format wrapped in unpack
(W unpack) You tried something like
unpack("s", "\x{1f3}b")
where the format expects to process a sequence of bytes (characters
with a value below 256), but some of the characters had a higher
value. Perl uses the character values modulus 256 instead, as if
you had provided:
unpack("s", "\x{f3}b")
close() on unopened filehandle %s
(W unopened) You tried to close a filehandle that was never opened.
closedir() attempted on invalid dirhandle %s
(W io) The dirhandle you tried to close is either closed or not
really a dirhandle. Check your control flow.
Code missing after '/'
(F) You had a (sub-)template that ends with a '/'. There must be
another template code following the slash. See "pack" in perlfunc.
%s: Command not found
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
Compilation failed in require
(F) Perl could not compile a file specified in a "require"
statement. Perl uses this generic message when none of the errors
that it encountered were severe enough to halt compilation
immediately.
Complex regular subexpression recursion limit (%d) exceeded
(W regexp) The regular expression engine uses recursion in complex
situations where back-tracking is required. Recursion depth is
limited to 32766, or perhaps less in architectures where the stack
cannot grow arbitrarily. Try shortening the string under
examination; looping in Perl code (e.g. with "while") rather than
in the regular expression engine; or rewriting the regular
expression so that it is simpler or backtracks less. (See perlfaq2
for information on Mastering Regular Expressions.)
cond_broadcast() called on unlocked variable
(W threads) Within a thread-enabled program, you tried to call
cond_broadcast() on a variable which wasn't locked. The
cond_broadcast() function is used to wake up all threads that are
blocked in a cond_wait(). To ensure that the signal isn't sent
before another thread has the chance to enter the wait, it is usual
for the signaling thread to first wait for a lock on variable. This
lock attempt will only succeed after the other thread has entered
attempt will only succeed after the other thread has entered
cond_wait() and thus relinquished the lock.
connect() on closed socket %s
(W closed) You tried to do a connect on a closed socket. Did you
forget to check the return value of your socket() call? See
"connect" in perlfunc.
Constant(%s)%s: %s
(F) The parser found inconsistencies either while attempting to
define an overloaded constant, or when trying to find the character
name specified in the "\N{...}" escape. Perhaps you forgot to load
the corresponding "overload" or "charnames" pragma? See charnames
and overload.
Constant(%s)%s: %s in regex; marked by <-- HERE in m/%s/
(F) The parser found inconsistencies while attempting to find the
character name specified in the "\N{...}" escape. Perhaps you
forgot to load the corresponding "charnames" pragma? See
charnames.
Constant is not %s reference
(F) A constant value attempted to be dereferenced, but it amounted
to the wrong type of reference. The message indicates the type of
reference that was expected.
Constant subroutine %s redefined
(S) You redefined a subroutine which had previously been eligible
for inlining. See "Constant Functions" in perlsub for commentary
and workarounds.
Constant subroutine %s undefined
(W misc) You undefined a subroutine which had previously been
eligible for inlining. See "Constant Functions" in perlsub for
commentary and workarounds.
Copy method did not return a reference
(F) The method which overloads "=" is buggy. See "Copy Constructor"
in overload.
CORE::%s is not a keyword
(F) The CORE:: namespace is reserved for Perl keywords.
corrupted regexp pointers
(P) The regular expression engine got confused by what the regular
expression compiler gave it.
corrupted regexp program
(P) The regular expression engine got passed a regexp program
without a valid magic number.
Deep recursion on subroutine "%s"
(W recursion) This subroutine has called itself (directly or
indirectly) 100 times more than it has returned. This probably
indicates an infinite recursion, unless you're writing strange
benchmark programs, in which case it indicates something else.
This threshold can be changed from 100, by recompiling the perl
binary, setting the C pre-processor macro "PERL_SUB_DEPTH_WARN" to
the desired value.
defined(@array) is deprecated
(D deprecated) defined() is not usually useful on arrays because it
checks for an undefined scalar value. If you want to see if the
array is empty, just use "if (@array) { # not empty }" for example.
defined(%hash) is deprecated
(D deprecated) defined() is not usually useful on hashes because it
checks for an undefined scalar value. If you want to see if the
hash is empty, just use "if (%hash) { # not empty }" for example.
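The replacement idioms the two entries above suggest, as a small
sketch:

```perl
use strict;
use warnings;

my @array = ();
my %hash  = ();

# Instead of the deprecated defined(@array) / defined(%hash),
# test the aggregate itself in boolean context:
print "array empty\n" unless @array;
print "hash empty\n"  unless %hash;

push @array, 1;
$hash{key} = "value";
print "both non-empty\n" if @array && %hash;
```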
%s defines neither package nor VERSION--version check failed
(F) You said something like "use Module 42" but in the Module file
there are neither package declarations nor a $VERSION.
Delimiter for here document is too long
(F) In a here document construct like "<<FOO", the label "FOO" is
too long for Perl to handle. You have to be seriously twisted to
write code that triggers this error.
Deprecated use of my() in false conditional
(D deprecated) You used a declaration similar to "my $x if 0".
There has been a long-standing bug in Perl that causes a lexical
variable not to be cleared at scope exit when its declaration
includes a false conditional. Some people have exploited this bug
to achieve a kind of static variable. Since we intend to fix this
bug, we don't want people relying on this behavior. You can achieve
a similar static effect by declaring the variable in a separate
block outside the function, eg
sub f { my $x if 0; return $x++ }
becomes
{ my $x; sub f { return $x++ } }
Beginning with perl 5.9.4, you can also use "state" variables to
have lexicals that are initialized only once (see feature):
sub f { state $x; return $x++ }
DESTROY created new reference to dead object '%s'
(F) A DESTROY() method created a new reference to the object which
is just being DESTROYed. Perl is confused, and prefers to abort
rather than to create a dangling reference.
Did not produce a valid header
See Server error.
(Did you mean "local" instead of "our"?)
(W misc) Remember that "our" does not localize the declared global
variable. You have declared it again in the same lexical scope,
which seems superfluous.
(Did you mean $ or @ instead of %?)
(W) You probably said %hash{$key} when you meant $hash{$key} or
@hash{@keys}. On the other hand, maybe you just meant %hash and
got carried away.
Died
(F) You passed die() an empty string (the equivalent of "die """)
or you called it with no args and both $@ and $_ were empty.
Document contains no data
See Server error.
%s does not define %s::VERSION--version check failed
(F) You said something like "use Module 42" but the Module did not
define a $VERSION.
'/' does not take a repeat count
(F) You cannot put a repeat count of any kind right after the '/'
code. See "pack" in perlfunc.
Don't know how to handle magic of type '%s'
(P) The internal handling of magical variables has been cursed.
do_study: out of memory
(P) This should have been caught by safemalloc() instead.
(Do you need to predeclare %s?)
(S syntax) This is an educated guess made in conjunction with the
message "%s found where operator expected". It often means a
subroutine or module name is being referenced that hasn't been
declared yet. This may be because of ordering problems in your
file, or because of a missing "sub", "package", "require", or "use"
statement.
dump() better written as CORE::dump()
(W misc) You used the obsolescent "dump()" built-in function,
without fully qualifying it as "CORE::dump()". Maybe it's a typo.
See "dump" in perlfunc.
dump is not supported
(F) Your machine doesn't support dump/undump.
Duplicate free() ignored
(S malloc) An internal routine called free() on something that had
already been freed.
Empty \p{} in regex; marked by <-- HERE in m/%s/
(F) The \p and \P escapes take a Unicode property name as argument,
as described in perlunicode and perlre. You used "\p" or "\P" in a
regular expression without specifying the property name.
entering effective %s failed
(F) While under the "use filetest" pragma, switching the real and
effective uids or gids failed.
%ENV is aliased to %s
(F) You're running under taint mode, and the %ENV variable has been
aliased to another hash, so it doesn't reflect anymore the state of
the program's environment. This is potentially insecure.
%s: Eval-group in insecure regular expression
(F) Perl detected tainted data when trying to compile a regular
expression that contains the "(?{ ... })" zero-width assertion,
which is unsafe. See "(?{ code })" in perlre, and perlsec.
%s: Eval-group not allowed at runtime, use re 'eval'
(F) Perl tried to compile a regular expression containing the
"(?{ ... })" zero-width assertion at run time, as it would when the
pattern contains interpolated values. Since that is a security
risk, it is not allowed. If you insist, you may still do this by
using the "use re 'eval'" pragma or by explicitly building the
pattern from an interpolated string at run time and using that in
an eval. See "(?{ code })" in perlre.
%s: Eval-group not allowed, use re 'eval'
(F) A regular expression contained the "(?{ ... })" zero-width
assertion, but that construct is only allowed when the "use re
'eval'" pragma is in effect. See "(?{ code })" in perlre.
EVAL without pos change exceeded limit in regex; marked by <-- HERE in
m/%s/
(F) You used a pattern that nested too many EVAL calls without
consuming any text. Restructure the pattern so that text is
consumed.
The <-- HERE shows in the regular expression about where the
problem was discovered.
exec? I'm not *that* kind of operating system
(F) The "exec" function is not implemented in MacPerl. See
perlport.
Exiting pseudo-block via %s
(W exiting) You are exiting a rather special block construct (like
a sort block or subroutine) by unconventional means, such as a
goto, or a loop control statement. See "sort" in perlfunc.
Exiting subroutine via %s
(W exiting) You are exiting a subroutine by unconventional means,
such as a goto, or a loop control statement.
Exiting substitution via %s
(W exiting) You are exiting a substitution by unconventional means,
such as a return, a goto, or a loop control statement.
Explicit blessing to '' (assuming package main)
(W misc) You are blessing a reference to a zero length string.
This has the effect of blessing the reference into the package
main. This is usually not what you want. Consider providing a
default target package, e.g. bless($ref, $p || 'MyPackage');
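A sketch of the suggested guard (the package name "MyPackage" is
the example from the entry; the constructor is ours):

```perl
use strict;
use warnings;

package MyPackage;
sub new {
    my ($class, $ref) = @_;
    # Fall back to a default target package rather than blessing
    # into the zero-length string (i.e. into main):
    return bless $ref, $class || 'MyPackage';
}

package main;
my $obj = MyPackage->new({});
print ref($obj), "\n";    # MyPackage
```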
%s: Expression syntax
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
%s failed--call queue aborted
(F) An untrapped exception was raised while executing a UNITCHECK,
CHECK, INIT, or END subroutine. Processing of the remainder of the
queue of such routines has been prematurely ended.
False [] range "%s" in regex; marked by <-- HERE in m/%s/
(W regexp) A character class range must start and end at a literal
character, not another character class like "\d" or "[:alpha:]".
The "-" in your false range is interpreted as a literal "-".
Consider quoting the "-", "\-". The <-- HERE shows in the regular
expression about where the problem was discovered. See perlre.
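A sketch of the quoting fix suggested above (the character class is
illustrative):

```perl
use strict;
use warnings;

# "[a-\d]" would warn about a false range; escape the "-" to make
# the intent - a literal hyphen - explicit:
my $re = qr/[a\-\d]/;             # matches "a", "-", or any digit
print "hyphen matches\n" if "-" =~ $re;
print "digit matches\n"  if "7" =~ $re;
```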
FETCHSIZE returned a negative value
(F) A tied array claimed to have a negative number of elements,
which is not possible.
Field too wide in 'u' format in pack
(W pack) Each line in an uuencoded string starts with a length
indicator which can't encode values above 63. So there is no point
in specifying a field width of more than 63.
Filehandle %s opened only for output
(W io) You tried to read from a filehandle opened only for writing.
If you intended it to be a read/write filehandle, you needed to
open it with "+<" or "+>" or "+>>" instead of with "<" or nothing.
If you intended only to read from the file, use "<". See "open" in
perlfunc. Another possibility is that you attempted to open
filedescriptor 0 (also known as STDIN) for output (maybe you closed
STDIN earlier?).
Filehandle %s reopened as %s only for input
(W io) You opened for reading a filehandle that got the same
filehandle id as STDOUT or STDERR. This occurred because you closed
STDOUT or STDERR previously.
Filehandle STDIN reopened as %s only for output
(W io) You opened for writing a filehandle that got the same
filehandle id as STDIN. This occurred because you closed STDIN
previously.
flock() on closed filehandle %s
(W closed) The filehandle you're attempting to flock() got itself
closed some time before now. Check your control flow. flock()
operates on filehandles. Are you attempting to call flock() on a
dirhandle by the same name?
Format not terminated
(F) A format must be terminated by a line with a solitary dot.
Perl got to the end of your file without finding such a line.
Format %s redefined
(W redefine) You redefined a format. To suppress this warning, say
{
no warnings 'redefine';
eval "format NAME =...";
}
Found = in conditional, should be ==
(W syntax) You said
if ($foo = 123)
when you meant
if ($foo == 123)
(or something like that).
%s found where operator expected
(S syntax) The Perl lexer knows whether to expect a term or an
get%sname() on closed socket %s
(W closed)" underlying
the "getpwnam" operator returned an invalid UIC.
getsockopt() on closed socket %s
(W closed) You tried to get a socket option on a closed socket.
Did you forget to check the return value of your socket() call?
See "getsockopt" in perlfunc.
Global symbol "%s" requires explicit package name
(F) You've said "use strict" or "use strict vars", which indicates
that all variables must either be lexically scoped (using "my"),
declared beforehand using "our", or explicitly qualified to say
which package the global variable is in (using "::").
glob failed (%s)
(W glob) Something went wrong with the external program(s) used for
"glob" and "<*.c>". Usually, this means that you supplied a "glob"
pattern that caused the external program to fail and exit with a
nonzero status. If the message indicates that the abnormal exit
resulted in a coredump, this may also mean that your csh (C shell)
is broken. If so, you should change all of the csh-related
variables in config.sh.
Glob not terminated
(F) The lexer saw a left angle bracket in a place where it was
expecting a term, so it's looking for the corresponding right angle
bracket, and not finding it. Chances are you left some needed
parentheses out earlier in the line, and you really meant a "less
than".
Got an error from DosAllocMem
(P) An error peculiar to OS/2. Most probably you're using an
obsolete version of Perl, and this should not happen anyway.
goto must have label
(F) Unlike with "next" or "last", you're not allowed to goto an
unspecified destination. See "goto" in perlfunc.
()-group starts with a count
(F) A ()-group started with a count. A count is supposed to follow
something: a template character or a ()-group.
See "pack" in perlfunc.
%s has too many errors
(F) The parser has given up trying to parse the program after 10
errors. Further error messages would likely be uninformative.
Hexadecimal number > 0xffffffff non-portable
(W portable) The hexadecimal number you specified is larger than
2**32-1 (4294967295) and therefore non-portable between systems.
See perlport for more on portability concerns.
Identifier too long
(F) Perl limits identifiers (names for variables, functions, etc.)
to about 250 characters for simple names, and somewhat more for
compound names (like $A::B). You've exceeded Perl's limits.
Future versions of Perl are likely to eliminate these arbitrary
limitations.
Ignoring %s in character class in regex; marked by <-- HERE in m/%s/
(W) Named Unicode character escapes (\N{...}) may return multi-char
or zero length sequences. When such an escape is used in a
character class its behaviour is not well defined. Check that the
correct escape has been used, and the correct charname handler is
in scope.
Illegal binary digit %s
(F) You used a digit other than 0 or 1 in a binary number.
Illegal binary digit %s ignored
(W digit) You may have tried to use a digit other than 0 or 1 in a
binary number. Interpretation of the binary number stopped before
the offending digit.
Illegal character %s (carriage return)
(F) Perl normally treats carriage returns in the program text as it
would any other whitespace, which means you should never see this
error when Perl was built using standard options. For some reason,
your version of Perl appears to have been built without this
support. Talk to your Perl administrator.
Illegal character in prototype for %s : %s
(W syntax) An illegal character was found in a prototype
declaration. Legal characters in prototypes are $, @, %, *, ;, [,
], &, and \.
Illegal declaration of anonymous subroutine
(F) When using the "sub" keyword to construct an anonymous
subroutine, you must always specify a block of code. See perlsub.
Illegal declaration of subroutine %s
(F) A subroutine was not declared correctly. See perlsub.
Illegal division by zero
(F) You tried to divide a number by 0. Either something was wrong
in your logic, or you need to put a conditional in to guard against
meaningless input.
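A minimal guard of the kind the entry recommends (the function name
is ours):

```perl
use strict;
use warnings;

sub safe_div {
    my ($n, $d) = @_;
    # Guard against meaningless input instead of dying with
    # "Illegal division by zero":
    return undef unless $d;
    return $n / $d;
}

print safe_div(10, 2), "\n";      # 5
print defined(safe_div(10, 0)) ? "oops\n" : "undef for zero\n";
```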
Illegal number of bits in vec
(F) The number of bits in vec() (the third argument) must be a
power of two from 1 to 32 (or 64, if your platform supports that).
Illegal octal digit %s
(F) You used an 8 or 9 in an octal number.
Illegal octal digit %s ignored
(W digit) You may have tried to use an 8 or 9 in an octal number.
Interpretation of the octal number stopped before the 8 or 9.
Illegal switch in PERL5OPT: %s
(X) The PERL5OPT environment variable may only be used to set the
following switches: -[CDIMUdmtw].
Ill-formed CRTL environ value "%s"
(W internal) A warning peculiar to VMS. Perl tried to read the
CRTL's internal environ array, and encountered an element without
the "=" delimiter used to separate keys from values. The element
is ignored.
Ill-formed message in prime_env_iter: |%s|
(W internal) A warning peculiar to VMS. Perl tried to read a
logical name or CLI symbol definition when preparing to iterate
over %ENV, and didn't see the expected delimiter between key and
value, so the line was ignored.
(in cleanup) %s
(W misc) This prefix usually indicates that a DESTROY() method
raised the indicated exception. Since destructors are usually
called by the system at arbitrary points during execution, and
often a vast number of times, the warning is issued only once for
any number of failures that would otherwise result in the same
message being repeated.
Inconsistent hierarchy during C3 merge of class '%s': merging failed on
parent '%s'
(F) The method resolution order (MRO) of the given class is not
C3-consistent, and you have enabled the C3 MRO for this class. See
the C3 documentation in mro for more information.
In EBCDIC the v-string components cannot exceed 2147483647
(F) An error peculiar to EBCDIC. Internally, v-strings are stored
as Unicode code points, and encoded in EBCDIC as UTF-EBCDIC. The
UTF-EBCDIC encoding is limited to code points no larger than
2147483647 (0x7FFFFFFF).
Infinite recursion in regex; marked by <-- HERE in m/%s/
(F) You used a pattern that references itself without consuming any
input text. You should check the pattern to ensure that recursive
patterns either consume text or fail.
The <-- HERE shows in the regular expression about where the
problem was discovered.
Insecure dependency in %s
(F) You tried to do something that the tainting mechanism didn't
like. The tainting mechanism is turned on when you're running
setuid or setgid, or when you specify -T to turn it on explicitly.
The tainting mechanism labels all data that's derived directly or
indirectly from the user, who is considered to be unworthy of your
trust. If any such data is used in a "dangerous" operation, you get
this error. See perlsec for more information.
Insecure directory in %s
(F) You can't use system(), exec(), or a piped open in a setuid or
setgid script if $ENV{PATH} contains a directory that is writable
by the world. Also, the PATH must not contain any relative
directory. See perlsec.
Insecure $ENV{%s} while running %s
(F) You can't use system(), exec(), or a piped open in a setuid or
setgid script if any of $ENV{PATH}, $ENV{IFS}, $ENV{CDPATH},
$ENV{ENV}, $ENV{BASH_ENV} or $ENV{TERM} are derived from data
supplied (or potentially supplied) by the user. The script must
set the path to a known value, using trustworthy data. See
perlsec.
Integer overflow in %s number
(W overflow) The hexadecimal, octal or binary number you have
specified either as a literal or as an argument to hex() or oct()
is too big for your architecture, and has been converted to a
floating point number.
Integer overflow in format string for %s
(F) The indexes and widths specified in the format string of
"printf()" or "sprintf()" are too large. The numbers must not
overflow the size of integers for your architecture.
Internal disaster in regex; marked by <-- HERE in m/%s/
(P) Something went badly wrong in the regular expression parser.
The <-- HERE shows in the regular expression about where the
problem was discovered.
Internal inconsistency in tracking vforks
(S) A warning peculiar to VMS. Perl keeps track of the number of
times you've called "fork" and "exec", to determine whether the
current call to "exec" should affect the current script or a
subprocess (see "exec LIST" in perlvms). Somehow, this count has
become scrambled, so Perl is making a guess and treating this
"exec" as a request to terminate the Perl script and execute the
specified command.
%s (...) interpreted as function
(W syntax) You've run afoul of the rule that says that any list
operator followed by parentheses turns into a function, with all
the list operators arguments found inside the parentheses. See
"Terms and List Operators (Leftward)" in perlop.
Invalid conversion in %s: "%s"
(W printf) Perl does not understand the given format conversion.
See "sprintf" in perlfunc.
Invalid escape in the specified encoding in regex; marked by <-- HERE
in m/%s/
(W regexp) The numeric escape (for example "\xHH") of value < 256
didn't correspond to a single character through the conversion from
the encoding specified by the encoding pragma. The escape was
replaced with REPLACEMENT CHARACTER (U+FFFD) instead. The <-- HERE
shows in the regular expression about where the escape was
discovered.
Invalid mro name: '%s'
(F) You tried to "mro::set_mro("classname", "foo")" or "use mro
'foo'", where "foo" is not a valid method resolution order (MRO).
(Currently, the only valid ones are "dfs" and "c3"). See mro.
Invalid [] range "%s" in regex; marked by <-- HERE in m/%s/
(F) The range specified in a character class had a minimum
character greater than the maximum character. One possibility is
that you forgot the "{}" from your ending "\x{}" - "\x" without the
curly braces can go only up to "ff". The <-- HERE shows in the
regular expression about where the problem was discovered. See
perlre.
Invalid range "%s" in transliteration operator
(F) The range specified in the tr/// or y/// operator had a minimum
character greater than the maximum character. See perlop.
Invalid separator character %c in PerlIO layer specification %s
(W layer) When pushing layers onto the Perl I/O system, something
other than a colon or whitespace was seen between the elements of a
layer list. If the previous attribute had a parenthesised
parameter list, perhaps that list was terminated too soon.
Invalid type '%s' in %s
(F) The given character is not a valid pack or unpack type. See
"pack" in perlfunc.
ioctl is not implemented
(F) Your machine apparently doesn't implement ioctl(), which is
pretty strange for a machine that supports C.
ioctl() on unopened %s
(W unopened) You tried ioctl() on a filehandle that was never
opened. Check your control flow and number of arguments.
IO layers (like "%s") unavailable
(F) Your Perl has not been configured to have PerlIO, and therefore
you cannot use IO layers. To have PerlIO Perl must be configured
with 'useperlio'.
IO::Socket::atmark not implemented on this architecture
(F) Your machine doesn't implement the sockatmark() functionality,
neither as a system call or an ioctl call (SIOCATMARK).
$* is no longer supported
(S deprecated, syntax) The special variable $*, deprecated in older
perls, has been removed as of 5.9.0 and is no longer supported. In
previous versions of perl the use of $* enabled or disabled multi-
line matching within a string.
Instead of using $* you should use the "/m" (and maybe "/s") regexp
modifiers. (In older versions: when $* was set to a true value then
all regular expressions behaved as if they were written using
"/m".)
$# is no longer supported
(S deprecated, syntax) The special variable $#, deprecated in older
perls, has been removed as of 5.9.3 and is no longer supported. You
should use the printf/sprintf functions instead.
`%s' is not a code reference
(W overload) The second (fourth, sixth, ...) argument of
overload::constant needs to be a code reference. Either an
anonymous subroutine, or a reference to a subroutine.
`%s' is not an overloadable type
(W overload) You tried to overload a constant type the overload
package is unaware of.
junk on end of regexp
(P) The regular expression parser is confused.
Label not found for "last %s"
(F) You named a loop to break out of, but you're not currently in a
loop of that name, not even if you count where you were called
from. See "last" in perlfunc.
Label not found for "next %s"
(F) You named a loop to continue, but you're not currently in a
loop of that name, not even if you count where you were called
from. See "next" in perlfunc.
length/code after end of string in unpack
(F) While unpacking, the string buffer was already used up when an
unpack length/code combination tried to obtain more data. This
results in an undefined value for the length. See "pack" in
perlfunc.
listen() on closed socket %s
(W closed) You tried to do a listen on a closed socket. Did you
forget to check the return value of your socket() call? See
"listen" in perlfunc.
Lookbehind longer than %d not implemented in regex m/%s/
(F) There is currently a limit on the length of string which
lookbehind can handle. This restriction may be eased in a future
release.
lstat() on filehandle %s
(W io) You tried to do an lstat on a filehandle. What did you mean
by that? lstat() makes sense only on filenames. (Perl did a
fstat() instead on the filehandle.)
Lvalue subs returning %s not implemented yet
(F) Due to limitations in the current implementation, array and
hash values cannot be returned in subroutines used in lvalue
context. See "Lvalue subroutines" in perlsub.
Malformed integer in [] in pack
(F) Between the brackets enclosing a numeric repeat count only
digits are permitted. See "pack" in perlfunc.
Malformed integer in [] in unpack
(F) Between the brackets enclosing a numeric repeat count only
digits are permitted. See "pack" in perlfunc.
Malformed PERLLIB_PREFIX
(F) An error peculiar to OS/2. PERLLIB_PREFIX should be of the form
prefix1;prefix2 or prefix1 prefix2 with nonempty prefix1 and
prefix2. If prefix1 is indeed a prefix of a builtin library search
path, prefix2 is substituted. The error may appear if components
are not found, or are too long. See "PERLLIB_PREFIX" in perlos2.
Malformed prototype for %s: %s
(F) You tried to use a function with a malformed prototype. The
syntax of function prototypes is given a brief compile-time check
for obvious errors like invalid characters. A more rigorous check
is run when the function is called.
Malformed UTF-8 character (%s)
(S utf8) Perl detected a string that didn't comply with UTF-8
encoding rules, even though it had the UTF-8 flag on.
See also "Handling Malformed Data" in Encode.
Malformed UTF-16 surrogate
(F) Perl thought it was reading UTF-16 encoded character data but
while doing it Perl met a malformed Unicode surrogate.
Malformed UTF-8 string in pack
(F) You tried to pack something that didn't comply with UTF-8
encoding rules and perl was unable to guess how to make more
progress.
Malformed UTF-8 string in unpack
(F) You tried to unpack something that didn't comply with UTF-8
encoding rules and perl was unable to guess how to make more
progress.
Malformed UTF-8 string in '%c' format in unpack
(F) You tried to unpack something that didn't comply with UTF-8
encoding rules and perl was unable to guess how to make more
progress.
Maximal count of pending signals (%s) exceeded
(F) Perl aborted due to too high a number of signals pending.
This usually indicates that your operating system tried to deliver
signals too fast (with a very high priority), starving the perl
process from resources it would need to reach a point where it can
process signals safely. (See "Deferred Signals (Safe Signals)" in
perlipc.)
%s matches null string many times in regex; marked by <-- HERE in m/%s/
(W regexp) The pattern you've specified would be an infinite loop
if the regular expression engine didn't specifically check for
that. The <-- HERE shows in the regular expression about where the
problem was discovered. See perlre.
"%s" may clash with future reserved word
(W) This warning may be due to running a perl5 script through a
perl4 interpreter, especially if the word that is being warned
about is "use" or "my".
'%' may not be used in pack
(F) You can't pack a string by supplying a checksum, because the
checksumming process loses information, and you can't go the other
way. See "unpack" in perlfunc.
Method for operation %s not found in package %s during blessing
(F) An attempt was made to specify an entry in an overloading table
that doesn't resolve to a valid subroutine. See overload.
Method %s not permitted
See Server error.
Might be a runaway multi-line %s string starting on line %d
(W) An advisory indicating that the previous error may have been
caused by a missing delimiter on a string or pattern, because it
eventually ended earlier on the current line.
Missing %sbrace%s on \N{}
(F) Wrong syntax of character name literal "\N{charname}" within
double-quotish context.
Missing comma after first argument to %s function
(F) While certain functions allow you to specify a filehandle or an
"indirect object" before the argument list, this ain't one of them.
Missing command in piped open
(W pipe) You used the "open(FH, "| command")" or "open(FH, "command
|")" construction, but the command was missing or blank.
Missing control char name in \c
(F) A double-quoted string ended with "\c", without the required
control character name.
Missing name in "my sub"
(F) The reserved syntax for lexically scoped subroutines requires
that they have a name with which they can be found.
Missing $ on loop variable
(F) Apparently you've been programming in csh too much. Variables
are always mentioned with the $ in Perl, unlike in the shells,
where it can vary from one line to the next.
(Missing operator before %s?)
(S syntax) This is an educated guess made in conjunction with the
message "%s found where operator expected". Often the missing
operator is a comma.
Missing right brace on %s
(F) Missing right brace in "\x{...}", "\p{...}" or "\P{...}".
Missing right curly or square bracket
(F) The lexer counted more opening curly or square brackets than
closing ones. As a general rule, you'll find it's missing near the
place you were last editing.
(Missing semicolon on previous line?)
(S syntax) This is an educated guess made in conjunction with the
message "%s found where operator expected". Don't automatically
put a semicolon on the previous line just because you saw this
message.
Modification of a read-only value attempted
(F) You tried, directly or indirectly, to change the value of a
constant. You didn't, of course, try "2 = 1", because the compiler
catches that. But an easy way to do the same thing is:
sub mod { $_[0] = 1 }
mod(2);
Another way is to assign to a substr() that's off the end of the
string.
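A sketch contrasting the failing call above with two working
alternatives (sub names are ours):

```perl
use strict;
use warnings;

sub mod { $_[0] = 1 }     # @_ aliases the caller's arguments

# mod(2);   # would die: Modification of a read-only value attempted

my $x = 2;
mod($x);                  # fine: $x is a writable variable
print $x, "\n";           # 1

# Or copy the argument inside the sub, so constants are acceptable:
sub mod_copy { my ($v) = @_; $v = 1; return $v }
print mod_copy(2), "\n";  # 1
```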
Modification of non-creatable array value attempted, %s
(F) You tried to make an array value spring into existence, and the
subscript was probably negative, even counting from end of the
array backwards.
Modification of non-creatable hash value attempted, %s
(P) You tried to make a hash value spring into existence, and it
couldn't be created for some peculiar reason.
Module name must be constant
(F) Only a bare module name is allowed as the first argument to a
"use".
Module name required with -%c option
(F) The "-M" or "-m" options say that Perl should load some module,
but you omitted the name of the module. Consult perlrun for full
details about "-M" and "-m".
More than one argument to open
(F) The "open" function has been asked to open multiple files. This
can happen if you are trying to open a pipe to a command that takes
a list of arguments, but have forgotten to specify a piped open
mode. See "open" in perlfunc for details.
msg%s not implemented
(F) You don't have System V message IPC on your system.
Multidimensional syntax %s not supported
(W syntax) Multidimensional arrays aren't written like $foo[1,2,3].
They're written like $foo[1][2][3], as in C.
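As an illustrative sketch (the array name is made up), the chained
subscript form is the one Perl supports:

```perl
use strict;
use warnings;

# C-style chained subscripts index nested arrays; the intermediate
# array references spring into existence (autovivify) as needed.
my @foo;
$foo[1][2][3] = "deep";
print $foo[1][2][3], "\n";    # prints "deep"

# By contrast, $foo[1,2,3] is a single subscript containing the
# comma operator, which is not a multidimensional access and
# triggers the "Multidimensional syntax ... not supported" warning.
```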
'/' must follow a numeric type in unpack
(F) You had an unpack template that contained a '/', but this did
not follow some unpack specification producing a numeric value.
See "pack" in perlfunc.
"my sub" not yet implemented
(F) Lexically scoped subroutines are not yet implemented. Don't
try that yet.
"%s" variable %s can't be in a package
(F) Lexically scoped variables aren't in a package, so it doesn't
make sense to try to declare one with a package qualifier on the
front. Use local() if you want to localize a package variable.
Name "%s::%s" used only once: possible typo
(W once) Typographical errors often show up as unique variable
names. If you had a good reason for having a unique name, then
just mention it again somehow to suppress the message. The "our"
declaration is also provided for this purpose.
Negative '/' count in unpack
(F) The length count obtained from a length/code unpack operation
was negative. See "pack" in perlfunc.
Nested quantifiers in regex; marked by <-- HERE in m/%s/
(F) You can't quantify a quantifier without intervening
parentheses. So things like ** or +* or ?* are illegal. The <--
HERE shows in the regular expression about where the problem was
discovered.
Note that the minimal matching quantifiers, "*?", "+?", and "??"
appear to be nested quantifiers, but aren't. See perlre.
%s never introduced
(S internal) The symbol in question was declared but somehow went
out of scope before it could possibly have been used.
next::method/next::can/maybe::next::method cannot find enclosing method
(F) "next::method" needs to be called within the context of a real
method in a real package, and it could not find such a context.
See mro.
No %s allowed while running setuid
(F) Certain operations are deemed to be too insecure for a setuid
or setgid script to even be allowed to attempt. Generally speaking
there will be another way to do what you want that is, if not
secure, at least securable. See perlsec.
No comma allowed after %s
(F) A list operator that has a filehandle or "indirect object" is
not allowed to have a comma between that and the following
arguments. Otherwise it'd be just another one of the arguments.
One possible cause for this is that you expected to have imported
a constant to your name space with "use" or "import" while no such
importing took place; please see "use" in perlfunc and "import" in
perlfunc.
No command into which to pipe on command line
(F) An error peculiar to VMS. Perl handles its own command line
redirection, and found a '|' at the end of the command line, so it
doesn't know where you want to pipe the output from this command.
No DB::DB routine defined
(F) The currently executing code was compiled with the -d switch,
but for some reason the current debugger (e.g. perl5db.pl or a
"Devel::" module) didn't define a routine to be called at the
beginning of each statement.
No dbm on this machine
(P) This is counted as an internal error, because every machine
should supply dbm nowadays, because Perl comes with SDBM. See
SDBM_File.
No error file after 2> or 2>> on command line
(F) An error peculiar to VMS. Perl handles its own command line
redirection, and found a '2>' or a '2>>' on the command line, but
can't find the name of the file to which to write data destined for
stderr.
No group ending character '%c' found in template
(F) A pack or unpack template has an opening '(' or '[' without its
matching counterpart. See "pack" in perlfunc.
No input file after < on command line
(F) An error peculiar to VMS. Perl handles its own command line
redirection, and found a '<' on the command line, but can't find
the name of the file from which to read data for stdin.
No #! line
(F) The setuid emulator requires that scripts have a well-formed #!
line even on machines that don't support the #! construct.
No next::method '%s' found for %s
(F) "next::method" found no further instances of this method name
in the remaining packages of the MRO of this class. If you don't
want it throwing an exception, use "maybe::next::method" or
"next::can". See mro.
"no" not allowed in expression
(F) The "no" keyword is recognized and executed at compile time,
and returns no useful value. See perlmod.
No output file after > on command line
(F) An error peculiar to VMS. Perl handles its own command line
redirection, and found a lone '>' at the end of the command line,
so it doesn't know where you wanted to redirect stdout.
No output file after > or >> on command line
(F) An error peculiar to VMS. Perl handles its own command line
redirection, and found a '>' or a '>>' on the command line, but
can't find the name of the file to which to write data destined for
stdout.
No package name allowed for variable %s in "our"
(F) Fully qualified variable names are not allowed in "our"
declarations, because that doesn't make much sense under existing
semantics. Such syntax is reserved for future extensions.
No Perl script found in input
(F) You called "perl -x", but no line was found in the file
beginning with #! and containing the word "perl".
No setregid available
(F) Configure didn't find anything resembling the setregid() call
for your system.
No setreuid available
(F) Configure didn't find anything resembling the setreuid() call
for your system.
No such class %s
(F) You provided a class qualifier in a "my", "our" or "state"
declaration, but this class doesn't exist at this point in your
program.
No such hook: %s
(F) You specified a signal hook that was not recognized by Perl.
Currently, Perl accepts "__DIE__" and "__WARN__" as valid signal
hooks.
No such pipe open
(P) An error peculiar to VMS. The internal routine my_pclose()
tried to close a pipe which hadn't been opened. This should have
been caught earlier as an attempt to close an unopened filehandle.
No such signal: SIG%s
(W signal) You specified a signal name as a subscript to %SIG that
was not recognized. Say "kill -l" in your shell to see the valid
signal names on your system.
Not a CODE reference
(F) Perl was trying to evaluate a reference to a code value (that
is, a subroutine), but found a reference to something else instead.
You can use the ref() function to find out what kind of ref it
really was. See also perlref.
Not a format reference
(F) I'm not sure how you managed to generate a reference to an
anonymous format, but this indicates you did, and that it didn't
exist.
Not a GLOB reference
(F) Perl was trying to evaluate a reference to a "typeglob" (that
is, a symbol table entry that looks like *foo), but found a
reference to something else instead. You can use the ref()
function to find out what kind of ref it really was. See perlref.
Not a HASH reference
(F) Perl was trying to evaluate a reference to a hash value, but
found a reference to something else instead. You can use the ref()
function to find out what kind of ref it really was. See perlref.
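A minimal sketch of using ref() to diagnose this family of errors
(the variable names are made up):

```perl
use strict;
use warnings;

# ref() reports what kind of reference you actually have, which is
# handy when chasing "Not a HASH reference" and its relatives.
my $href = { a => 1 };
my $aref = [ 1, 2, 3 ];

print ref($href), "\n";    # prints "HASH"
print ref($aref), "\n";    # prints "ARRAY"

# Dereferencing the wrong kind, e.g. %$aref, would die at run time
# with "Not a HASH reference".
```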
Not an ARRAY reference
(F) Perl was trying to evaluate a reference to an array value, but
found a reference to something else instead. You can use the ref()
function to find out what kind of ref it really was. See perlref.
Not a perl script
(F) The setuid emulator requires that scripts have a well-formed #!
line even on machines that don't support the #! construct. The
line must mention perl.
Not a SCALAR reference
(F) Perl was trying to evaluate a reference to a scalar value, but
found a reference to something else instead. You can use the ref()
function to find out what kind of ref it really was. See perlref.
Not enough arguments for %s
(F) The function requires more arguments than you specified.
Not enough format arguments
(W syntax) A format specified more picture fields than the next
line supplied. See perlform.
%s: not found
(A) You've accidentally run your script through the Bourne shell
instead of Perl. Check the #! line, or manually feed your script
into Perl yourself.
Non-string passed as bitmask
(W misc) A number has been passed as a bitmask argument to
select(). Use the vec() function to construct the file descriptor
bitmasks for select. See "select" in perlfunc.
Null filename used
(F) You can't require the null filename, especially because on many
machines that means the current directory! See "require" in
perlfunc.
NULL OP IN RUN
(P debugging) Some internal routine called run() with a null opcode
pointer.
Null picture in formline
(F) The first argument to formline must be a valid format picture
specification. It was found to be empty, which probably means you
supplied it an uninitialized value. See perlform.
Null realloc
(P) An attempt was made to realloc NULL.
NULL regexp argument
(P) The internal pattern matching routines blew it big time.
NULL regexp parameter
(P) The internal pattern matching routines are out of their gourd.
Number too long
(F) Perl limits the representation of decimal numbers in programs
to about 250 characters. You've exceeded that length. Future
versions of Perl are likely to eliminate this arbitrary limitation.
In the meantime, try using scientific notation (e.g. "1e6" instead
of "1_000_000").
Odd number of arguments for overload::constant
(W overload) The call to overload::constant contained an odd number
of arguments. The arguments should come in pairs.
Odd number of elements in anonymous hash
(W misc) You specified an odd number of elements to initialize a
hash, which is odd, because hashes come in key/value pairs.
Odd number of elements in hash assignment
(W misc) You specified an odd number of elements to initialize a
hash, which is odd, because hashes come in key/value pairs.
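A small sketch (names made up) of the even/odd distinction:

```perl
use strict;
use warnings;

# Hashes are built from key/value pairs, so the initializer list
# must have an even number of elements:
my %ok = ( one => 1, two => 2 );    # 4 elements: fine

# my %odd = ( one => 1, "two" );    # 3 elements: would warn
#   "Odd number of elements in hash assignment"
# and the dangling key "two" would get the value undef.

print scalar keys %ok, "\n";        # prints 2
```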
Offset outside string
(F, W layer) You tried to do a read/write/send/recv/seek operation
with an offset pointing outside the buffer. This is difficult to
imagine. The sole exceptions to this are that zero padding will
take place when going past the end of the string when either
"sysread()"ing a file, or when seeking past the end of a scalar
opened for I/O (in anticipation of future reads and to imitate the
behaviour with real files).
%s() on unopened %s
(W unopened) An I/O operation was attempted on a filehandle that
was never initialized. You need to do an open(), a sysopen(), or a
socket() call, or call a constructor from the FileHandle package.
-%s on unopened filehandle %s
(W unopened) You tried to invoke a file test operator on a
filehandle that isn't open. Check your control flow. See also
"-X" in perlfunc.
oops: oopsAV
(S internal) An internal warning that the grammar is screwed up.
oops: oopsHV
(S internal) An internal warning that the grammar is screwed up.
Opening dirhandle %s also as a file
(W io deprecated) You used open() to associate a filehandle to a
symbol (glob or scalar) that already holds a dirhandle. Although
legal, this idiom might render your code confusing and is
deprecated.
Opening filehandle %s also as a directory
(W io deprecated) You used opendir() to associate a dirhandle to a
symbol (glob or scalar) that already holds a filehandle. Although
legal, this idiom might render your code confusing and is
deprecated.
Operation "%s": no method found, %s
(F) An attempt was made to perform an overloaded operation for
which no handler was defined. While some handlers can be
autogenerated in terms of other handlers, there is no default
handler for any operation, unless the "fallback" overloading key
is specified to be true. See overload.
Out of memory!
(X) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. Perl has no option but to exit immediately.
Out of memory during %s extend
(X) An attempt was made to extend an array, a list, or a string
beyond the largest possible memory allocation.
Out of memory during "large" request for %s
(F) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. However, the request was judged large enough (compile-time
default is 64K), so a possibility to shut down by trapping this
error is granted.
Out of memory during request for %s
(X|F) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. The request was judged to be small, so the possibility
to trap it depends on the way perl was compiled. By default it is
not trappable. However, if compiled for this, Perl may use the
contents of $^M as an emergency pool after die()ing with this
message. In this case the error is trappable once, and the error
message will include the line and file where the failed request
happened.
Out of memory during ridiculously large request
(F) You can't allocate more than 2^31+"small amount" bytes. This
error is most likely to be caused by a typo in the Perl program.
e.g., $arr[time] instead of $arr[$time].
Out of memory for yacc stack
(F) The yacc parser wanted to grow its stack so it could continue
parsing, but realloc() wouldn't give it more memory, virtual or
otherwise.
'.' outside of string in pack
(F) The argument to a '.' in your template tried to move the
working position to before the start of the packed string being
built.
'@' outside of string in unpack
(F) You had a template that specified an absolute position outside
the string being unpacked. See "pack" in perlfunc.
pack/unpack repeat count overflow
(F) You can't specify a repeat count so large that it overflows
your signed integers. See "pack" in perlfunc.
page overflow
(W io) A single call to write() produced more lines than can fit on
a page. See perlform.
panic: %s
(P) An internal error.
panic: attempt to call %s in %s
(P) One of the file test operators entered a code branch that calls
an ACL related-function, but that function is not available on this
platform. Earlier checks mean that it should not be possible to
enter this branch on this platform.
panic: ck_grep
(P) Failed an internal consistency check trying to compile a grep.
panic: ck_split
(P) Failed an internal consistency check trying to compile a split.
panic: corrupt saved stack index
(P) The savestack was requested to restore more localized values
than there are in the savestack.
panic: del_backref
(P) Failed an internal consistency check while trying to reset a
weak reference.
panic: Devel::DProf inconsistent subroutine return
(P) Devel::DProf called a subroutine that exited using goto(LABEL),
last(LABEL) or next(LABEL). Leaving that way a subroutine called
from an XSUB will lead very probably to a crash of the interpreter.
This is a bug that will hopefully one day get fixed.
panic: die %s
(P) We popped the context stack to an eval context, and then
discovered it wasn't an eval context.
panic: do_subst
(P) The internal pp_subst() routine was called with invalid
operational data.
panic: do_trans_%s
(P) The internal do_trans routines were called with invalid
operational data.
panic: fold_constants JMPENV_PUSH returned %d
(P) While attempting folding constants an exception other than an
"eval" failure was caught.
panic: hfreeentries failed to free hash
(P) The internal routine used to free a hash's contents tried
repeatedly, but the hash was still not empty afterwards. Most
likely the hash contains an object with a reference back to the
hash and a destructor that adds a new object to the hash.
panic: INTERPCASEMOD
(P) The lexer got into a bad state at a case modifier.
panic: INTERPCONCAT
(P) The lexer got into a bad state parsing a string with brackets.
panic: kid popen errno read
(F) forked child returned an incomprehensible message about its
errno.
panic: last
(P) We popped the context stack to a block context, and then
discovered it wasn't a block context.
panic: leave_scope clearsv
(P) A writable lexical variable became read-only somehow within the
scope.
panic: leave_scope inconsistency
(P) The savestack probably got out of sync. At least, there was an
invalid enum on the top of it.
panic: magic_killbackrefs
(P) Failed an internal consistency check while trying to reset all
weak references to an object.
panic: malloc
(P) Something requested a negative number of bytes of malloc.
panic: memory wrap
(P) Something tried to allocate more memory than possible.
panic: pad_alloc
(P) The compiler got confused about which scratch pad it was
allocating and freeing temporaries and lexicals from.
panic: pad_free curpad
(P) The compiler got confused about which scratch pad it was
allocating and freeing temporaries and lexicals from.
panic: pad_free po
(P) An invalid scratch pad offset was detected internally.
panic: pad_reset curpad
(P) The compiler got confused about which scratch pad it was
allocating and freeing temporaries and lexicals from.
panic: pad_sv po
(P) An invalid scratch pad offset was detected internally.
panic: pad_swipe curpad
(P) The compiler got confused about which scratch pad it was
allocating and freeing temporaries and lexicals from.
panic: pp_split
(P) Something terrible went wrong in setting up for the split.
panic: realloc
(P) Something requested a negative number of bytes of realloc.
panic: restartop
(P) Some internal routine requested a goto (or something like it),
and didn't supply the destination.
panic: return
(P) We popped the context stack to a subroutine or eval context,
and then discovered it wasn't a subroutine or eval context.
panic: scan_num
(P) scan_num() got called on something that wasn't a number.
panic: sv_chop %s
(P) The sv_chop() routine was passed a position that is not within
the scalar's string buffer.
panic: sv_insert
(P) The sv_insert() routine was told to remove more string than
there was string.
panic: top_env
(P) The compiler attempted to do a goto, or something weird like
that.
panic: unimplemented op %s (#%d) called
(P) The compiler is screwed up and attempted to use an op that
isn't permitted at run time.
panic: utf16_to_utf8: odd bytelen
(P) Something tried to call utf16_to_utf8 with an odd (as opposed
to even) byte length.
panic: yylex
(P) The lexer got into a bad state while processing a case
modifier.
Parentheses missing around "%s" list
(W parenthesis) You said something like
my $foo, $bar = @_;
when you meant
my ($foo, $bar) = @_;
Remember that "my", "our", "local" and "state" bind tighter than
comma.
(perhaps you forgot to load "%s"?)
(F) This is an educated guess made in conjunction with the message
"Can't locate object method \"%s\" via package \"%s\"". It often
means that a method requires a package that has not been loaded.
Perl_my_%s() not available
(F) Your platform has very uncommon byte-order and integer size, so
it was not possible to set up some or all fixed-width byte-order
conversion functions. This is only a problem when you're using the
'<' or '>' modifiers in (un)pack templates. See "pack" in
perlfunc.
Perl %s required--this is only version %s, stopped
(F) The module in question uses features of a version of Perl more
recent than the currently running version. How long has it been
since you upgraded, anyway? See "require" in perlfunc.
PERL_SH_DIR too long
(F) An error peculiar to OS/2. PERL_SH_DIR is the directory to find
the "sh"-shell in. See "PERL_SH_DIR" in perlos2.
PERL_SIGNALS illegal: "%s"
(F) See "PERL_SIGNALS" in perlrun for legal values.
Permission denied
(F) The setuid emulator in suidperl decided you were up to no good.
pid %x not a child
(W exec) A warning peculiar to VMS. Waitpid() was asked to wait
for a process which isn't a subprocess of the current process.
While this is fine from VMS' perspective, it's probably not what
was discovered. Note that the POSIX character classes do not have
the "is" prefix the corresponding C interfaces have: in other
words, it's "[[:print:]]", not "isprint". See perlre.
POSIX getpgrp can't take an argument
(F) Your system has POSIX getpgrp(), which takes no argument,
unlike the BSD version, which takes a pid.
POSIX syntax [%s] belongs inside character classes in regex; marked by
<-- HERE in m/%s/
(W regexp) The character class constructs [: :], [= =], and [. .]
go inside character classes, the [] are part of the construct, for
example: /[012[:alpha:]345]/. Note that [= =] and [. .] are not
currently implemented; they are simply placeholders for future
extensions and will cause fatal errors. The <-- HERE shows in the
regular expression about where the problem was discovered. See
perlre.
POSIX syntax [. .] is reserved for future extensions in regex; marked
by <-- HERE in m/%s/
(F regexp) Within regular expression character classes ([]) the
syntax beginning with "[." and ending with ".]" is reserved for
future extensions. If you need to represent those character
sequences inside a regular expression character class, just quote
the square brackets with the backslash: "\[." and ".\]". The <--
HERE shows in the regular expression about where the problem was
discovered. See perlre.
POSIX syntax [= =] is reserved for future extensions in regex; marked
by <-- HERE in m/%s/
(F) Within regular expression character classes ([]) the syntax
beginning with "[=" and ending with "=]" is reserved for future
extensions. If you need to represent those character sequences
inside a regular expression character class, just quote the square
brackets with the backslash: "\[=" and "=\]". The <-- HERE shows
in the regular expression about where the problem was discovered.
See perlre.
Possible attempt to put comments in qw() list
(W qw) qw() lists contain items separated by whitespace; as with
literal strings, comment characters are not ignored, but are
instead treated as literal data.
Possible attempt to separate words with commas
(W qw) qw() lists contain items separated by whitespace;
therefore commas aren't needed to separate the items.
Possible memory corruption: %s overflowed 3rd argument
(F) An ioctl() or fcntl() returned more than Perl was bargaining
for. Perl guesses a reasonable buffer size, but puts a sentinel
byte at the end of the buffer just in case. This sentinel byte
got clobbered, and Perl assumes that memory is now corrupted. See
"ioctl" in perlfunc.
Possible precedence problem on bitwise %c operator
(W precedence) Your program uses a bitwise logical operator in
conjunction with a numeric comparison operator, like this :
if ($x & $y == 0) { ... }
This expression is actually equivalent to "$x & ($y == 0)", due to
the higher precedence of "==". This is probably not what you want.
(If you really meant to write this, disable the warning, or,
better, put the parentheses explicitly and write "$x & ($y == 0)").
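A minimal sketch of the two parses (variable names made up; the
precedence warning is deliberately disabled so the example line
compiles quietly):

```perl
use strict;
use warnings;
no warnings 'precedence';

my ($x, $y) = (6, 0);

# Without parentheses, "==" binds tighter than "&", so this is
# parsed as $x & ($y == 0), i.e. 6 & 1, which is 0 (false):
my $surprising = $x & $y == 0;

# With explicit parentheses you get the intended comparison:
# (6 & 0) == 0 is true.
my $intended = ($x & $y) == 0;

print "$surprising $intended\n";    # prints "0 1"
```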
Possible unintended interpolation of %s in string
(W ambiguous) You said something like `@foo' in a double-quoted
string but there was no array @foo in scope at the time. If you
wanted a literal @foo, then write it as \@foo; otherwise find out
what happened to the array you apparently lost track of.
pragma "attrs" is deprecated, use "sub NAME : ATTRS" instead
(D deprecated) You have written something like this:
sub doit
{
use attrs qw(locked);
}
You should use the new declaration syntax instead.
sub doit : locked
{
...
}
Precedence problem: open %s should be open(%s)
(S precedence) The old irregular construct
open FOO || die;
is now misinterpreted as
open(FOO || die);
because of the strict regularization of Perl 5's grammar into unary
and list operators. (The old open was a little of both.) You must
put parentheses around the filehandle, or use the new "or" operator
instead of "||".
Premature end of script headers
See Server error.
printf() on closed filehandle %s
(W closed) The filehandle you're writing to got itself closed
sometime before now. Check your control flow.
print() on closed filehandle %s
(W closed) The filehandle you're printing on got itself closed
sometime before now. Check your control flow.
Process terminated by SIG%s
(W) This is a standard message issued by OS/2 applications, while
*nix applications die in silence. It is considered a feature of
the OS/2 port. One can easily disable this by appropriate
sighandlers; see "Signals" in perlipc. See also "Process
terminated by SIGTERM/SIGINT" in perlos2.
Prototype mismatch: %s vs %s
(S prototype) The subroutine being declared or defined had
previously been declared or defined with a different function
prototype.
Prototype not terminated
(F) You've omitted the closing parenthesis in a function prototype
definition.
Quantifier follows nothing in regex; marked by <-- HERE in m/%s/
(F) You started a regular expression with a quantifier. Backslash
it if you meant it literally. The <-- HERE shows in the regular
expression about where the problem was discovered. See perlre.
Quantifier in {,} bigger than %d in regex; marked by <-- HERE in m/%s/
(F) There is currently a limit to the size of the min and max
values of the {min,max} construct. The <-- HERE shows in the
regular expression about where the problem was discovered. See
perlre.
Quantifier unexpected on zero-length expression; marked by <-- HERE in
m/%s/
(W regexp) You applied a regular expression quantifier in a place
where it makes no sense, such as on a zero-width assertion. Try
putting the quantifier inside the assertion instead. For example,
the way to match "abc" provided that it is followed by three
repetitions of "xyz" is "/abc(?=(?:xyz){3})/", not
"/abc(?=xyz){3}/".
The <-- HERE shows in the regular expression about where the
problem was discovered.
readline() on closed filehandle %s
(W closed) The filehandle you're reading from got itself closed
sometime before now. Check your control flow.
read() on closed filehandle %s
(W closed) You tried to read from a closed filehandle.
read() on unopened filehandle %s
(W unopened) You tried to read from a filehandle that was never
opened.
Reallocation too large: %lx
(F) You can't allocate more than 64K on an MS-DOS machine.
realloc() of freed memory ignored
(S malloc) An internal routine called realloc() on something that
had already been freed.
Recompile perl with -DDEBUGGING to use -D switch
(F debugging) You can't use the -D option unless the code to
produce the desired output is compiled into Perl, which entails
some overhead, which is why it's currently left out of your copy.
Recursive inheritance detected in package '%s'
(F) While calculating the method resolution order (MRO) of a
package, Perl believes it found an infinite loop in the @ISA
hierarchy. This is a crude check that bails out after 100 levels
of @ISA depth.
Recursive inheritance detected while looking for method %s
(F) More than 100 levels of inheritance were encountered while
invoking a method. Probably indicates an unintended loop in your
inheritance hierarchy.
Reference found where even-sized list expected
(W misc) You gave a single reference where Perl was expecting a
list with an even number of elements (for assignment to a hash).
This usually means that you used the anon hash constructor when
you meant to use parens. In any case, a hash requires key/value
pairs.
%hash = { one => 1, two => 2, };    # WRONG
%hash = [ qw/ an anon array / ];    # WRONG
%hash = ( one => 1, two => 2, );    # right
%hash = qw( one 1 two 2 );          # also fine
Reference is already weak
(W misc) You have attempted to weaken a reference that is already
weak. Doing so has no effect.
Reference miscount in sv_replace()
(W internal) The internal sv_replace() function was handed a new SV
with a reference count of other than 1.
Reference to invalid group 0
(F) You used "\g0" or similar in a regular expression. You may
refer to capturing parentheses only with strictly positive
integers (normal backreferences) or with strictly negative
integers (relative backreferences); using 0 does not make sense.
The <-- HERE shows in the regular expression about where the
problem was discovered.
Reference to nonexistent or unclosed group in regex; marked by <-- HERE
in m/%s/
(F) You used something like "\g{-7}" in your regular expression,
but there are not at least seven sets of closed capturing
parentheses in the expression before where the "\g{-7}" was
located.
The <-- HERE shows in the regular expression about where the
problem was discovered.
(?(DEFINE)....) does not allow branches in regex; marked by <-- HERE in
m/%s/
(F) You used something like "(?(DEFINE)...|..)" which is illegal.
The most likely cause of this error is that you left out a
parenthesis inside of the "...." part.
The <-- HERE shows in the regular expression about where the
problem was discovered.
regexp memory corruption
(P) The regular expression engine got confused by what the regular
expression compiler gave it.
Regexp out of space
(P) A "can't happen" error, because safemalloc() should have caught
it earlier.
Repeated format line will never terminate (~~ and @# incompatible)
(F) Your format contains the ~~ repeat-until-blank sequence and a
numeric field that will never go blank so that the repetition never
terminates. You might use ^# instead. See perlform.
Reversed %s= operator
(W syntax) You wrote your assignment operator backwards. The =
must always comes last, to avoid ambiguity with subsequent unary
operators.
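A short sketch of the correct ordering (the variable is made up):

```perl
use strict;
use warnings;

my $n = 10;
$n -= 3;         # the compound assignment operator: subtract 3
print "$n\n";    # prints 7

# Writing "$n =- 3;" instead would assign negative 3 to $n and,
# under warnings, produce "Reversed -= operator".
```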
rewinddir() attempted on invalid dirhandle %s
(W io) The dirhandle you tried to do a rewinddir() on is either
closed or not really a dirhandle. Check your control flow.
Scalars leaked: %d
(P) Something went wrong in Perl's internal bookkeeping of
scalars: not all scalar variables were deallocated by the time
Perl exited. What this usually indicates is a memory leak, which
is of course bad, especially if the Perl program is intended to be
long-running.
Scalar value @%s[%s] better written as $%s[%s]
(W syntax) You've used an array slice (indicated by @) to select a
single element of an array. Generally it's better to ask for a
scalar value (indicated by $). The difference is that $foo[&bar]
always behaves like a scalar, both when assigning to it and when
evaluating its argument, while @foo[&bar] behaves like a list when
you assign to it, and provides a list context to its subscript,
which can do weird things if you're expecting only one subscript.
Scalar value @%s{%s} better written as $%s{%s}
(W syntax) You've used a hash slice (indicated by @) to select a
single element of a hash. Generally it's better to ask for a
scalar value (indicated by $). If you were actually hoping to
treat the element as a list, you need to look into how references
work, because Perl will not magically convert between scalars and
lists for you. See perlref.
Script is not setuid/setgid in suidperl
(F) Oddly, the suidperl program was invoked on a script without a
setuid or setgid bit set. This doesn't make much sense.
Search pattern not terminated
(F) The lexer couldn't find the final delimiter of a // or m{}
construct. Remember that bracketing delimiters count nesting
level. Missing the leading "$" from a variable $m may cause this
error.
Search pattern not terminated or ternary operator parsed as search
pattern
(F) The lexer couldn't find the final delimiter of a "?PATTERN?"
construct.
The question mark is also used as part of the ternary operator (as
in "foo ? 0 : 1") leading to some ambiguous constructions being
wrongly parsed. One way to disambiguate the parsing is to put
parentheses around the conditional expression, i.e. "(foo) ? 0 : 1".
select not implemented
(F) This machine doesn't implement the select() system call.
Self-ties of arrays and hashes are not supported
(F) Self-ties of arrays and hashes are not supported in the
current implementation.
Semicolon seems to be missing
(W semicolon) A nearby syntax error was probably caused by a
missing semicolon, or possibly some other missing operator, such as
a comma.
semi-panic: attempt to dup freed string
(S internal) The internal newSVsv() routine was called to duplicate
a scalar that had previously been marked as free.
sem%s not implemented
(F) You don't have System V semaphore IPC on your system.
send() on closed socket %s
(W closed) The socket you're sending to got itself closed sometime
before now. Check your control flow.
Sequence (? incomplete in regex; marked by <-- HERE in m/%s/
(F) A regular expression ended with an incomplete extension (?. The
<-- HERE shows in the regular expression about where the problem
was discovered. See perlre.
Sequence (?%s...) not implemented in regex; marked by <-- HERE in m/%s/
(F) A proposed regular expression extension has the character
reserved but has not yet been written. The <-- HERE shows in the
regular expression about where the problem was discovered. See
perlre.
Sequence (?%s...) not recognized in regex; marked by <-- HERE in m/%s/
(F) You used a regular expression extension that doesn't make
sense. The <-- HERE shows in the regular expression about where
the problem was discovered. See perlre.
Sequence \\%s... not terminated in regex; marked by <-- HERE in m/%s/
(F) The regular expression expects a mandatory argument following
the escape sequence and this has been omitted or incorrectly
written.
Sequence (?#... not terminated in regex; marked by <-- HERE in m/%s/
(F) A regular expression comment must be terminated by a closing
parenthesis. Embedded parentheses aren't allowed. The <-- HERE
shows in the regular expression about where the problem was
discovered. See perlre.
Sequence (?{...}) not terminated or not {}-balanced in regex; marked by
<-- HERE in m/%s/
(F) If the contents of a (?{...}) clause contains braces, they must
balance for Perl to properly detect the end of the clause. The <--
HERE shows in the regular expression about where the problem was
discovered. See perlre.
setegid() not implemented
(F) You tried to assign to $), and your operating system doesn't
support the setegid() system call (or equivalent), or at least
Configure didn't think so.
seteuid() not implemented
(F) You tried to assign to $>, and your operating system doesn't
support the seteuid() system call (or equivalent), or at least
Configure didn't think so.
setpgrp can't take arguments
(F) Your system has the setpgrp() from BSD 4.2, which takes no
arguments, unlike POSIX setpgid(), which takes a process ID and
process group ID.
setrgid() not implemented
(F) You tried to assign to $(, and your operating system doesn't
support the setrgid() system call (or equivalent), or at least
Configure didn't think so.
setruid() not implemented
(F) You tried to assign to $<, and your operating system doesn't
support the setruid() system call (or equivalent), or at least
Configure didn't think so.
setsockopt() on closed socket %s
(W closed) You tried to set a socket option on a closed socket.
Did you forget to check the return value of your socket() call?
See "setsockopt" in perlfunc.
Setuid/gid script is writable by world
(F) The setuid emulator won't run a script that is writable by the
world, because the world might have written on it already.
Setuid script not plain file
(F) The setuid emulator won't run a script that isn't read from a
file, but from a socket, a pipe or another device.
/%s/ should probably be written as "%s"
(W syntax) You have used a pattern where Perl expected to find a
string, as in the first argument to "join". Perl will treat the
true or false result of matching the pattern against $_ as the
string, which is probably not what you had in mind.
shutdown() on closed socket %s
(W closed) You tried to do a shutdown on a closed socket. Seems a
bit superfluous.
SIG%s handler "%s" not defined
(W signal) The signal handler named in %SIG doesn't, in fact,
exist. Perhaps you put it into the wrong package?
Smart matching a non-overloaded object breaks encapsulation
(F) You should not use the "~~" operator on an object that does not
overload it: Perl refuses to use the object's underlying structure
for the smart match.
sort is now a reserved word
(F) An ancient error message that almost nobody ever runs into
anymore. But before sort was a keyword, people sometimes used it
as a filehandle.
Sort subroutine didn't return a numeric value
(F) A sort comparison routine must return a number. You probably
blew it by not using "<=>" or "cmp", or by not using them
correctly. See "sort" in perlfunc.
Sort subroutine didn't return single value
(F) A sort comparison subroutine may not return a list value with
more or less than one element. See "sort" in perlfunc.
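A minimal sketch of well-behaved comparison routines (the data is
made up):

```perl
use strict;
use warnings;

# A comparison routine must return a single number: negative,
# zero, or positive. The <=> and cmp operators do exactly that.
my @nums  = sort { $a <=> $b } (10, 2, 33);     # numeric sort
my @words = sort { $a cmp $b } qw(pear apple fig);

print "@nums\n";     # prints "2 10 33"
print "@words\n";    # prints "apple fig pear"
```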
splice() offset past end of array
(W misc) You attempted to specify an offset that was past the end
of the array passed to splice(). Splicing will instead commence at
the end of the array, rather than past it. If this isn't what you
want, try explicitly pre-extending the array by assigning $#array =
$offset. See "splice" in perlfunc.
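A small sketch of the behavior (array contents made up; the
warning itself is suppressed so the example runs quietly):

```perl
use strict;
use warnings;
no warnings 'misc';    # suppress "splice() offset past end of array"

my @a = (1, 2, 3);

# Offset 10 is past the end, so splicing commences at the end of
# the array instead, effectively appending the new element:
splice(@a, 10, 0, "x");
print "@a\n";          # prints "1 2 3 x"
```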
Split loop
(P) The split was looping infinitely. (Obviously, a split
shouldn't iterate more times than there are characters of input,
which is what happened.) See "split" in perlfunc.
Statement unlikely to be reached
(W exec) You did an exec() with some statement after it other than
a die(). This is almost always an error, because exec() never
returns unless there was a failure. You probably wanted to use
system() instead, which does return. To suppress this warning, put
the exec() in a block by itself.
stat() on unopened filehandle %s
(W unopened) You tried to use the stat() function on a filehandle
no warnings 'redefine';
eval "sub name { ... }";
}
Substitution loop
(P) The substitution was looping infinitely. (Obviously, a
substitution shouldn't iterate more times than there are characters
of input, which is what happened.) See the discussion of
substitution in "Regexp Quote-Like Operators" in perlop.
Substitution pattern not terminated
(F) The lexer couldn't find the interior delimiter of an s/// or
s{}{} construct. Remember that bracketing delimiters count nesting
level. Missing the leading "$" from variable $s may cause this
error.
Substitution replacement not terminated
(F) The lexer couldn't find the final delimiter of an s/// or s{}{}
construct. Remember that bracketing delimiters count nesting
level. Missing the leading "$" from variable $s may cause this
error.
substr outside of string
(W substr),(F) You tried to reference a substr() that pointed
outside of a string. That is, the absolute value of the offset was
larger than the length of the string. See "substr" in perlfunc.
This warning is fatal if substr is used in an lvalue context (as
the left hand side of an assignment or as a subroutine argument for
example).
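A hedged sketch (example mine): an offset whose absolute value exceeds the string length triggers this warning:

```perl
use warnings;
my $s = "hello";              # length 5
my $ok  = substr($s, 3);      # "lo" -- within the string
my $bad = substr($s, 10);     # offset past the end: warns "substr outside of string" and returns undef
```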
suidperl is no longer needed since %s
(F) Your Perl was compiled with -DSETUID_SCRIPTS_ARE_SECURE_NOW,
but a version of the setuid emulator somehow got run anyway.
sv_upgrade from type %d down to type %d
(P) Perl tried to force the upgrade of an SV to a type which was
actually inferior to its current type.
Switch (?(condition)... contains too many branches in regex; marked by
<-- HERE in m/%s/
(F) A (?(condition)if-clause|else-clause) construct can have at
most two branches (the if-clause and the else-clause). If you want
one or both to contain alternation, such as using
"this|that|other", enclose it in clustering parentheses:
(?(condition)(?:this|that|other)|else-clause)
The <-- HERE shows in the regular expression about where the
problem was discovered. See perlre.
Switch condition not recognized in regex; marked by <-- HERE in m/%s/
(F) If the argument to the (?(...)if-clause|else-clause) construct
is a number, it can be only a number. The <-- HERE shows in the
regular expression about where the problem was discovered. See
perlre.
syntax error at line %d: `%s' unexpected
(A) You've accidentally run your script through the Bourne shell
instead of Perl. Check the #! line, or manually feed your script
into Perl yourself.
syntax error in file %s at line %d, next 2 tokens "%s"
(F) This error is likely to occur if you run a perl5 script through
a perl4 interpreter, especially if the next 2 tokens are "use
strict" or "my $var" or "our $var".
sysread() on closed filehandle %s
(W closed) You tried to read from a closed filehandle.
sysread() on unopened filehandle %s
(W unopened) You tried to read from a filehandle that was never
opened.
System V %s is not implemented on this machine
(F) You tried to do something with a function beginning with "sem",
"shm", or "msg" but that System V IPC is not implemented in your
machine. In some machines the functionality can exist but be
unconfigured. Consult your system support.
syswrite() on closed filehandle %s
(W closed) The filehandle you're writing to got itself closed
sometime before now. Check your control flow.
"-T" and "-B" not implemented on filehandles
(F) Perl can't peek at the stdio buffer of filehandles when it
doesn't know about your kind of stdio. You'll have to use a
filename instead.
Target of goto is too deeply nested
(F) You tried to use "goto" to reach a label that was too deeply
nested for Perl to reach. Perl is doing you a favor by refusing.
That use of $[ is unsupported
(F) Assignment to $[ is now strictly limited, and interpreted as a
compiler directive. You may say only one of
$[ = 0;
$[ = 1;
...
local $[ = 0;
local $[ = 1;
...
This is to prevent the problem of one module changing the array
base out from under another module inadvertently. See "$[" in
perlvar.
The crypt() function is unimplemented due to excessive paranoia
(F) Configure couldn't find the crypt() function on your machine,
probably because your vendor didn't supply it, probably because
they think the U.S. Government thinks it's a secret, or at least
that they will continue to pretend that it is. And if you quote me
on that, I will deny it.
The %s function is unimplemented
The function indicated isn't implemented on this architecture,
according to the probings of Configure.
The stat preceding %s wasn't an lstat
(F) It makes no sense to test the current stat buffer for symbolic
linkhood if the last stat that wrote to the stat buffer already
went past the symlink to get to the real file. Use an actual
filename instead.
The 'unique' attribute may only be applied to 'our' variables
(F) This attribute was never supported on "my" or "sub"
declarations.
This Perl can't reset CRTL environ elements (%s)
This Perl can't set CRTL environ elements (%s=%s)
(W internal) Warnings peculiar to VMS. You tried to change or
delete an element of the CRTL's internal environ array, but your
copy of Perl wasn't built with a CRTL that contained the setenv()
function. You'll need to rebuild Perl with a CRTL that does, or
redefine PERL_ENV_TABLES (see perlvms) so that the environ array
isn't the target of the change to %ENV which produced the warning.
thread failed to start: %s
(W threads)(S) The entry point function of threads->create() failed
for some reason.
times not implemented
(F) Your version of the C library apparently doesn't do times(). I
suspect you're not running on Unix.
"-T" is on the #! line, it must also be used on the command line
(X) The #! line (or local equivalent) in a Perl script contains the
-T option, but Perl was not invoked with -T in its command line.
This is an error because, by the time Perl discovers a -T in a
script, it's too late to properly taint everything from the
environment. So Perl gives up.
To%s: illegal mapping '%s'
(F) You tried to define a customized mapping for lc(), lcfirst(),
uc(), or ucfirst() (or their string-inlined versions), but you
specified an illegal mapping. See "User-Defined Character
Properties" in perlunicode.
Too deeply nested ()-groups
(F) Your template contains ()-groups with a ridiculously deep
nesting level.
Too few args to syscall
(F) There has to be at least one argument to syscall() to specify
the system call to call, silly dilly.
Too late for "-%s" option
(X) The #! line (or local equivalent) in a Perl script contains the
-M, -m or -C option. This is an error because the options are not
intended for use inside scripts. Use the use pragma instead.
Too late to run %s block
(W void) A CHECK or INIT block is being defined during run time
proper, when the opportunity to run them has already passed.
Perhaps you are loading a file with require or do when you should
be using use instead. Or perhaps you should put the require or do
inside a BEGIN block.
Too many args to syscall
(F) Perl supports a maximum of only 14 args to syscall().
Too many arguments for %s
(F) The function requires fewer arguments than you specified.
Too many )'s
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
Too many ('s
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
Trailing \ in regex m/%s/
(F) The regular expression ends with an unbackslashed backslash.
Backslash it. See perlre.
Transliteration pattern not terminated
(F) The lexer couldn't find the interior delimiter of a tr/// or
tr[][] or y/// or y[][] construct. Missing the leading "$" from
variables $tr or $y may cause this error.
truncate not implemented
(F) Your machine doesn't implement a file truncation mechanism that
Configure knows about.
Type of arg %d to %s must be %s (not %s)
(F) This function requires the argument in that position to be of a
certain type. Arrays must be @NAME or "@{EXPR}". Hashes must be
%NAME or "%{EXPR}". No implicit dereferencing is allowed--use the
{EXPR} forms as an explicit dereference. See perlref.
umask not implemented
(F) Your machine doesn't implement the umask function and you tried
to use it to restrict permissions for yourself (EXPR & 0700).
Unable to create sub named "%s"
(F) You attempted to create or access a subroutine with an illegal
name.
Unbalanced context: %d more PUSHes than POPs
(W internal) The exit code detected an internal inconsistency in
how many execution contexts were entered and left.
Unbalanced saves: %d more saves than restores
(W internal) The exit code detected an internal inconsistency in
how many values were temporarily localized.
Unbalanced scopes: %d more ENTERs than LEAVEs
(W internal) The exit code detected an internal inconsistency in
how many blocks were entered and left.
Unbalanced tmps: %d more allocs than frees
(W internal) The exit code detected an internal inconsistency in
how many mortal scalars were allocated and freed.
Undefined format "%s" called
(F) The format indicated doesn't seem to exist. Perhaps it's
really in another package? See perlform.
Undefined sort subroutine "%s" called
(F) The sort comparison routine specified doesn't seem to exist.
Perhaps it's in a different package? See "sort" in perlfunc.
Undefined subroutine &%s called
(F) The subroutine indicated hasn't been defined, or if it was, it
has since been undefined.
Undefined subroutine called
(F) The anonymous subroutine you're trying to call hasn't been
defined, or if it was, it has since been undefined.
Undefined subroutine in sort
(F) The sort comparison routine specified is declared but doesn't
seem to have been defined yet. See "sort" in perlfunc.
%s: Undefined variable
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
unexec of %s into %s failed!
(F) The unexec() routine failed for some reason. See your local
FSF representative, who probably put it there in the first place.
Unicode character %s is illegal
(W utf8) Certain Unicode characters have been designated off-limits
by the Unicode standard and should not be generated. If you really
know what you are doing you can turn off this warning by "no
warnings 'utf8';".
Unknown BYTEORDER
(F) There are no byte-swapping functions for a machine with this
byte order.
Unknown open() mode '%s'
(F) The second argument of 3-argument open() is not among the list
of valid modes: "<", ">", ">>", "+<", "+>", "+>>", "-|", "|-",
"<&", ">&".
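A minimal sketch (the filename is a placeholder of mine): the second argument must be one of the modes listed above, not a C-style letter such as "r":

```perl
# Valid 3-argument open using the "<" (read) mode.
open(my $fh, '<', 'data.txt') or die "cannot open: $!";
# open(my $fh2, 'r', 'data.txt');   # 'r' is not a valid mode -> Unknown open() mode 'r'
```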
Unknown PerlIO layer "%s"
(W layer) An attempt was made to push an unknown layer onto the
Perl I/O system. (Layers take care of transforming data between
external and internal representations.) Note that some layers,
such as "mmap", are not supported in all environments. If your
program didn't explicitly request the failing operation, it may be
the result of the value of the environment variable PERLIO.
Unknown "re" subpragma '%s' (known ones are: %s)
(W) You tried to use an unknown subpragma of the "re" pragma.
Unknown switch condition (?(%s in regex; marked by <-- HERE in m/%s/
(F) The condition part of a (?(condition)if-clause|else-clause)
construct is not known. The <-- HERE shows in the regular
expression about where the problem was discovered. See perlre.
Unknown Unicode option letter '%c'
(F) You specified an unknown Unicode option. See perlrun
documentation
of the "-C" switch for the list of known options.
Unknown Unicode option value %x
(F) You specified an unknown Unicode option value.
Unknown verb pattern '%s' in regex; marked by <-- HERE in m/%s/
(F) You either made a typo or have incorrectly put a "*" quantifier
after an open brace in your pattern. Check the pattern and review
perlre for details on legal verb patterns.
unmatched [ in regex; marked by <-- HERE in m/%s/
(F) The brackets around a character class must match. If you wish
to include a closing bracket in a character class, backslash it or
put it first. The <-- HERE shows in the regular expression about
where the problem was discovered. See perlre.
unmatched ( in regex; marked by <-- HERE in m/%s/
(F) Unbackslashed parentheses must always be balanced in regular
expressions. If you're a vi user, the % key is valuable for finding
the matching parenthesis. The <-- HERE shows in the regular
expression about where the problem was discovered. See perlre.
Unmatched right %s bracket
(F) The lexer counted more closing curly or square brackets than
opening ones, so you're probably missing a matching opening
bracket. As a general rule, you'll find the missing one (so to
speak) near the place you were last editing.
Unquoted string "%s" may clash with future reserved word
(W reserved) You used a bareword that might someday be claimed as a
reserved word. It's best to put such a word in quotes, or
capitalize it somehow, or insert an underbar into it. You might
also declare it as a subroutine.
Unrecognized character %s in column %d
(F) The Perl parser has no idea what to do with the specified
character in your Perl script (or eval) at the specified column.
Perhaps you tried to run a compressed script, a binary program, or
a directory as a Perl program.
Unrecognized escape \\%c in character class passed through in regex;
marked by <-- HERE in m/%s/
(W regexp) You used a backslash-character combination which is not
recognized by Perl inside character classes. The character was
understood literally. The <-- HERE shows in the regular expression
about where the escape was discovered.
Unrecognized escape \\%c passed through
(W misc) You used a backslash-character combination which is not
recognized by Perl. The character was understood literally.
Unrecognized escape \\%c passed through in regex; marked by <-- HERE in
m/%s/
(W regexp) You used a backslash-character combination which is not
recognized by Perl. The character was understood literally. The
<-- HERE shows in the regular expression about where the escape was
discovered.
Unsuccessful %s on filename containing newline
(W newline) A file operation was attempted on a filename, and that
operation failed, PROBABLY because the filename contained a
newline, PROBABLY because you forgot to chomp() it off. See
"chomp" in perlfunc.
Unsupported directory function "%s" called
(F) Your machine doesn't support opendir() and readdir().
Unsupported function %s
(F) This machine doesn't implement the indicated function,
apparently. At least, Configure doesn't think so.
Unsupported script encoding %s
(F) Your program file begins with a Unicode Byte Order Mark (BOM)
which declares it to be in a Unicode encoding that Perl cannot
read.
Unsupported socket function "%s" called
(F) Your machine doesn't support the Berkeley socket mechanism, or
at least that's what Configure thought.
Unterminated compressed integer
(F) An argument to unpack("w",...) was incompatible with the BER
compressed integer format and could not be converted to an integer.
See "pack" in perlfunc.
Unterminated verb pattern in regex; marked by <-- HERE in m/%s/
(F) You used a pattern of the form "(*VERB)" but did not terminate
the pattern with a ")". Fix the pattern and retry.
Unterminated verb pattern argument in regex; marked by <-- HERE in
m/%s/
(F) You used a pattern of the form "(*VERB:ARG)" but did not
terminate the pattern with a ")". Fix the pattern and retry.
untie attempted while %d inner references still exist
(W untie) A copy of the object returned from "tie" (or "tied") was
still valid when "untie" was called.
Usage: POSIX::%s(%s)
(F) You called a POSIX function with incorrect arguments. See
"FUNCTIONS" in POSIX for more information.
Usage: Win32::%s(%s)
(F) You called a Win32 function with incorrect arguments. See
Win32 for more information.
Useless (?-%s) - don't use /%s modifier in regex; marked by <-- HERE in
m/%s/
(W regexp) You have used an internal modifier such as (?-o) that
has no meaning unless removed from the entire regexp:
if ($string =~ /(?-o)$pattern/o) { ... }
must be written as
if ($string =~ /$pattern/) { ... }
The <-- HERE shows in the regular expression about where the
problem was discovered. See perlre.
Useless localization of %s
(W syntax) The localization of lvalues such as "local($x=10)" is
legal, but in fact the local() currently has no effect. This may
change at some point in the future, but in the meantime such code
is discouraged.
Useless (?%s) - use /%s modifier in regex; marked by <-- HERE in m/%s/
(W regexp) You have used an internal modifier such as (?o) that has
no meaning unless applied to the entire regexp:
if ($string =~ /(?o)$pattern/) { ... }
must be written as
if ($string =~ /$pattern/o) { ... }
The <-- HERE shows in the regular expression about where the
problem was discovered. See perlre.
Useless use of %s in void context
(W void) You did something without a side effect in a context that
does nothing with the return value, such as a statement that
doesn't return a value from a block, or the left side of a scalar
comma operator. Very often this points not to stupidity on your
part, but a failure of Perl to parse your program the way you
thought it would.
This warning will not be issued for numerical constants equal to 0
or 1 since they are often used in statements like
1 while sub_with_side_effects();
String constants that would normally evaluate to 0 or 1 are warned
about.
Useless use of "re" pragma
(W) You did "use re;" without any arguments. That isn't very
useful.
Useless use of sort in scalar context
(W void) You used sort in scalar context, as in :
my $x = sort @y;
This is not very useful, and perl currently optimizes this away.
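As a sketch of the usual fix (example mine), keep sort in list context:

```perl
my @y = (3, 1, 2);
# my $x = sort @y;                   # scalar context: not useful
my @sorted = sort { $a <=> $b } @y;  # list context: (1, 2, 3)
my ($min)  = sort { $a <=> $b } @y;  # $min is 1
```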
Useless use of %s with no values
(W syntax) You used the push() or unshift() function with no
arguments apart from the array, like "push(@x)" or "unshift(@foo)".
That won't usually have any effect on the array, so is completely
useless. It's possible in principle that push(@tied_array) could
have some effect if the array is tied to a class which implements a
PUSH method. If so, you can write it as "push(@tied_array,())" to
avoid this warning.
"use" not allowed in expression
(F) The "use" keyword is recognized and executed at compile time,
and returns no useful value. See perlmod.
Use of bare << to mean <<"" is deprecated
(D deprecated, W syntax) You are now encouraged to use the
explicitly quoted form if you wish to use an empty line as the
terminator of the here-document.
Use of comma-less variable list is deprecated
(D deprecated, W syntax) The values you give to a format should be
separated by commas, not just aligned on a line.
Use of chdir('') or chdir(undef) as chdir() deprecated
(D deprecated) chdir() with no arguments is documented to change to
$ENV{HOME} or $ENV{LOGDIR}. chdir(undef) and chdir('') share this
behavior, but that has been deprecated. In future versions they
will cause a fatal error.
Use of /c modifier is meaningless without /g
(W regexp) You used the /c modifier with a regex operand, but
didn't use the /g modifier. Currently, /c is meaningful only when
/g is used. (This may change in the future.)
Use of freed value in iteration
(F) Perhaps you modified the iterated array within the loop? This
error is typically caused by code like the following:
    @a = (3,4);
    @a = () for (1,2,@a);
You are not supposed to modify arrays while they are being
iterated over.
Use of *glob{FILEHANDLE} is deprecated
(D deprecated) You are now encouraged to use the shorter *glob{IO}
form to access the filehandle slot within a typeglob.
Use of /g modifier is meaningless in split
(W regexp) You used the /g modifier on the pattern for a "split"
operator. Since "split" always tries to match the pattern
repeatedly, the "/g" has no effect.
Use of inherited AUTOLOAD for non-method %s() is deprecated
. "use AutoLoader 'AUTOLOAD';".
Use of -l on filehandle %s
(W io) A filehandle represents an opened file, and when you opened
the file it already went past any symlink you are presumably trying
to look for. The operation returned "undef". Use a filename
instead.
Use of "package" with no arguments is deprecated
(D deprecated) You used the "package" keyword without specifying a
package name. So no namespace is current at all. Using this can
cause many otherwise reasonable constructs to fail in baffling
ways. Use "use strict;" instead.
Use of reference "%s" as array index
(W misc) You tried to use a reference as an array index; this
probably isn't what you mean, because references in numerical
context tend to be huge numbers, and so usually indicates
programmer error.
If you really do mean it, explicitly numify your reference, like
so: $array[0+$ref]. This warning is not given for overloaded
objects, either, because you can overload the numification and
stringification operators and then you assumably know what you are
doing.
Use of reserved word "%s" is deprecated
(D deprecated) The indicated bareword is a reserved word. Future
versions of perl may use it as a keyword, so you're better off
either explicitly quoting the word in a manner appropriate for its
context of use, or using a different name altogether. The warning
can be suppressed for subroutine names by either adding a "&"
prefix, or using a package qualifier, e.g. "&our()" or "Foo::our()".
Use of tainted arguments in %s is deprecated
(W taint, deprecated) You have supplied "system()" or "exec()" with
multiple arguments and at least one of them is tainted. This used
to be allowed but will become a fatal error in a future version of
perl. Untaint your arguments. See perlsec.
Use of uninitialized value%s
(W uninitialized) An undefined value was used as if it were already
defined. It was interpreted as a "" or a 0, but maybe it was a
mistake. To suppress this warning assign a defined value to your
variables.
Using a hash as a reference is deprecated
(D deprecated) You tried to use a hash as a reference, as in
"%foo->{"bar"}" or "%$ref->{"hello"}". Versions of perl <= 5.6.1
used to allow this syntax, but shouldn't have. It is now
deprecated, and will be removed in a future version.
UTF-16 surrogate %s
(W utf8) You tried to generate half of an UTF-16 surrogate by
requesting a Unicode character between the code points 0xD800 and
0xDFFF (inclusive). That range is reserved exclusively for the use
of UTF-16 encoding (by having two 16-bit UCS-2 characters); but
Perl encodes its characters in UTF-8, so what you got is a very
illegal character. If you really know what you are doing you can
turn off this warning by "no warnings 'utf8';".
Value of %s can be "0"; test with defined()
(W misc) In a conditional expression, you used <HANDLE>, <*>
(glob), each(), or readdir() as a boolean value. Each of these
constructs can return a value of "0"; that would make the
conditional expression false, which is probably not what you
intended. When using these constructs in conditional expressions,
test the value with the defined operator.
Value of CLI symbol "%s" too long
(W misc) A warning peculiar to VMS. Perl tried to read the value
of an %ENV element from a CLI symbol table, and found a resultant
string longer than 1024 characters. The return value has been
truncated to 1024 characters.
Variable "%s" is not available
(W closure) During compilation, an inner named subroutine or eval
is attempting to capture an outer lexical that is not currently
available. This can happen for one of two reasons. First, the
outer lexical may be declared in an outer anonymous subroutine that
has not yet been created. (Remember that named subs are created at
compile time, while anonymous subs are created at run-time.) For
example:
    sub { my $x; sub f { $x } }
At the time that f is created, it can't capture the current value
of the variable. The second situation is caused by an eval
accessing a lexical that has gone out of scope.
Variable "%s" is not imported%s
(F) While "use strict" in effect, you referred to a global variable
to the previous instance. This is almost always a typographical
error. Note that the earlier variable will still exist until the
end of the scope or until all closure referents to it are
destroyed.
Variable syntax
(A) You've accidentally run your script through csh instead of
Perl. Check the #! line, or manually feed your script into Perl
yourself.
Variable "%s" will not stay shared
(W closure) An inner (nested) named subroutine is referencing a
lexical variable defined in an outer named subroutine. When the
inner subroutine is called, it will see the value of the outer
subroutine's variable as it was before and during the *first* call
to the outer subroutine; in this case, after the first call to the
outer subroutine is complete, the inner and outer subroutines will
no longer share a common value for the variable. This problem can
usually be solved by making the inner subroutine anonymous, using
the "sub {}" syntax.
Verb pattern '%s' has a mandatory argument in regex; marked by <-- HERE
in m/%s/
(F) You used a verb pattern that requires an argument. Supply an
argument or check that you are using the right verb.
Verb pattern '%s' may not have an argument in regex; marked by <-- HERE
in m/%s/
(F) You used a verb pattern that is not allowed an argument. Remove
the argument or check that you are using the right verb.
Version number must be a constant number
(P) The attempt to translate a "use Module n.n LIST" statement into
its equivalent "BEGIN" block found an internal inconsistency with
the version number.
Version string '%s' contains invalid data; ignoring: '%s'
(W misc) The version string contains invalid characters at the end,
which are being ignored.
Warning: something's wrong
(W) You passed warn() an empty string (the equivalent of "warn """)
or you called it with no args and $@ was empty.
Warning: unable to close filehandle %s properly
(S) The implicit close() done by an open() got an error indication
on the close(). This usually indicates your file system ran out of
disk space.
Warning: Use of "%s" without parentheses is ambiguous
(S ambiguous) You wrote a unary operator followed by something that
looks like a binary operator that could also have been interpreted
as a term or unary operator. For instance, if you know that the
rand function has a default argument of 1.0, and you write
    rand + 5;
you may THINK you wrote the same thing as
    rand() + 5;
but in actual fact, you got
rand(+5);
So put in parentheses to say what you really mean.
Wide character in %s
(W utf8) Perl met a wide character (>255) when it wasn't expecting
one. This warning is by default on for I/O (like print). The
easiest way to quiet this warning is simply to add the ":utf8"
layer to the output, e.g. "binmode STDOUT, ':utf8'". Another way
to turn off the warning is to add "no warnings 'utf8';" but that
is often closer to cheating. In general, you are supposed to
explicitly mark the filehandle with an encoding, see open and
"binmode" in perlfunc.
Within []-length '%c' not allowed
(F) The count in the (un)pack template may be replaced by
"[TEMPLATE]" only if "TEMPLATE" always matches the same amount of
packed bytes that can be determined from the template alone. This
is not possible if it contains any of the codes @, /, U, u, w or a
*-length. Redesign the template.
write() on closed filehandle %s
(W closed) The filehandle you're writing to got itself closed
sometime before now. Check your control flow.
%s "\x%s" does not map to Unicode
When reading in different encodings Perl tries to map everything
into Unicode characters. The bytes you read in are not legal in
this encoding, for example
utf8 "\xE4" does not map to Unicode
if you try to read in the a-diaereses Latin-1 as UTF-8.
'X' outside of string
(F) You had a (un)pack template that specified a relative position
before the beginning of the string being (un)packed. See "pack" in
perlfunc.
'x' outside of string in unpack
(F) You had a pack template that specified a relative position
after the end of the string being unpacked. See "pack" in
perlfunc.
YOU HAVEN'T DISABLED SET-ID SCRIPTS IN THE KERNEL YET!
(F) And you probably never will, because you probably don't have
the sources to your kernel, and your vendor probably doesn't give a
rip about what you want. Your best bet is to put a setuid C
wrapper around your script.
You need to quote "%s"
(W syntax) You assigned a bareword as a signal handler name.
Unfortunately, you already have a subroutine of that name declared,
which means that Perl 5 will try to call the subroutine when the
assignment is executed, which is probably not what you want. (If
it IS what you want, put an & in front.)
You can watch my Youtube video (link) with the same content as this blog. Anyway, enjoy.
Introduction
Let's learn bitwise operations that are useful in Competitive Programming. Prerequisite is knowing the binary system. For example, the following must be clear for you already.
Keep in mind that we can pad a number with leading zeros to a length equal to the size of our type. For example, char has $$$8$$$ bits and int has $$$32$$$.
Bitwise AND, OR, XOR
You likely already know basic logical operations like AND and OR. Using
if(condition1 && condition2) checks if both conditions are true, while OR (
c1 || c2) requires at least one condition to be true.
Same can be done bit-per-bit with whole numbers, and it's called bitwise operations. You must know bitwise AND, OR and XOR, typed respectively as
& | ^, each with just a single character. XOR of two bits is $$$1$$$ when exactly one of those two bits is $$$1$$$ (so, XOR corresponds to
!= operator on bits). There's also NOT but you won't use it often. Everything is explained in Wikipedia but here's an example for bitwise AND. It shows that
53 & 28 is equal to $$$20$$$.
53 = 110101
28 = 11100

110101 & 11100   // imagine padding a shorter number with leading zeros to get the same length
-------
010100 = 20
#include <bits/stdc++.h>
using namespace std;

string to_binary(int x) {
    string s;
    while(x > 0) {
        s += (x % 2 ? '1' : '0');
        x /= 2;
    }
    reverse(s.begin(), s.end());
    return s;
}

int main() {
    cout << "13 = " << to_binary(13) << endl; // 1101
    int x = 53;
    int y = 28;
    cout << "x = " << to_binary(x) << ", y = " << to_binary(y) << endl;
    cout << "AND, OR, XOR:" << endl;
    cout << to_binary(x & y) << " " << to_binary(x | y) << " " << to_binary(x ^ y) << endl;
}
In comments, Rezwan.Arefin01 suggested a simpler way to print a number in binary format.
cout << bitset<8>(x); prints a number after converting it into a bitset, which can be printed. There will be more info about bitsets in part 2.
Shifts
There are also bitwise shifts
<< and
>>, not having anything to do with operators used with
cin and
cout.
As the arrows suggest, the left shift
<< shifts bits to the left, increasing the value of the number. Here's what happens with
13 << 2 — a number $$$13$$$ shifted by $$$2$$$ to the left.
LEFT SHIFT                  RIGHT SHIFT
13 = 1101                   13 = 1101
(13 << 2) = 110100          (13 >> 2) = 11
If there is no overflow, an expression
x << b is equal to $$$x \cdot 2^b$$$, like here we had
(13 << 2) = 52.
Similarly, the right shift
>> shifts bits to the right and some bits might disappear this way, like bits
01 in the example above. An expression
x >> b is equal to the floor of $$$\frac{x}{2^b}$$$. It's more complicated for negative numbers but we won't discuss it.
So what can we do?
$$$2^k$$$ is just
1 << k or
1LL << k if you need long longs. Such a number has binary representation like
10000 and its AND with any number $$$x$$$ can have at most one bit on (one bit equal to $$$1$$$). This way we can check if some bit is on in number $$$x$$$. The following code finds ones in the binary representation of $$$x$$$, assuming that $$$x \in [0, 10^9]$$$:
for(int i = 0; i < 30; i++)
    if((x & (1 << i)) != 0)
        cout << i << " ";
(we don't have to check $$$i = 30$$$ because $$$2^{30} > x$$$)
And let's do that slightly better, stopping for too big bits, and using the fact that
if(value) checks if
value is non-zero in C++.
for(int i = 0; (1 << i) <= x; i++)
    if(x & (1 << i))
        cout << i << " ";
Consider this problem: You are given $$$N \leq 20$$$ numbers, each up to $$$10^9$$$. Is there a subset with sum equal to given goal $$$S$$$?
It can be solved with recursion but there's a very elegant iterative approach that iterates over every number $$$x$$$ from $$$0$$$ to $$$2^n - 1$$$ and considers $$$x$$$ to be a binary number of length $$$n$$$, where bit $$$1$$$ means taking a number and bit $$$0$$$ is not taking. Understanding this is crucial to solve any harder problems with bitwise operations. Analyze the following code and then try to write it yourself from scratch without looking at mine.
for(int mask = 0; mask < (1 << n); mask++) {
    long long sum_of_this_subset = 0;
    for(int i = 0; i < n; i++) {
        if(mask & (1 << i)) {
            sum_of_this_subset += a[i];
        }
    }
    if(sum_of_this_subset == S) {
        puts("YES");
        return 0;
    }
}
puts("NO");
Two easy problems where you can practice iterating over all $$$2^N$$$ possibilities:
-
-
Speed
Time complexity of every bitwise operation is $$$O(1)$$$. These operations are very very fast (well, popcount is just fast) and doing $$$10^9$$$ of them might fit in 1 second. You will later learn about bitsets which often produce complexity like $$$O(\frac{n^2}{32})$$$, good enough to pass constraints $$$n \leq 10^5$$$.
I will welcome any feedback. Coming next: popcount, bitsets, dp with bitmasks.
I will also make a YT video on this. YT video link is at the top. | http://codeforces.com/blog/entry/73490 | CC-MAIN-2020-45 | refinedweb | 856 | 71.24 |
# Part 2: Upsetting Opinions about Static Analyzers

By writing the article "Upsetting Opinions about Static Analyzers" we were supposed to get it off our chest and peacefully let it all go. However, the article unexpectedly triggered robust feedback. Unfortunately, the discussion went in the wrong direction, and now we will make a second attempt to explain our view of this situation.
Joke to the topic
-----------------
It all started with the article "[Upsetting Opinions about Static Analyzers](https://www.viva64.com/en/b/0765/)". It came into a question on some resources and that discussion reminded me of an old joke.
> Some tough Alabama loggers once got a fancy Japanese gas chainsaw.
>
> They gathered around and decided to give it a try.
>
> Started it up and shoved it a thin tree.
>
> "Whoosh," said the Japanese saw.
>
> "Oh, my ..." said the loggers.
>
> They gave it a thicker tree. "Whoooosh!", said the saw.
>
> "Ugh, damn it!" said the loggers.
>
> A thick cedar came next. "WH-WH-WH-WH-OOOOOOSH!!!" said the saw.
>
> "Wow, holy shit!!"— said the loggers.
>
> They gave it a crowbar. "BANG!" said the saw.
>
> "Oh crap, gotcha!!!"— reproachfully yelled tough Alabama loggers! And went to cut down forest with axes...
This story is just one and the same. People looked at this code:
```
if (A[0] == 0)
{
X = Y;
if (A[0] == 0)
....
}
```
And started coming up with cases where it might be justified, meaning the PVS-Studio warning would be a false positive. Speculation arose that memory might change between the two checks due to:
* running parallel threads;
* signal/interrupt handlers;
* the variable *X* is a reference to the element *A[0]*;
* hardware, such as performing DMA operations;
* and so on.
After heated debate about the analyzer's inability to comprehend all such cases, they went off to cut down the forest with axes. In other words, they found an excuse to keep avoiding static code analysis in their work.
Our view of this case
---------------------
This approach is counterproductive. An imperfect tool may well be useful, and its use will be economically feasible.
Yes, any static analyzer issues false-positive warnings. There's nothing we can do about it. However, this misfortune is greatly exaggerated. In practice, static analyzers can be configured and used in various ways to suppress and deal with false positives (see [1](https://www.viva64.com/en/b/0743/), [2](https://www.viva64.com/en/m/0040/), [3](https://www.viva64.com/en/b/0523/), [4](https://habr.com/en/post/440610/)). In addition, it is fitting here to recall the article "[False positives are our enemies, but may still be your friends](https://blog.sonarsource.com/false-positives-our-enemies-but-maybe-your-friends)".
On the other hand, even this is not the main thing. **Special cases of exotic code do not make sense to consider at all!** Can complex code confuse the analyzer? Yes, it can. At the same time, for one such case, there will be hundreds of useful analyzer findings. You can find and fix a lot of errors at the earliest stage. As for one or two false positives, they will be safely suppressed and will not bother you any longer.
PVS-Studio is right once again
------------------------------
This is where the article could end. Nevertheless, some may consider the previous section to be not rational considerations, but attempts to hide the weaknesses and shortcomings of the PVS-Studio tool. So, we'll have to continue.
Let's take look at the actual compiled [code](https://godbolt.org/z/q1fq14) with variable declarations:
```
void SetSynchronizeVar(int *);
int foo()
{
int flag = 0;
SetSynchronizeVar(&flag);
int X, Y = 1;
if (flag == 0)
{
X = Y;
if (flag == 0)
return 1;
}
return 2;
}
```
The PVS-Studio analyzer reasonably issues a warning: V547 Expression 'flag == 0' is always true.
It is perfectly right. If someone starts ranting that a variable can change in another thread, in a signal handler, and so on, they just don't understand the C and C++ language. You just mustn't write code in such a way.
The compiler has the right to throw out the second check for optimization purposes and will be absolutely right. From the language point of view, the variable can't change. Its background change is nothing more than undefined behavior.
For the check to remain in place, the variable must be declared as *volatile*:
```
void SetSynchronizeVar(volatile int *);
int foo()
{
volatile int flag = 0;
SetSynchronizeVar(&flag);
....
}
```
The PVS-Studio analyzer knows about this and no longer issues a warning for such [code](https://godbolt.org/z/bb3srf).
Here we go back to what was discussed in the [first article](https://www.viva64.com/en/b/0765/). There is no problem here; what we see is criticism born of a misunderstanding of why the analyzer has the right to issue the warning.
Note for the most meticulous readers
------------------------------------
Some readers may return to the synthetic example from the first article:
```
char get();
int foo(char *p, bool arg)
{
if (p[1] == 1)
{
if (arg)
p[0] = get();
if (p[1] == 1) // Warning
return 1;
}
// ....
return 3;
}
```
And add *volatile*:
```
char get();
int foo(volatile char *p, bool arg)
{
if (p[1] == 1)
{
if (arg)
p[0] = get();
if (p[1] == 1) // Warning :-(
return 1;
}
// ....
return 3;
}
```
After that, it is fair to note that the analyzer still issues the warning V547 Expression 'p[1] == 1' is always true.
Hooray, finally the analyzer is obviously wrong :). This is a false positive!
As you see, we do not hide any shortcomings. When analyzing the data flow for array elements, this unfortunate *volatile* was lost. This flaw has already been found and fixed. The fix will ship in the next analyzer version, and there will be no false positive.
Why wasn't this bug detected earlier? Because in fact, this is again contrived code that is not found in real projects. Truth be told, we haven't seen such code yet, although we have checked a lot of [open projects](https://www.viva64.com/en/inspections/).
Why is the code unrealistic? First, in practice there will be some sort of synchronization or delay function between the two checks. Second, no one in their right mind creates arrays of volatile elements unless absolutely necessary: working with such an array incurs a huge performance penalty.
Let's recap. You can easily create examples where the analyzer makes mistakes. But from a practical point of view, the identified flaws practically do not affect the quality of code analysis and the number of real errors detected. After all, the code of real applications is just code that is understandable to both the analyzer and the person, and is not a quiz or a puzzle. If the code is a puzzle, then there are other things to worry about :).
Thanks for your attention.
Additional links
----------------
* [How to introduce a static code analyzer in a legacy project and not to discourage the team](https://www.viva64.com/en/b/0743/).
* [Additional diagnostics configuration](https://www.viva64.com/en/m/0040/).
* [Characteristics of PVS-Studio Analyzer by the Example of EFL Core Libraries, 10-15% of False Positives](https://www.viva64.com/en/b/0523/).
* [Introduce Static Analysis in the Process, Don't Just Search for Bugs with It](https://habr.com/en/post/440610/).