text
stringlengths
256
65.5k
i have an application where we allow users to use Oauth2 for authentication and even Custom User Registrations. All the Users are saved into the default User entity in the datastore. If the user is logging in using Oauth2 for the first time a new record in the default User entity is created like this: """Check if user is already logged in""" if self.logged_in: logging.info('User Already Logged In. Updating User Login Information') u = self.current_user u.auth_ids.append(auth_id) u.populate(**self._to_user_model_attrs(data, self.USER_ATTRS[provider])) u.put() else: """Create a New User""" logging.info('Creating a New User') ok, user = self.auth.store.user_model.create_user(auth_id, **self._to_user_model_attrs(data, self.USER_ATTRS[provider])) if ok: self.auth.set_session( self.auth.store.user_to_dict(user) ) self.redirect(continue_url) for custom registrations records are inserted through the following handler. class RegistrationHandler(TemplateHandler, SimpleAuthHandler): def get(self): self.render('register.html') def post(self): """Process registration form.""" user = 'appname:%s' % self.request.get('email') name = '%s %s' % (self.request.get('first_name'), self.request.get('last_name')) password = self.request.get('password') avatar = self.request.get('avatar') act_url = user_activation.Activate(self.request.get('first_name'), self.request.get('email')) ok, user = User.create_user(auth_id=user, name=name, password_raw=password, email=self.request.get('email')) if ok: self.auth.set_session(self.auth.store.user_to_dict(user)) acc = models.Account(display_name=self.request.get('first_name'), act_url=act_url, act_key=act_url.split('activate/')[1], user=users.User(User.get_by_auth_id(self.current_user.auth_ids[0]).email)) acc.put() if avatar: avt = models.Picture(is_avatar=True, is_approved=True, image=avatar, user=users.User(User.get_by_auth_id(self.current_user.auth_ids[0]).email)) avt.put() self.redirect('/') Now we are using webapp2_extras.sessions for session handling. We have different models like, Comments, Images, Reviews etc in which we want to use db.UserProperty() as the author field. However, the author field shows blank or None whenever we enter a record into any of these models using 'users.get_current_user()'. I think this is because we are handling the sessions through webapp2 sessions. What we want to achieve is to be able to use the db.UserProperty field in various models and link appropriately to the current user using webapp2 sessions ? the UserProperty() has to be passed with a User Object in order for it to properly insert the records. Even though we are able to enter the records using the following code : user = users.User(User.get_by_auth_id(self.current_user.auth_ids[0]).email) or user = users.User(User.get_by_auth_id(self.current_user.auth_ids[0]).name) but then we are not able to get the whole user object by referencing to model.author Any ideas how we should achieve this ?
How can I add constant acceleration to this pendulum as a whole while using this code? The code right now is describing a pendulum, how would I alter it to describe a pendulum in a moving train (where the train has a constant acceleration)? Any help would be appreciated, thank you in advance. from math import sin, pi from time import sleep from turtle import * GA = 9.80665 # Gravitational Acceleration (meters per second squared) FORM = 'Time={:6.3f}, Angle={:6.3f}, Speed={:6.3f}' def main(): length = 10.0 # Of pendulum (meters) ngol = - GA / length # Negative G over L total_time = 0.0 # Seconds angle = 1.0 # Initial angle of pendulum (radians) speed = 0.0 # Initial angular velocity (radians/second) time_step = 0.05 # Seconds acc = 1 while total_time < 30.0: total_time += time_step speed += ngol * sin(angle) * time_step angle += speed * time_step #print(FORM.format(total_time, angle, speed)) if draw(angle, length): break sleep(time_step) def init(): setup() mode('logo') radians() speed(0) hideturtle() tracer(False) penup() def draw(angle, length): if speed() != 0: return True clear() setheading(angle + pi) pensize(max(round(length), 1)) pendown() forward(length * 25) penup() dot(length * 10) home() update() if __name__ == '__main__': init() main() bye()
I'm trying to build a simple API with the bottle.py (Bottle v0.11.4) web framework. To 'daemonize' the app on my server (Ubuntu 10.04.4), I'm running the shell nohup python test.py & , where test.py is the following python script: import sys import bottle from bottle import route, run, request, response, abort, hook @hook('after_request') def enable_cors(): response.headers['Access-Control-Allow-Origin'] = '*' @route('/') def ping(): return 'Up and running!' if __name__ == '__main__': run(host=<my_ip>, port=3000) I'm running into the following issue: This works initially but the server stops responding after some time (~24hours). Unfortunately, the logs don't contain any revealing error messages. The only way I have been able to reproduce the issue is when I try to run a second script on my Ubuntu server that creates another server listening to a different port (ie.: exactly the same script as above but port=3001). If I send a request to the newly created server, I also do not get a response and the connection eventually times out. Any suggestions are greatly appreciated. I'm new to this, so if there's something fundamentally wrong with this approach, any links to reference guides would also be appreciated. Thank you!
Shanx Re : /* Topic des codeurs [8] */ Gnagnagna... J'ai corrigé l'indentation, par contre pour découper le main je vais attendre que le programme fonctionne... /**** PENDU ****/ #include<stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <ctype.h> #define TAILLE_MAX 50 void clean_stdin(void); int main(void){ int i; /* compteur */ int j=0; /* Compteur de coups faux */ int k; /* compteur qui s'incrémente si la lettre proposée est dans le mot */ int l; /* compteur qui vérifie si on est au premier tour ou non */ char mot[TAILLE_MAX]=""; char caractereLu; int numMotChoisi, nombreMots; nombreMots = 0; FILE* fichier = NULL; fichier = fopen("mots.txt", "r"); /* On vérifie que le fichier existe */ if (fichier == NULL){ printf("Impossible d'ouvrir le fichier mots.txt"); return 0; } // On compte le nombre de mots dans le fichier (il suffit de compter les entrées \n while((caractereLu=fgetc(fichier)) != EOF){ if (caractereLu == '\n'){ nombreMots++; } } printf("Le num de mot est %d\n", nombreMots); srand(time(NULL)); numMotChoisi = (rand() % nombreMots); /* On choisit une ligne aléatoirement */ printf("Le num choisi est %d\n", numMotChoisi); /* On prend le mot choisi */ rewind(fichier); while (numMotChoisi > 0){ caractereLu = fgetc(fichier); if (caractereLu == '\n') numMotChoisi--; } fgets(mot, TAILLE_MAX, fichier); printf("%s", mot); fclose(fichier); mot[strlen(mot) - 1] = '\0'; int longueurMot = strlen(mot); char t[longueurMot]; /* tableau des lettres à deviner */ char lettre; /* lettre entrée par l'user */ int z=0; /* z != 0 quand l'user trouve le bon mot */ char lettresTestees[26]; /*ensemble des lettres déjà testées */ /* On remplace les lettres par des _ */ for(i=0 ; i < longueurMot ; i++){ t[i] = '_'; } while (z != longueurMot){ /* Affichage du mot à trouver avec le cas échéant les lettres déjà trouvées */ puts("Le mot à deviner est : "); for(i=0 ; i < longueurMot ; i++){ printf("%c ", t[i]); } puts("\n"); /* Affichage des lettres déjà testées */ if (l!=0){ puts("Les lettres déjà testées sont : "); for(i=0 ; i < j ; i++){ printf("%c ", lettresTestees[i]); } } puts("\n"); while(!islower(lettre)){ puts("Quelle lettre proposes-tu ?"); lettre=getchar(); } lettresTestees[l]=lettre; /* On entre la lettre dans le tableau des lettres testées */ clean_stdin(); for(i=0 ; i < longueurMot ; i++){ if(lettre == mot[i]){ /* Si la lettre est présente dans le mot */ t[i] = lettre; /* On remplace les _ correspondants par la lettre */ k++; } else{ j++; /* Sinon on incrémente le compteur de coups faux */ } } l++; /* l devient différent de 0 car ce n'est plus le premier tour */ if (k==0){ puts("Dommage !\n"); /* si la lettre n'était pas dans le mot */ } for (i=0; i<longueurMot ; i++) { /* On vérifie qu'on a pas trouvé le mot entier */ if (t[i] == mot[i]) z++; else { z = 0; } } } puts("Bravo !"); return 0; } void clean_stdin(void){ int c; do { c = getchar(); } while (c != '\n' && c != EOF); } J'ai quelques problème : là quelque soit la lettre que je rentre, il considère que j'ai rentré la même lettre qu'au premier tour. Genre si je rentre f, ensuite quoique je rentre il prendra f. Pour corriger ça il faut que j'utilise clean_stdin, mais alors le comportement devient complètement erratique... Ensuite la partie d'affichage des caractères déjà testés déconne joliment aussi... $ ./pendu Le num de mot est 3 Le num choisi est 2 lievre Le mot à deviner est : _ _ _ _ _ _ Quelle lettre proposes-tu ? f Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f e Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f ~ z Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f f ~  z Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f f f ~  f Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f f f f ~  @ j Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f f f f f ~  @ l i e v Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : f f f f f f f ~  @ l i e v r e Oui, la réponse apparaît toute seule si on appuie plusieurs fois sur entrée... Dernière modification par Shanx (Le 18/11/2012, à 13:19) Hors ligne Pylades Re : /* Topic des codeurs [8] */ Gnagnagna... Je sais que c’est frustrant, mais si tu n’es pas rigoureux dès le départ, je ne vois pas quand tu le deviendras… J'ai corrigé l'indentation, par contre pour découper le main je vais attendre que le programme fonctionne... Voilà typiquement le genre de phrases qui font peur… “Any if-statement is a goto. As are all structured loops. “And sometimes structure is good. When it’s good, you should use it. “And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer.” Linus Torvalds – 12 janvier 2003 Hors ligne Shanx Re : /* Topic des codeurs [8] */ Shanx a écrit : Gnagnagna... Je sais que c’est frustrant, mais si tu n’es pas rigoureux dès le départ, je ne vois pas quand tu le deviendras… Shanx a écrit : J'ai corrigé l'indentation, par contre pour découper le main je vais attendre que le programme fonctionne... Voilà typiquement le genre de phrases qui font peur… J'ai édité mon post. Moi ça me parait logique d'optimiser après que ça marche, non ? Parce que sinon ce serait s'éparpiller, étant donné que pour séparer le main y'a quelques trucs que je ne sais pas faire... (Genre faire une fonction qui a un fichier déjà ouvert en argument). Hors ligne tshirtman Re : /* Topic des codeurs [8] */ Oui, découper est souvent le moyen le plus rapide de s'en sortir, et que ça fonctionne, justement. Hors ligne Pylades Re : /* Topic des codeurs [8] */ Moi ça me parait logique d'optimiser après que ça marche, non ? Mais ce n’est pas de l’optimisation, c’est de la clarté de base ! C’est comme pour ton buffer overflow, ce n’était pas de l’optimisation, mais de la sécurité de base. (Genre faire une fonction qui a un fichier déjà ouvert en argument). Où est le problème ? “Any if-statement is a goto. As are all structured loops. “And sometimes structure is good. When it’s good, you should use it. “And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer.” Linus Torvalds – 12 janvier 2003 Hors ligne The Uploader Re : /* Topic des codeurs [8] */ Quand on parle d'optimisation, c'est optimiser les performances (usage I/O, mémoire, CPU). Le reste ça ne s'appelle pas de l'optimisation, mais être propre et rigoureux. Passer de Ubuntu 10.04 à Xubuntu 12.04 LTS Archlinux + KDE sur ASUS N56VV. ALSA, SysV, DBus, Xorg = Windows 98 ! systemd, kdbus, ALSA + PulseAudio, Wayland = modern OS (10 years after Windows, but still...) ! Deal with it ! Hors ligne Shanx Re : /* Topic des codeurs [8] */ Nouveau code, qui ne fonctionne toujours pas : /**** PENDU ****/ #include<stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <ctype.h> #define TAILLE_MAX 50 void clean_stdin(void); char* choixMot(FILE* fichier); int main(void){ int i; /* compteur */ int j=0; /* Compteur de coups faux */ int k; /* compteur qui s'incrémente si la lettre proposée est dans le mot */ int l; /* compteur qui vérifie si on est au premier tour ou non */ char* mot; FILE* fichier = NULL; fichier = fopen("mots.txt", "r"); /* On vérifie que le fichier existe */ if (fichier == NULL){ printf("Impossible d'ouvrir le fichier mots.txt"); return 0; } mot=choixMot(fichier); int longueurMot = strlen(mot); char t[longueurMot]; /* tableau des lettres à deviner */ char lettre; /* lettre entrée par l'user */ int z=0; /* z != 0 quand l'user trouve le bon mot */ char lettresTestees[26]; /*ensemble des lettres déjà testées */ /* On remplace les lettres par des _ */ for(i=0 ; i < longueurMot ; i++){ t[i] = '_'; } while (z != longueurMot){ /* Affichage du mot à trouver avec le cas échéant les lettres déjà trouvées */ puts("Le mot à deviner est : "); for(i=0 ; i < longueurMot ; i++){ printf("%c ", t[i]); } puts("\n"); /* Affichage des lettres déjà testées */ if (l!=0){ puts("Les lettres déjà testées sont : "); for(i=0 ; i < j ; i++){ printf("%c ", lettresTestees[i]); } } puts("\n"); while(!islower(lettre)){ puts("Quelle lettre proposes-tu ?"); lettre=getchar(); } lettresTestees[l]=lettre; /* On entre la lettre dans le tableau des lettres testées */ clean_stdin(); for(i=0 ; i < longueurMot ; i++){ if(lettre == mot[i]){ /* Si la lettre est présente dans le mot */ t[i] = lettre; /* On remplace les _ correspondants par la lettre */ k++; } else{ j++; /* Sinon on incrémente le compteur de coups faux */ } } l++; /* l devient différent de 0 car ce n'est plus le premier tour */ if (k==0){ puts("Dommage !\n"); /* si la lettre n'était pas dans le mot */ } for (i=0; i<longueurMot ; i++) { /* On vérifie qu'on a pas trouvé le mot entier */ if (t[i] == mot[i]) z++; else { z = 0; } } } puts("Bravo !"); return 0; } void clean_stdin(void){ int c; do { c = getchar(); } while (c != '\n' && c != EOF); } char* choixMot(FILE* fichier){ char mot[TAILLE_MAX]=""; char caractereLu; int numMotChoisi, nombreMots; nombreMots = 0; // On compte le nombre de mots dans le fichier (il suffit de compter les entrées \n while((caractereLu=fgetc(fichier)) != EOF){ if (caractereLu == '\n'){ nombreMots++; } } printf("Le num de mot est %d\n", nombreMots); srand(time(NULL)); numMotChoisi = (rand() % nombreMots); /* On choisit une ligne aléatoirement */ printf("Le num choisi est %d\n", numMotChoisi); /* On prend le mot choisi */ rewind(fichier); while (numMotChoisi > 0){ caractereLu = fgetc(fichier); if (caractereLu == '\n') numMotChoisi--; } fgets(mot, TAILLE_MAX, fichier); printf("%s", mot); fclose(fichier); mot[strlen(mot) - 1] = '\0'; return mot; } Déjà j'ai une erreur à la compilation : pendu.c: In function ‘choixMot’: pendu.c:131:5: warning: function returns address of local variable [enabled by default] Ensuite ça se comporte très bizarrement... $ ./pendu Le num de mot est 3 Le num choisi est 1 maison Le mot à deviner est : _ _ _ _ _ _ Quelle lettre proposes-tu ? r Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r K n  m Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r n  W V Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r  W V Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r  W V Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r  W V J n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r W V J n  P J n Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r W V J n  P J n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r W V J n  P J n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r W V J n  P J n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r V J n  P J n  J n Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r J n  P J n  J n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r J n  P J n  J n  L Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r J n  P J n  J n  L Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r J n  P J n  J n  L r Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r J n  P J n  J n  L r Z Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r J n  P J n  J n  L r ` \ Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r J n  P J n  J n  L r f \ K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r l \ K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r r \ K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r x \ K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r ~ \ K n  =  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r \ K n  =  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r \ K n  =  K Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r \ K n  =  K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r J n  P J n  J n  L r \ K n  =  K n  Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r n  P J n  J n  L r \ K n  =  K n  @ Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r  P J n  J n  L r \ K n  =  K n  @ Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r  P J n  J n  L r \ K n  =  K n  @ Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r r  P J n  J n  L r \ K n  =  K n  @ W u ! Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r P J n  J n  L r @ K n  =  K n  @ W u ! c Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r P J n  J n  L r @ K  =  K n  @ W u ! c Dommage ! Le mot à deviner est : _ _ _ _ _ _ Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r P J n  J n  L r @ K n  =  K n  @ W u ! c Dommage ! Le mot à deviner est : r r r r r r Les lettres déjà testées sont : r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r r J n  J n  L r ! @ K n  =  K n  @ W u ! c Dommage ! zsh: segmentation fault ./pendu Hors ligne nathéo Re : /* Topic des codeurs [8] */ Salut o/ Dites est-ce qu'il y en a ici qui savent comment avoir un équivalent du main() en ruby ? C'est rarement par le sarcasme qu'on élève son âme.Le jus de la vigne clarifie l'esprit et l'entendement. De quoi souffres-tu ? De l'irréel intact dans le réel dévasté ? N'oubliez pas d'ajouter un [RESOLU] si votre problème est réglé.ᥟathé൭о Hors ligne grim7reaper Re : /* Topic des codeurs [8] */ Un peu comme en Python : if __FILE__ == $PROGRAM_NAME # Put "main" code here end Ou if __FILE__ == $0 # Put "main" code here end Déjà j'ai une erreur à la compilation : pendu.c: In function ‘choixMot’: pendu.c:131:5: warning: function returns address of local variable [enabled by default] Normal. Tout est dans le message.mot est déclaré dans choixMot, c’est une variable locale. Elle cesse d’exister à la sortie de la fonction, et toi tu renvoies son adresse (les tableaux, pour des raisons de performances, sont toujours passé par adresse, pas par valeur). Donc tu renvoies l’adresse d’un truc inexistant, et tu l’utilises donc forcément ça va mal se passer (d’où ton comportement bizarre). Tu as trois solutions : - déclarer mot en static, très moche ici. On va éviter hein :] - allocation dynamique de mot. - passer mot en argument (comme il est passé par adresse, il sera bien modifé). Dernière modification par grim7reaper (Le 18/11/2012, à 14:25) Hors ligne nathéo Re : /* Topic des codeurs [8] */ Merci grim, mais du coup les instruction sont placés avant le end, non ? C'est rarement par le sarcasme qu'on élève son âme.Le jus de la vigne clarifie l'esprit et l'entendement. De quoi souffres-tu ? De l'irréel intact dans le réel dévasté ? N'oubliez pas d'ajouter un [RESOLU] si votre problème est réglé.ᥟathé൭о Hors ligne tshirtman Re : /* Topic des codeurs [8] */ ça semble évident oui… Hors ligne :!pakman Re : /* Topic des codeurs [8] */ ... Hors ligne nathéo Re : /* Topic des codeurs [8] */ Parfois suivre la logique peut jouer un mauvais tour... Dernière modification par nathéo (Le 18/11/2012, à 15:23) C'est rarement par le sarcasme qu'on élève son âme.Le jus de la vigne clarifie l'esprit et l'entendement. De quoi souffres-tu ? De l'irréel intact dans le réel dévasté ? N'oubliez pas d'ajouter un [RESOLU] si votre problème est réglé.ᥟathé൭о Hors ligne xapantu Re : /* Topic des codeurs [8] */ xapantu a écrit :afilmore a écrit : Il y a parfois quelques bugs dans les bindings, toujours quelques warnings pendant la compilation du code C mais ça c'est rien par rapport aux messages de "déprécation" des mecs de GTK+. Boarf, ça se discute, une fois que tu as quelques dizaines de milliers de lignes de code, il y a quand même pas mal de warning Pas si tu codes proprement… Tu peux en avoir quelques uns ouais, si ta ligne de compil’ est super stricte, mais sinon c’est que tu codes de manière un peu à l’arrache. Je parlais du code en C généré par valac, pas de code directement en C, hein (à moins que toi aussi ?) Hors ligne Rolinh Re : /* Topic des codeurs [8] */ Parfois suivre la logique peut jouer un mauvais tour... Ça ne concerne même pas la logique là, juste le bon sens. C'est comme si tu demandais si tu devais mettre le contenu de tes fonctions C après la paire d'accolades... Blog "If you put a Unix shell to your ear, do you hear the C ?" Hors ligne grim7reaper Re : /* Topic des codeurs [8] */ grim7reaper a écrit :xapantu a écrit : Boarf, ça se discute, une fois que tu as quelques dizaines de milliers de lignes de code, il y a quand même pas mal de warning Pas si tu codes proprement… Tu peux en avoir quelques uns ouais, si ta ligne de compil’ est super stricte, mais sinon c’est que tu codes de manière un peu à l’arrache. Je parlais du code en C généré par valac, pas de code directement en C, hein (à moins que toi aussi ?) Non j’étais à côté de la plaque, j’ai répondu un peu vite ^^ Hors ligne The Uploader Re : /* Topic des codeurs [8] */ Merci grim, mais du coup les instruction sont placés avant le end, non ? Juste ce qu'il faut pour lancer le bordel est dans le if qui sert de "bootstrap", ça t'empêche pas d'être propre. (just sayin') Passer de Ubuntu 10.04 à Xubuntu 12.04 LTS Archlinux + KDE sur ASUS N56VV. ALSA, SysV, DBus, Xorg = Windows 98 ! systemd, kdbus, ALSA + PulseAudio, Wayland = modern OS (10 years after Windows, but still...) ! Deal with it ! Hors ligne nathéo Re : /* Topic des codeurs [8] */ Au du coup pour remplacer argc/argv on fait comment exactement ? (mes recherches sur le net ne m'aident pas beaucoup ) C'est rarement par le sarcasme qu'on élève son âme.Le jus de la vigne clarifie l'esprit et l'entendement. De quoi souffres-tu ? De l'irréel intact dans le réel dévasté ? N'oubliez pas d'ajouter un [RESOLU] si votre problème est réglé.ᥟathé൭о Hors ligne afilmore Re : /* Topic des codeurs [8] */ Le défi actuel est toujours :bah voui, faut croire que les défis sont secondaires :-P grim7reaper a écrit : Je propose la mise au point d’une bibliothèque ou d’un module ou truc du genre (faut que le code soit réutilisable ailleurs quoi) qui propose des fonctions pour récupérer des quotes sur Internet. Vous êtes libre de choisir les sites que vous voulez gérez dans votre code (VDM, DTC, PEBKAC, etc.), au niveau de leur nombre (gestion d’un seul site ou de plusieurs) aussi et au niveau des options offertes (par exemple, pour VDM vous pouvez soit toujours tirer une quote aléatoire soit offrir la possibilité de choisir sa catégorie). Le seul truc que je fixe c’est le format de sortie : vos fonctions doivent fournir au final juste la quote en texte simple. Comme ça, c’est plus souple pour la réutilisation ou la combinaison avec d’autres trucs (pour donner des trucs comme ça par exemple). Défi relevé git clone git://github.com/afilmore/vdm-get.git cd vdm-get sudo apt-get install libglib2.0-dev libxml2-dev ./autogen.sh && ./configure --prefix=/usr && make ./src/vdm-get Il faut autoconf, automake, vala et surement d'autres trucs que j'ai oublié de préciser. Le programme récupère une page sur VDM, enregistre le contenu dans /tmp/vdm.html, et récupère un post aléatoire qui sera sauvegardé au format texte dans $HOME/vdm.txt Voili. Hors ligne The Uploader Re : /* Topic des codeurs [8] */ Au du coup pour remplacer argc/argv on fait comment exactement ? (mes recherches sur le net ne m'aident pas beaucoup ) ARGV Any command-line arguments after the program filename are available to your Ruby program in the global array ARGV. For instance, invoking Ruby as % ruby -w ptest "Hello World" a1 1.6180 yields an ARGV array containing ["Hello World", a1, 1.6180]. There's a gotcha here for all you C programmers---ARGV[0] is the first argument to the program, not the program name. The name of the current program is available in the global variable $0. Dernière modification par The Uploader (Le 18/11/2012, à 18:56) Passer de Ubuntu 10.04 à Xubuntu 12.04 LTS Archlinux + KDE sur ASUS N56VV. ALSA, SysV, DBus, Xorg = Windows 98 ! systemd, kdbus, ALSA + PulseAudio, Wayland = modern OS (10 years after Windows, but still...) ! Deal with it ! Hors ligne grim7reaper Re : /* Topic des codeurs [8] */ Au du coup pour remplacer argc/argv on fait comment exactement ? (mes recherches sur le net ne m'aident pas beaucoup ) Toi, soit tu cherches pas vraiment, soit il faut vraiment que tu apprennes à utiliser un moteur de recherche -___-" Comme en Perl (sans les sigil), avec le tableau ARGV. Édit : grilled, de peu, mais grilled. @afilmore : je vais jeter un œil Ça serait mieux qu’il affiche sur stdout plutôt que d’écrire dans un fichier (si on veut vraiment un fichier, suffit de faire une redirection). Édit : hum /tmp/vdm.html:161: element div: validity error : ID ad_leaderboard already defined <div id="ad_leaderboard"><div class="leaderboard"><script type="text/javascript" ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' w.amazon.fr/LAgenda-VDM-2012-2013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' M-2012-2013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' 013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' /dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' 761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp=1642&sr ^ /tmp/vdm.html:219: HTML parser error : htmlParseEntityRef: expecting ';' ding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp=1642&sr=8-2&creative ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' w.amazon.fr/LAgenda-VDM-2012-2013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' M-2012-2013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' 013-Didier-Guedj/dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' /dp/2350761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' 761231/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp=1642&sr ^ /tmp/vdm.html:221: HTML parser error : htmlParseEntityRef: expecting ';' ding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338913477&camp=1642&sr=8-2&creative ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' http://www.amazon.fr/Vie-merde-Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' ie-merde-Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' -Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' /dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' 42773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp=1642&sr ^ /tmp/vdm.html:225: HTML parser error : htmlParseEntityRef: expecting ';' TF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp=1642&sr=8-1-spell&creative ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' http://www.amazon.fr/Vie-merde-Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' ie-merde-Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' -Tome-En-bagnole/dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' /dp/287442773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' 42773X/?_encoding=UTF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp=1642&sr ^ /tmp/vdm.html:226: HTML parser error : htmlParseEntityRef: expecting ';' TF8&tag=viedemerd-21&linkCode=ur2&qid=1338971760&camp=1642&sr=8-1-spell&creative ^ /tmp/vdm.html:245: element div: validity error : ID wrapper already defined <div id="wrapper"> ^ Dernière modification par grim7reaper (Le 18/11/2012, à 19:05) Hors ligne afilmore Re : /* Topic des codeurs [8] */ Oui, en plus des warnings de vala, on a le droit aux warnings de libxml2. On peut pas spécifier d'options avec parse_file : http://unstable.valadoc.org/#!api=libxm … parse_file Sinon, il faut utiliser read_doc : http://unstable.valadoc.org/#!api=libxm … c.read_doc Avec comme options, NOERROR, NOWARNING : http://unstable.valadoc.org/#!api=libxm … rserOption Mais dans ce cas, on doit lire le fichier avant dans un buffer : read_doc (string cur, string url, string? encoding = null, int options = 0) "cur" doit être un buffer, pas le nom du fichier html comme avec parse_file. Voila ce que ça donne : https://github.com/afilmore/vdm-get/com … c69f695201 Pour une raison étrange j'ai des caractères non reconnus dans le terminal, alors que dans le fichier $HOME/vdm.txt les caractères accentués posent pas de problèmes. Et une solution vite fait mal fait pour afficher le résultat dans un terminal sans problème de caractères accentués : https://github.com/afilmore/vdm-get/com … 0542b3367d Aujourd'hui, comme depuis toujours, je ne suis pas très doué pour tout ce qui touche à l'informatique. C'est mon ex-copine qui a dû m'expliquer comment mettre mon profil Facebook en "célibataire". VDM Excellant Dernière modification par afilmore (Le 18/11/2012, à 22:29) Hors ligne nathéo Re : /* Topic des codeurs [8] */ J'ai une autre question Quand un argument est contenu dans l'une des case d'argv, il est de type char non ? Du coup, il faut le convertir en entier, si on veut s'en servir comme chiffre. Seulement quand je fais des recherches sur le net la solution que je trouve est de faire un integer(string) Mais d'après mes test je n'ai pas l'impression que ça fonctionne avec une case de tableau... C'est rarement par le sarcasme qu'on élève son âme.Le jus de la vigne clarifie l'esprit et l'entendement. De quoi souffres-tu ? De l'irréel intact dans le réel dévasté ? N'oubliez pas d'ajouter un [RESOLU] si votre problème est réglé.ᥟathé൭о Hors ligne xapantu Re : /* Topic des codeurs [8] */ Ça, ça va probablement te renvoyer le int, pas te modifier ton tableau Hors ligne
What you're referring to is called "constrained optimization". This is a really well-studied branch of evolutionary computation, and you can find dozens of resources describing common techniques for solving these problems. Carlos Coello-Coello does a tutorial on the subject each year at GECCO, the primary EC conference in the field. I found slides from one such session available here (ftp://ftp.cs.bham.ac.uk/.snapshot/nightly.1/pub/authors/W.B.Langdon/biblio/gecco2008/docs/p2445.pdf). You might find that a useful jumping-off point for further study. In short, there are two high-level approaches that are by far the most common: penalization and repair. In penalization, you decrease the fitness of solutions by some function of how badly they violate the constraints. In repair, you actually modify infeasible solutions directly to attempt to move them into the feasible region. Penalization works well if you can get the penalty scheme right, but that can be a tricky problem to solve, and it's essentially always problem dependent. One option is to just mess around with your fitness function and figure out the right amount to penalize so that "good" infeasible solutions get just enough fitness to not die immediately but not so much that they become preferred over feasible solutions. Another method is to just replace the selection criteria to account for infeasible solutions directly. That is, instead of if fitness(p1) > fitness(p2): # p1 is better else: # p2 is better you have something like if (is_feasible(p1) and is_feasible(p2)) or (not is_feasible(p1) and not is_feasible(p2)): if fitness(p1) > fitness(p2): # p1 is better else: # p2 is better else: if is_feasible(p1): # p1 is better else: # p2 is better This explicitly handles all the ways of comparing feasible and infeasible solutions, and uses their fitness values only to break ties. Now it doesn't matter so much if the raw fitness value of an infeasible solution is higher than a feasible solution due to weirdness in your penalization scheme. Repair is often preferable if you can do it (it isn't always possible to build an effective repair operator). By repair, we mean you take the infeasible solution and actually modify it so that it falls within your feasible region. This could be as simple as just thresholding the value within the allowed range, but more often will require some sort of domain specific or heuristic search. For example, if you have a knapsack problem and a solution has too much weight in the knapsack, a repair operator might randomly start throwing items out of the knapsack until the total weight was less than the capacity. A better operator might bias the removal of items toward those with the highest weight/value ratio. It's usually beneficial for the repair to have some randomness involved -- you may want to bias things slightly, but you don't want to always do a greedy repair. There are other approaches as well. One recent innovation has been to treat each constraint as a separate objective function and employ multi-objective optimization algorithms to find a range of trade-off solutions. For some problems, this has been shown to be effective, but I think the general rule of thumb should probably be to first see what can be done with the simpler methods like penalization and repair.
When trying to refresh a materialized view based on a query that uses a mssql table (using the SqlServer Gateway), I run into the following error: After some investigation I found this was caused by a “NOT NULL” constraint on the mview-table. A small scenario: ORA-12008: error in materialized view refresh path ORA-01400: cannot insert NULL into (%s) ORA-02063: preceding line from MSSQLDB1 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2537 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2743 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2712 STEP 1: Create a Materialized View When trying to refresh a materialized view based on a query that uses a mssql table (using the SqlServer Gateway), I run into the following error: After some investigation I found this was caused by a “NOT NULL” constraint on the mview-table. A small scenario: SQL> CREATE MATERIALIZED VIEW COUNTY_IS_NULL 2 REFRESH FORCE ON DEMAND 3 AS 4 SELECT a.agreementnum 5 ,a.county county_id 6 ,a.createddate agree_cre_date 7 FROM bmssa.EXU_AGREEMENTTABLE@mssqldb1 a 8 WHERE a.agreementnum = ' E00000001'; Materialized view created. STEP 2: View Content of the Materialized View (County_Id is null!) SQL> SQL> SELECT * FROM county_is_null; AGREEMENTN COUNTY_ID AGREE_CRE ---------- ---------- --------- E00000001 05-JUL-07   SQL> SQL> SELECT NVL(county_id, 0) FROM county_is_null; NVL(COUNTY ---------- 0 STEP 3: Refresh the Materialized View SQL> SQL> BEGIN 2 dbms_mview.refresh('COUNTY_IS_NULL'); 3 END; 4 / BEGIN * ERROR at line 1: ORA-12008: error in materialized view refresh path ORA-01400: cannot insert NULL into (%s) ORA-02063: preceding line from MSSQLDB1 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2537 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2743 ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2712 ORA-06512: at line 2 STEP 4: Disable the NOT NULL Constraint on COUNTY_ID SQL> SQL> SELECT constraint_name, search_condition FROM user_constraints c WHERE c.table_name='COUNTY_IS_NULL'; CONSTRAINT_NAME SEARCH_CONDITION ------------------------------ ------------------------------------------------- SYS_C0029024 "AGREEMENTNUM" IS NOT NULL SYS_C0029025 "COUNTY_ID" IS NOT NULL SYS_C0029026 "AGREE_CRE_DATE" IS NOT NULL SQL> SQL> ALTER TABLE county_is_null MODIFY CONSTRAINT sys_c0029025 DISABLE; Table altered. STEP 5: Refresh the Materialized View SQL> SQL> BEGIN 2 dbms_mview.refresh('COUNTY_IS_NULL'); 3 END; 4 / PL/SQL procedure successfully completed. SQL>
Routers Resource routing allows you to quickly declare all of the common routes for a given resourceful controller. Instead of declaring separate routes for your index... a resourceful route declares them in a single line of code. Some Web frameworks such as Rails provide functionality for automatically determining how the URLs for an application should be mapped to the logic that deals with handling incoming requests. REST framework adds support for automatic URL routing to Django, and provides you with a simple, quick and consistent way of wiring your view logic to a set of URLs. Usage Here's an example of a simple URL conf, that uses SimpleRouter. from rest_framework import routers router = routers.SimpleRouter() router.register(r'users', UserViewSet) router.register(r'accounts', AccountViewSet) urlpatterns = router.urls There are two mandatory arguments to the register() method: prefix- The URL prefix to use for this set of routes. viewset- The viewset class. Optionally, you may also specify an additional argument: base_name- The base to use for the URL names that are created. If unset the basename will be automatically generated based on themodelorquerysetattribute on the viewset, if it has one. Note that if the viewset does not include amodelorquerysetattribute then you must setbase_namewhen registering the viewset. The example above would generate the following URL patterns: URL pattern: ^users/$Name:'user-list' URL pattern: ^users/{pk}/$Name:'user-detail' URL pattern: ^accounts/$Name:'account-list' URL pattern: ^accounts/{pk}/$Name:'account-detail' Note: The base_name argument is used to specify the initial part of the view name pattern. In the example above, that's the user or account part. Typically you won't need to specify the base-name argument, but if you have a viewset where you've defined a custom get_queryset method, then the viewset may not have a .queryset attribute set. If you try to register that viewset you'll see an error like this: 'base_name' argument not specified, and could not automatically determine the name from the viewset, as it does not have a '.queryset' attribute. This means you'll need to explicitly set the base_name argument when registering the viewset, as it could not be automatically determined from the model name. Extra link and actions Any methods on the viewset decorated with @detail_route or @list_route will also be routed.For example, given a method like this on the UserViewSet class: from myapp.permissions import IsAdminOrIsSelf from rest_framework.decorators import detail_route class UserViewSet(ModelViewSet): ... @detail_route(methods=['post'], permission_classes=[IsAdminOrIsSelf]) def set_password(self, request, pk=None): ... The following URL pattern would additionally be generated: URL pattern: ^users/{pk}/set_password/$Name:'user-set-password' For more information see the viewset documentation on marking extra actions for routing. API Guide SimpleRouter This router includes routes for the standard set of list, create, retrieve, update, partial_update and destroy actions. The viewset can also mark additional methods to be routed, using the @detail_route or @list_route decorators. URL Style HTTP Method Action URL Name {prefix}/ GET list {basename}-list POST create {prefix}/{methodname}/ GET, or as specified by `methods` argument `@list_route` decorated method {basename}-{methodname} {prefix}/{lookup}/ GET retrieve {basename}-detail PUT update PATCH partial_update DELETE destroy {prefix}/{lookup}/{methodname}/ GET, or as specified by `methods` argument `@detail_route` decorated method {basename}-{methodname} By default the URLs created by SimpleRouter are appended with a trailing slash.This behavior can be modified by setting the trailing_slash argument to False when instantiating the router. For example: router = SimpleRouter(trailing_slash=False) Trailing slashes are conventional in Django, but are not used by default in some other frameworks such as Rails. Which style you choose to use is largely a matter of preference, although some javascript frameworks may expect a particular routing style. The router will match lookup values containing any characters except slashes and period characters. For a more restrictive (or lenient) lookup pattern, set the lookup_value_regex attribute on the viewset. For example, you can limit the lookup to valid UUIDs: class MyModelViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet): lookup_field = 'my_model_id' lookup_value_regex = '[0-9a-f]{32}' DefaultRouter This router is similar to SimpleRouter as above, but additionally includes a default API root view, that returns a response containing hyperlinks to all the list views. It also generates routes for optional .json style format suffixes. URL Style HTTP Method Action URL Name [.format] GET automatically generated root view api-root {prefix}/[.format] GET list {basename}-list POST create {prefix}/{methodname}/[.format] GET, or as specified by `methods` argument `@list_route` decorated method {basename}-{methodname} {prefix}/{lookup}/[.format] GET retrieve {basename}-detail PUT update PATCH partial_update DELETE destroy {prefix}/{lookup}/{methodname}/[.format] GET, or as specified by `methods` argument `@detail_route` decorated method {basename}-{methodname} As with SimpleRouter the trailing slashes on the URL routes can be removed by setting the trailing_slash argument to False when instantiating the router. router = DefaultRouter(trailing_slash=False) Custom Routers Implementing a custom router isn't something you'd need to do very often, but it can be useful if you have specific requirements about how the your URLs for your API are structured. Doing so allows you to encapsulate the URL structure in a reusable way that ensures you don't have to write your URL patterns explicitly for each new view. The simplest way to implement a custom router is to subclass one of the existing router classes. The .routes attribute is used to template the URL patterns that will be mapped to each viewset. The .routes attribute is a list of Route named tuples. The arguments to the Route named tuple are: url: A string representing the URL to be routed. May include the following format strings: {prefix}- The URL prefix to use for this set of routes. {lookup}- The lookup field used to match against a single instance. {trailing_slash}- Either a '/' or an empty string, depending on thetrailing_slashargument. mapping: A mapping of HTTP method names to the view methods name: The name of the URL as used in reverse calls. May include the following format string: {basename}- The base to use for the URL names that are created. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Note that the suffix argument is reserved for identifying the viewset type, used when generating the view name and breadcrumb links. Customizing dynamic routes You can also customize how the @list_route and @detail_route decorators are routed.To route either or both of these decorators, include a DynamicListRoute and/or DynamicDetailRoute named tuple in the .routes list. The arguments to DynamicListRoute and DynamicDetailRoute are: url: A string representing the URL to be routed. May include the same format strings as Route, and additionally accepts the {methodname} and {methodnamehyphen} format strings. name: The name of the URL as used in reverse calls. May include the following format strings: {basename}, {methodname} and {methodnamehyphen}. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Example The following example will only route to the list and retrieve actions, and does not use the trailing slash convention. from rest_framework.routers import Route, DynamicDetailRoute, SimpleRouter class CustomReadOnlyRouter(SimpleRouter): """ A router for read-only APIs, which doesn't use trailing slashes. """ routes = [ Route( url=r'^{prefix}$', mapping={'get': 'list'}, name='{basename}-list', initkwargs={'suffix': 'List'} ), Route( url=r'^{prefix}/{lookup}$', mapping={'get': 'retrieve'}, name='{basename}-detail', initkwargs={'suffix': 'Detail'} ), DynamicDetailRoute( url=r'^{prefix}/{lookup}/{methodnamehyphen}$', name='{basename}-{methodnamehyphen}', initkwargs={} ) ] Let's take a look at the routes our CustomReadOnlyRouter would generate for a simple viewset. views.py: class UserViewSet(viewsets.ReadOnlyModelViewSet): """ A viewset that provides the standard actions """ queryset = User.objects.all() serializer_class = UserSerializer lookup_field = 'username' @detail_route() def group_names(self, request): """ Returns a list of all the group names that the given user belongs to. """ user = self.get_object() groups = user.groups.all() return Response([group.name for group in groups]) urls.py: router = CustomReadOnlyRouter() router.register('users', UserViewSet) urlpatterns = router.urls The following mappings would be generated... URL HTTP Method Action URL Name /users GET list user-list /users/{username} GET retrieve user-detail /users/{username}/group-names GET group_names user-group-names For another example of setting the .routes attribute, see the source code for the SimpleRouter class. Advanced custom routers If you want to provide totally custom behavior, you can override BaseRouter and override the get_urls(self) method. The method should inspect the registered viewsets and return a list of URL patterns. The registered prefix, viewset and basename tuples may be inspected by accessing the self.registry attribute. You may also want to override the get_default_base_name(self, viewset) method, or else always explicitly set the base_name argument when registering your viewsets with the router. Third Party Packages The following third party packages are also available. DRF Nested Routers The drf-nested-routers package provides routers and relationship fields for working with nested resources. wq.db The wq.db package provides an advanced Router class (and singleton instance) that extends DefaultRouter with a register_model() API. Much like Django's admin.site.register, the only required argument to app.router.register_model is a model class. Reasonable defaults for a url prefix and viewset will be inferred from the model and global configuration. from wq.db.rest import app from myapp.models import MyModel app.router.register_model(MyModel)
This is somewhat related to the question posed in this question but I'm trying to do this with an abstract base class. For the purposes of this example lets use these models: class Comic(models.Model): name = models.CharField(max_length=20) desc = models.CharField(max_length=100) volume = models.IntegerField() ... <50 other things that make up a Comic> class Meta: abstract = True class InkedComic(Comic): lines = models.IntegerField() class ColoredComic(Comic): colored = models.BooleanField(default=False) In the view lets say we get a reference to an InkedComic id since the tracer, err I mean, inker is done drawing the lines and it's time to add color. Once the view has added all the color we want to save a ColoredComic to the db. Obviously we could do inked = InkedComic.object.get(pk=ink_id) colored = ColoredComic() colored.name = inked.name etc, etc. But really it'd be nice to do: colored = ColoredComic(inked_comic=inked) colored.colored = True colored.save() I tried to do class ColoredComic(Comic): colored = models.BooleanField(default=False) def __init__(self, inked_comic = False, *args, **kwargs): super(ColoredComic, self).__init__(*args, **kwargs) if inked_comic: self.__dict__.update(inked_comic.__dict__) self.__dict__.update({'id': None}) # Remove pk field value but it turns out the ColoredComic.objects.get(pk=1) call sticks the pk into the inked_comic keyword, which is obviously not intended. (and actually results in a int does not have a exception)dict My brain is fried at this point, am I missing something obvious, or is there a better way to do this?
According to Django docs Django can be configured with FastCGI. Here's our setup (note that I don't control Apache setup at my workplace and I'm required to use FastCGI since we already have it, rather than install WSGI): The fcgi-relevant part of our apache conf is: LoadModule fastcgi_module modules/mod_fastcgi.so # IPC directory location # FastCgiIpcDir "/path/to/FastCGI_IPC" # General CGI config # FCGIConfig -idle-timeout 30 -listen-queue-depth 4 -maxProcesses 40 -minProcesses 10 -maxClassProcesses 2 -killInterval 60 -updateInterval 20 -singleThreshhold 0 -multiThreshhold 10 -processSlack 5 -failed-restart-delay 10 # To use FastCGI scripts: # AddHandler fastcgi-script .fcg .fcgi .fpl FastCgiServer "/path/to/my/django.fcgi" -listen-queue-depth 24 -processes 8 -restart-delay 1 -min-server-life 2 -failed-restart-delay 10 The last line should be most relevant. My django.fcgi is: #!/path/to/python-2.5/bin/python import sys, os open('pid', "w").write("%d" % (os.getpid())) # Add a custom Python path. sys.path.insert(0, "/path/to/django/") sys.path.insert(0, "/path/to/python2.5/site-packages/") # Switch to the directory of your project. (Optional.) os.chdir("/path/to/django/site") # Set the DJANGO_SETTINGS_MODULE environment variable. os.environ['DJANGO_SETTINGS_MODULE'] = "site.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") According to restarting the fcgi should be as simple as touch django.fcgi but for us it doesn't result in a restart (which is why I'm writing the pid's to files). Why doesn't the touch django.fcgi work?
help me to understand these errors Nov 25, 2013 at 8:52pm UTC machine.cpp: In member function `bool bottleStack::pop(int)': machine.cpp:24: invalid types `int[int]' for array subscript machine.cpp: In member function `bool bottleStack::peek(int)': machine.cpp:33: invalid types `int[int]' for array subscript machine.cpp: At global scope: machine.cpp:55: ISO C++ forbids declaration of `linkedstack' with no type machine.cpp:55: no `int linkedStack::linkedstack(int)' member function declared in class `linkedStack' machine.cpp: In member function `int linkedStack::linkedstack(int)': machine.cpp:56: invalid conversion from `int*' to `int' machine.cpp: At global scope: machine.cpp:60: declaration of `void linkedStack::pushlink(int)' outside of class is not definition machine.cpp:61: parse error before `{' token machine.cpp:63: syntax error before `->' token machine.cpp:64: syntax error before `->' token machine.cpp:65: ISO C++ forbids declaration of `front' with no type machine.cpp:65: `ptr' was not declared in this scope machine.cpp:67: parse error before `}' token machine.cpp:70: syntax error before `::' token machine.cpp:70: `DataType' was not declared in this scope machine.cpp:70: `poppedElement' was not declared in this scope machine.cpp:71: ISO C++ forbids declaration of `poplink' with no type machine.cpp:71: syntax error before `{' token machine.cpp:74: `DataType' was not declared in this scope machine.cpp:74: non-template type `Node' used as a template machine.cpp:74: ISO C++ forbids declaration of `ptr' with no type machine.cpp:74: base operand of `->' is not a pointer machine.cpp:75: ISO C++ forbids declaration of `poppedElement' with no type machine.cpp:75: request for member `info' in `*ptr', which is of non-aggregate type `int' machine.cpp:76: syntax error before `->' token machine.cpp:85: parse error before `{' token machine.cpp:88: ISO C++ forbids declaration of `topElement' with no type machine.cpp:88: base operand of `->' is not a pointer machine.cpp:89: parse error before `return' machine.cpp:92: non-member function `bool isEmptylink()' cannot have `const' method qualifier machine.cpp:93: parse error before `{' token main program file: #include <iostream> #include <string> #include <sstream> #include "Machines2.h" using namespace std; int main() { bottleStack bottle; linkedStack Can; bool x,y; char type; int selection; cout<<"\n Menu (order 1-4, 5 exit)"<<"\n"; cout<<"\n 0thers: Exit"; cout<<"\n 1: Add another soda to the machine "; cout<<"\n 2: Buy a soda "; cout<<"\n 3: Find a soda by name "; cout<<"\n 4: How much is the soda "; cout<<"\n 5: QUIT"; cout<<"\n\n Enter a choice : "; cin>>selection; while(selection != 5) { if (selection = 1) { cout << "How many bottle of soda would you like to add number of "<< endl; cin >> x; bottle.push(x); cout << "How many can of soda would you like to add number of " << endl; cin >> y; Can.pushlink(y); } if (selection = 2) { cout << "how many drinks would you like ?" << endl; cin >> x; cout << "Would you like a bottle(B) or Can(C)?"<< endl; cin >> type; if (type = 'B') { bottle.pop(x); } else Can.poplink(x); } if (selection = 5) { cout << "quit" <<endl; } } return 0; } header file: #include <iostream> #include <string> using namespace std; class bottleStack // Array Stack { public: bottleStack(); void push(int pushelement); bool pop( int popelement ); bool peek(int peekelement); bool isEmpty( ) const; void makeEmpty( ); private: int elements; int top; }; // Linked List struct Node { int info; int *next; }; class linkedStack // Array Stack { public: linkedStack(); void pushlink( int elementToPush ); bool poplink( int poppedElement ); bool peeklink (int topElement ); bool isEmptylink( ) const; void makeEmptylink( ); private: int *front; int *back; int header; }; implementation file //Machine.cpp #include <iostream> #include "Machines2.h" bottleStack::bottleStack( ):elements(2),top(-1) { } void bottleStack::push( int pushelement ) { elements = pushelement; top++; } bool bottleStack::pop( int poppedElement ) { if (top == -1) return false; for (int x = 0; x < elements; x++) { poppedElement = elements[top]; // stores popped element into variable top--; } } bool bottleStack::peek( int topElement ) { if (top == -1) return false; topElement = elements[top]; return true; } void bottleStack::makeEmpty() { top = -1; } bool bottleStack::isEmpty() const { if (top == -1) return true; return false; } //LinkedList Stack linkedStack::linkedstack( int front ) { front = back = &header; } void linkedStack::pushlink( int elementToPush ); { Node<DataType>*ptr = new Node<DataType>; ptr->info = elementToPush; front->next=ptr; front = ptr; } template <class DataType> bool linkedlistStack<DataType>::poplink( DataType & poppedElement ) { if (front == back) return false; // returns false if list is empty Node<DataType>*ptr=front->next; poppedElement = ptr->info; // stores popped element into variable front->next=ptr->next; if (back == ptr) back = front; delete ptr; return true; } template <class DataType> bool peeklink( DataType & topElement ); { if (front == back) return false; topElement = front->next->info; return true; } template <class DataType> bool isEmptylink( ) const; // returns true if list is empty otherwise, returns false { if (front == back) return true; return false; } template <class DataType> void makeEmptylink( ) { DataType temp; while(poplink(temp)); } and this is a brief summary of what the program is supposed to do You are to implement a program that uses both a Array-based Stack and a Linked Stack (the choice of how to use them is up to you!). The program will mimic two typical soda machines – one that holds cans and one that holds bottles. As you know, sodas are put into the machine in LIFO order. Your program will stock the machines with sodas, and then allow the user to purchase sodas and/or retrieve information about the machines, using the appropriate Stack functions. Nov 25, 2013 at 9:30pm UTC Some of the errors: 1 2 machine.cpp: In member function `bool bottleStack::pop(int)': machine.cpp:24: invalid types `int[int]' for array subscript elements is an int, not an array, so you cannot subscript an int 1 2 3 4 linkedStack::linkedstack( int front ) { front = back = &header; } You did not define a constructor that takes an int. Also, which front are you modifying, the one from the input, or the member of the class On line 60 void linkedStack::pushlink( int elementToPush ); - no semicolon at the end Try to read the error messages, in order, since some errors might be caused by something else Last edited on Nov 25, 2013 at 9:31pm UTC Nov 25, 2013 at 9:36pm UTC Line 20 and 21 of implementation file has no bracket for if statement 1 2 if (top == -1) return false; The same error in the implementation file with lines 31,32 46,47 72,73 77,78 86.87 94,95 Nov 25, 2013 at 11:26pm UTC hey guys thanks for the help. I tried to change what you guys suggested and then I just attempted to take the program in another direction and got these errors: program3.cpp: In function `int main()': program3.cpp:28: template-argument `main()::soda' uses local type `main()::soda ' program3.cpp:28: ISO C++ forbids declaration of `bottle' with no type program3.cpp:30: template-argument `main()::soda' uses local type `main()::soda ' program3.cpp:30: ISO C++ forbids declaration of `Can' with no type program3.cpp:62: request for member `push' in `bottle', which is of non-aggregate type `int' program3.cpp:74: request for member `pop' in `bottle', which is of non-aggregate type `int' program3.cpp:77: request for member `poplink' in `Can', which is of non-aggregate type `int' program3.cpp:85: parse error before `return' program3.cpp:33: confused by earlier errors, bailing out machine.cpp:5: syntax error before `::' token machine.cpp:5: ISO C++ forbids declaration of `Stack' with no type machine.cpp: In function `int Stack()': machine.cpp:6: only constructors take base initializers machine.cpp:6: confused by earlier errors, bailing out header file: #include "machine.cpp" #include <iostream> #include<string> using namespace std; template <class DataType> class bottleArrayStack // Array Stack { public: Stack( ); void push( DataType elementToPush ); // removes an element from the top of the stack and returns it in poppedElement; // returns false if called on an empty stack; otherwise, returns true bool pop( DataType & poppedElement ); // returns the element at the top of the stack in topElement without removing it // returns false if called on an empty stack; otherwise, returns true bool peek( DataType & topElement ); bool isEmpty( ) const; // returns true if the stack is empty; // otherwise, returns false void makeEmpty( ) private: Array<DataType> elements; int top; }; // Linked List template<class DataType> struct Node { DataType info; Node<DataType>*next; }; template <class DataType> class linkedlistStack // Array Stack { public: Stacklink( ); void pushlink( DataType elementToPush ); // removes an element from the top of the stack and returns it in poppedElement; // returns false if called on an empty stack; otherwise, returns true bool poplink( DataType & poppedElement ); // returns the element at the top of the stack in topElement without removing it // returns false if called on an empty stack; otherwise, returns true bool peeklink( DataType & topElement ); bool isEmptylink( ) const; // returns true if the stack is empty; // otherwise, returns false void makeEmptylink( ) private: Node<DataType>*front; Node<DataType>*back; Node<DataType>header; }; implementation file: #include "Machines.h" #include<iostream> //Machine.cpp template <class DataType> bottleArrayStack<DataType>::Stack( ):elements(2),top(-1) { } template <class DataType> void bottleArrayStack<DataType>::push( DataType elementToPush ) { if(++top== elements.length()) // checks to see if array is full elements.changeSize(elements.length()<<1); // doubles array if it's full elements[top] = elementToPush; } template <class DataType> bool bottleArrayStack<DataType>::pop( DataType & poppedElement ) { if (top == -1) {return false;} // returns false if array is empty poppedElement = elements[top]; // stores popped element into variable top--; } template <class DataType> bool bottleArrayStack<DataType>::peek( DataType & topElement ) { if (top == -1) { return false;} topElement = elements[top]; return true; } template <class DataType> void bottleArrayStack<DataType>::makeEmpty() { top = -1; } template <class DataType> bool bottleArrayStack<DataType>::isEmpty() const { if (top == -1) { return true;} return false; } //LinkedList Stack template <class DataType> linkedlistStack<DataType>::Stacklink( ) { front = back = &header; } template <class DataType> void linkedlistStack<DataType>::pushlink( DataType elementToPush ) { Node<DataType>*ptr = new Node<DataType>; ptr->info = elementToPush; front->next=ptr; front = ptr; } template <class DataType> bool linkedlistStack<DataType>::poplink( DataType & poppedElement ) { if (front == back) { return false; } // returns false if list is empty Node<DataType>*ptr=front->next; poppedElement = ptr->info; // stores popped element into variable front->next=ptr->next; if (back == ptr) back = front; delete ptr; return true; } template <class DataType> bool linkedlistStack<DataType>::peeklink( DataType & topElement ) { if (front == back) { return false;} topElement = front->next->info; return true; } template <class DataType> bool linkedlistStack<DataType>::isEmptylink( ) const // returns true if list is empty otherwise, returns false { if (front == back) { return true;} return false; } template <class DataType> void linkedlistStack<DataType>::makeEmptylink( ) { DataType temp; while(poplink(temp)); } main program: #include <iostream> #include <string> #include <sstream> #include "Machines.h" using namespace std; int main() { int x; char type; int selection; struct soda { string name; char type; float price; } drink; bottleStack<soda>bottle; linkedStack<soda>Can; do { cout<<"\n Menu (order 1-4, 5 exit)"<<"\n"; cout<<"\n 0thers: Exit"; cout<<"\n 1: Add another soda to the machine "; cout<<"\n 2: Buy a soda "; cout<<"\n 3: Find a soda by name "; cout<<"\n 4: How much is the soda "; cout<<"\n 5: QUIT"; cout<<"\n\n Enter a choice : "; cin >> selection; if (selection = 1) { cout << "What is the name of the soda you would like to add?"<< endl; cin >> drink.name; cout << "Is the soda a Can or a Bottle?"<< endl; cin >> drink.type; cout << "How much is the soda? (Value between $0.0 and $1.25)" << endl; cin >> drink.price; bottle.push(drink); } if (selection = 2) { cout << "Would you like a bottle(B) or Can(C)?"<< endl; cin >> type; if (type = 'B' || 'b') bottle.pop(drink); else Can.poplink(drink); } if (selection = 5) cout << "quit" << endl; } while(selection != 5) return 0; } Topic archived. No new replies allowed.
yann458 Probléme avec l'utilitaire Motion Bonjour, L'utilitaire Motion fonctionne bien avec ma webcam. Mais J'ai changer de webcam en passant à la marque Logitech c270 (edition limité) , l'utilitaire gucsview , à l'air de bin fonctionner. Par contre avec l'utilitaire motion , ça ne marche pas et aucune mise à jour de cet utilitaire. L'erreur cité : [1] Using palette MJPG (544x288) bytesperlines 0 sizeimage 170666 colorspace 00000008 [1] VIDIOC_G_JPEGCOMP not supported but it should be (does your webcam driver support this ioctl?) je peux répondre qu'à partir de 21h00. Merci de votre aide. :~$ motion -n motion.conf [0] Processing thread 0 - config file /home/eimayann/motion.conf [0] Motion 3.2.11 Started [0] ffmpeg LIBAVCODEC_BUILD 3412993 LIBAVFORMAT_BUILD 3415808 [0] Thread 1 is from /home/eimayann/motion.conf [1] Thread 1 started [0] motion-httpd/3.2.11 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8081 [1] cap.driver: "uvcvideo" [1] cap.card: "UVC Camera (046d:0825)" [1] cap.bus_info: "usb-0000:00:02.1-10" [1] cap.capabilities=0x04000001 [1] - VIDEO_CAPTURE [1] - STREAMING [1] Supported palettes: [1] 0: YUYV (YUV 4:2:2 (YUYV)) [1] 1: MJPG (MJPEG) [1] VIDIOC_TRY_FMT failed for format MJPG: [1] ioctl(VIDIOCGMBUF) - Error device does not support memory map [1] V4L capturing using read is deprecated! [1] Motion only supports mmap. [1] Could not fetch initial image from camera [1] Motion continues using width and height from config file(s) [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 8082 [1] Retrying until successful connection with camera [1] cap.driver: "uvcvideo" [1] cap.card: "UVC Camera (046d:0825)" [1] cap.bus_info: "usb-0000:00:02.1-10" [1] cap.capabilities=0x04000001 [1] - VIDEO_CAPTURE [1] - STREAMING [1] Supported palettes: [1] 0: YUYV (YUV 4:2:2 (YUYV)) [1] 1: MJPG (MJPEG) [1] index_format 2 Test palette MJPG (800x288) [1] Adjusting resolution from 800x288 to 544x288. [1] Using palette MJPG (544x288) bytesperlines 0 sizeimage 170666 colorspace 00000008 [1] VIDIOC_G_JPEGCOMP not supported but it should be (does your webcam driver support this ioctl?) [1] found control 0x00980900, "Brightness", range 0,255 [1] "Brightness", default 128, current 128 [1] found control 0x00980901, "Contrast", range 0,255 [1] "Contrast", default 32, current 32 [1] found control 0x00980902, "Saturation", range 0,255 [1] "Saturation", default 32, current 32 [1] found control 0x00980913, "Gain", range 0,255 [1] "Gain", default 0, current 0 [1] mmap information: [1] frames=4 [1] 0 length=170666 [1] 1 length=170666 [1] 2 length=170666 [1] 3 length=170666 [1] Using V4L2 [1] Camera has finally become available [1] Camera image has different width and height from what is in the config file. You should fix that [1] Restarting Motion thread to reinitialize all image buffers to new picture dimensions [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] Closing video device /dev/video0 [0] Motion thread 1 restart [1] Thread 1 started [1] cap.driver: "uvcvideo" [1] cap.card: "UVC Camera (046d:0825)" [1] cap.bus_info: "usb-0000:00:02.1-10" [1] cap.capabilities=0x04000001 [1] - VIDEO_CAPTURE [1] - STREAMING [1] Supported palettes: [1] 0: YUYV (YUV 4:2:2 (YUYV)) [1] 1: MJPG (MJPEG) [1] index_format 2 Test palette MJPG (544x288) [1] Using palette MJPG (544x288) bytesperlines 0 sizeimage 170666 colorspace 00000008 [1] VIDIOC_G_JPEGCOMP not supported but it should be (does your webcam driver support this ioctl?) [1] found control 0x00980900, "Brightness", range 0,255 [1] "Brightness", default 128, current 128 [1] found control 0x00980901, "Contrast", range 0,255 [1] "Contrast", default 32, current 32 [1] found control 0x00980902, "Saturation", range 0,255 [1] "Saturation", default 32, current 32 [1] found control 0x00980913, "Gain", range 0,255 [1] "Gain", default 0, current 0 [1] mmap information: [1] frames=4 [1] 0 length=170666 [1] 1 length=170666 [1] 2 length=170666 [1] 3 length=170666 [1] Using V4L2 [1] Resizing pre_capture buffer to 1 items Corrupt JPEG data: 1 extraneous bytes before marker 0xd2 [1] mjpegtoyuv420p: Corrupt image ... continue Corrupt JPEG data: 1 extraneous bytes before marker 0xd2 [1] mjpegtoyuv420p: Corrupt image ... continue Corrupt JPEG data: 1 extraneous bytes before marker 0xd2 [1] mjpegtoyuv420p: Corrupt image ... continue Corrupt JPEG data: 1 extraneous bytes before marker 0xd5 [1] mjpegtoyuv420p: Corrupt image ... continue Corrupt JPEG data: 1 extraneous bytes before marker 0xd3 [1] mjpegtoyuv420p: Corrupt image ... continue [0] httpd quit [0] httpd - Read from client [0] httpd Closing [0] httpd thread exit [1] Error capturing first image [1] Started stream webcam server in port 8082 Corrupt JPEG data: 1 extraneous bytes before marker 0xd2 [1] mjpegtoyuv420p: Corrupt image ... continue [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] Closing video device /dev/video0 [0] Motion terminating Hors ligne
= Introduction = Welcome, GNU Radio beginners. If you are reading this tutorial, you probably already have some very basic knowledge about how GNU Radio works, what it is and what it can do - and now you want to enter this exciting world of Open Source digital signal processing (DSP) yourself. This is a tutorial on how to write applications for GNU Radio in Python. It is no introduction to programming, software radio or signal processing, nor does it cover how to extend GNU Radio by creating new blocks or adding code to the source tree. If you have some background in the mentioned topics and are starting to work with GNU Radio, this probably is the correct tutorial for you. If you don't know what a Software Radio is or what a FIR filter does, you should probably go a few steps back and get a more solid background on signal processing theory. But don't let this discourage you - the best way to learn something is by trying it out. Although this tutorial is designed to make your introduction to GNU Radio as easy as possible, it is not a definitive guide. In fact, I might sometimes simply not tell the real truth to make explanations easier. I might even contradict myself in later chapters. Usage of brain power is still necessary to develop GNU Radio applications. = Preliminaries = Before you get started with this tutorial, make sure your GNU Radio installation is ready and working. You don't necessarily need a USRP, but some kind of source and sink (USRP or audio) is helpful, although not strictly required. If the GNU Radio examples work (such as dial_tone.py in gnuradio-examples/python/audio), you're ready to go. You should also have some background in programming - but don't worry if you've never programmed Python, it is a very easy language to learn. = Understanding flow graphs = Before we start banging out code, first we need to understand the most basic concepts about GNU Radio: flow graphs (as in graph theory) and blocks. Many GNU Radio applications contain nothing other than a flow graph. The nodes of such a graph are called blocks, and the data flows along the edges. Any actual signal processing is done in the blocks. Ideally, every block does exactly one job - this way GNU Radio stays modular and flexible. Blocks are written in C++; writing new blocks is not very difficult (but explained elsewhere). The data passing between blocks can be of any kind - practically any type of data you can define in C++ is possible. In practice, the most common data types are complex and real short or long integers and floating point values as most of the time, data passing from one block to the next will either be samples or bits. Examples In order to illuminate this diffuse topic a little, let's start with some examples: Low-pass filtered audio recorder:{{{ --- --- --------------| Mic - LPF - Record to file | --- --- -------------- }}} First, an audio signal from a microphone is recorded by your PCs sound card and converted into a digital signal. The samples are streamed to the next block, the low pass filter (LPF), which could be implemented as an FIR filter. The filtered signal is passed on to the final block, which records the filtered audio signal into a file. This is a simple, yet complete flow graph. The first and last block serve a special purpose: they operate as source and sink. Every flow graph needs at least one source and sink to be able to function. Dial tone generator:{{{ ----------------------| Sine generator (350Hz) -- ----------------------- | ---------- --- || Audio sink | -- | ----------------------- | ----------| Sine generator (440Hz) -- ----------------------- }}} This simple example is often called the "Hello World of GNU Radio". Other than the first example, it has two sources. The sink, on the other hand, has two inputs - in this case for the left and right channel of the sound card. Code for this example is available at gnuradio-examples/python/audio/dial_tone.py. QPSK Demodulator:{{{ ----------- -------------- ----------------| USRP Source - Frequency sync - Matched filter | ----------- -------------- ---------------+| COMPLEX SAMPLES -----------------+| Symbol demodulator | -----------------+| COMPLEX SYMBOLS --------------- --------------- ----------+| Source decoder - Channel decoder - Bit mapping | --------------+ --------------- -----------| BITS --------------+| Application | DATA --------------- }}} This example is a bit more sophisticated, but should look quite familiar to RF engineers. In this case, the source is a USRP which is connected to an antenna. This kind of source sends complex samples to the following blocks. The interesting part about this kind of flow graph is that the data types change during the flow graph: at first, complex baseband samples are passed along. Then, complex symbols are gathered from the signal. Next, these symbols are turned into bits which again are processed further. Finally, the decoded bits are passed to some application which makes use of the data. Walkie Talkie:{{{ ------------ ---------------- ------- ----------| USRP Source - NBFM Demodulator - Squelch - Audio Sink | ------------- ---------------- ------- ---------- ------------ -------------- ----------| Audio Source -------- NBFM Modulator ------- USRP Sink | ------------ -------------- ---------- }}} This applications consists of two separate flow graphs, both running in parallel. One of them deals with the Tx path, the other with the Rx path. This kind of application would require some extra code (outside the flow graphs) to mute one path while the other is active. Both flow graphs still require at least one source and sink, each. You can find a GNU Radio application that does this (only a bit more sophisticated) at gnuradio-examples/python/usrp/usrp_nbfm_ptt.py. Summary This concludes the chapter about flow graphs. Here's a quick summary about the most vital points you really need to know: All signal processing in GNU Radio is done through flow graphs. A flow graph consists of blocks. A block does one signal processing operation, such as filtering, adding signals, transforming, decoding, hardware access or many others. Data passes between blocks in various formats, complex or real integers, floats or basically any kind of data type you can define. Every flow graph needs at least one sink and source. = A FIRST WORKING CODE EXAMPLE = Next step is to find out how to write those flow graphs in real Python. Let's start by examining some code line-by-line. If you are familiar with Python, you can probably skip some of the explanations, but don't rush to the next section yet - the explanations are both for Python and GNU Radio beginners. The following code example represents the flow graph from example 2. It is actually a slightly modified version of the code example you can find in gnuradio-examples/python/audio/dial_tone.py. {{{ #!python #!/usr/bin/env python from gnuradio import gr from gnuradio import audio class my_top_block(gr.top_block): def init(self): gr.top_block.__init__(self) sample_rate = 32000 ampl = 0.1 src0 = gr.sig_source_f (sample_rate, gr.GR_SIN_WAVE, 350, ampl) src1 = gr.sig_source_f (sample_rate, gr.GR_SIN_WAVE, 440, ampl) dst = audio.sink (sample_rate, "") self.connect (src0, (dst, 0)) self.connect (src1, (dst, 1)) if name '__main__': try: my_top_block().run() except KeyboardInterrupt: pass }}} The first line should look familiar to anyone with some Unix or Linux background: It tells the shell that this file is a Python file and to use the Python interpreter to run this file. You need this line if you want to run this file directly from the command line. Lines 3 and 4 import necessary Python modules to run GNU Radio. The `import` command is similar to the `#include` directive in C/C++. Here, two modules from the gnuradio-package are imported: `gr` and `audio`. The first module, `gr`, is the basic GNU Radio module. You will always have to import this to run a GNU Radio application. The second loads audio device blocks. There are many GNU Radio modules, a short list of modules will be presented later on. Lines 6-17 define a class called `my_top_block` which is derived from another class, `gr.top_block`. This class is basically a container for the flow graph. By deriving from `gr.top_block`, you get all the hooks and functions you need to add blocks and connect them. Only one member function is defined for this class: the function `__init__()`, which is the constructor of this class. In the first line of this function (line 8), the parent constructor is called (in Python, this needs to be done explicitly. Most things in Python need to be done explicitly; in fact, this is one main Python principle). Next, two variables are defined: `sample_rate` and `ampl`. These will control sampling rate and amplitude of the signal generators. Before explaining the next lines, have another look at the sketched flow graph chart in the previous section: it consists of three blocks and two edges. The blocks are defined in lines 13-15: Two signal sources are generated (called `src0` and `src1`). These sources continuously create sine waves at given frequencies (350 and 440Hz) and a given sampling rate (here 32kHz). The amplitude is controlled by the ampl variable and set to 0.1. The prefix "f" of the block type `gr.sig_source_f` indicates the output is of type `float`, which is a good thing because the audio sink accepts floating point samples in the range between -1 and +1. These kind of things must be taken care of by the programmer: although GNU Radio does some checks to make sure the connections make sense, there is still some things that must be taken care of manually. For example, if you wanted to feed integer samples to `audio.sink`, GNU Radio would throw an error - but if you would set the amplitude in the above example to anything larger than 1, you would get a distorted signal without receiving an error. The signal sink is defined in line 15: `audio.sink()` returns a block which acts as a soundcard control and plays back any samples piped into it. As in the blocks beforehand, the sampling rate needs to be set explicitly, even though this was set already for the signal sources. GNU Radio cannot guess the correct sampling rate from the context, as it is not part of the information flow between blocks. Lines 16 and 17 connect the blocks. The general syntax for connecting blocks is `self.connect(block1, block2, block3, ...)` which would connect the output of `block1` with the input of `block2`, the output of `block2` with the input of `block3` and so on. You can connect as many blocks as you wish with one `connect()` call. Here, a special syntax is necessary because we want to connect `src0` with the first input of `dst` and `src1` with the second one. `self.connect (src0, (dst, 0))` does exactly this: it specifically connects `src0` to port 0 of `dst`. `(dst, 0)` is called a "tuple" in Python jargon. In the `self.connect()` call it is used to specify the port number. When the port number is zero, the block may be used alone. An equivalent command to the one in line 16 would thus have been {{{ #!python self.connect((src0, 0), (dst, 0)) }}} That's all there is to create a flow graph. The last 5 lines do nothing but start the flow graph (line 22). The `try` and `except` statements simply make sure the flow graph (which would otherwise run infinitely) are stopped when Ctrl+C is pressed (which triggers a `KeyboardInterrupt` Python exception). For Python-beginners, two more remarks should not be left out: As you might have noticed, the class my_top_block is run without creating an instance beforehand. In Python, this is a quite common thing to do, especially if you have a class which would only get one instance anyway. However, you could just as well create one or more instances of the class and then call the `run()` method on the instance(es). Second, the indenting is part of the code and not, like in C++, simply for the programmers convenience. If you try and modify this code, make sure you don't start mixing tabs and spaces. Every level must be consistently indented. If you want to go on with this tutorial, you should first get a more solid Python background. Python documentation can be found at the Python web site http://www.python.org/, or a library of your choice. A good place to start for people with prior programming experience is http://wiki.python.org/moin/BeginnersGuide/Programmers . You need to import required GNU Radio modules with the `from gnuradio import` command. You always need the module `gr`. A flow graph is contained in a class which itself is derived from `gr.top_block`. Blocks are created by calling functions such as `gr.sig_source_f()` and saving the return value to a variable. Blocks are connected by calling `self.connect()` from within the flow graph class If you don't feel comfortable writing some basic Python code now, have a break and go through some Python tutorials. The next section will give a more detailed overview about writing GNU Radio applications in Python. = Coding Python GNU Radio Applications = The example above already covers quite a lot of how to write Python GNU Radio applications. This chapter and the next will try to show the possibilites of GNU Radio applications and how to use them. From now on, there is no need to linearly read these chapters section-for-section, it probably makes more sense to go over the titles and find out what you want to know. GNU Radio Modules GNU Radio comes with quite a lot of libraries and modules. You will usually include modules with the following syntax: {{{ #!python from gnuradio import MODULENAME }}} Some modules work a bit differently, see the following list on the most common modules. gr The main GNU Radio library. You will nearly always need this. usrp USRP sources and sinks and controls. audio Soundcard controls (sources, sinks). You can use this to send or receive audio to the sound cards, but you can also use your sound card as a narrow band receiver with an external RF frontend. blks2 This module contains additional blocks written in Python which include often-used tasks like modulators and demodulators, some extra filter code, resamplers, squelch and so on. optfir Routines for designing optimal FIR filters. wxgui This is actually a submodule, containing utilities to quickly create graphical user interfaces to your flow graphs. Use `from gnuradio.wxgui import *` to import everything in the submodule or `from gnuradio.wxgui import stdgui2, fftsink2` to import specific components. See the section 'Graphical User Interfaces' for more information. eng_notation Adds some functions to deals with numbers in engineering notation such as `100M' for 100 * 10^6'. eng_options Use `from gnuradio.eng_options import eng_options` to import this feature. This module extends Pythons `optparse` module to understand engineering notation (see above). gru Miscellaneous utilities, mathematical and others. This is by far not a complete list, nor are the descriptions of the modules very useful by themselves. GNU Radio code changes a lot, so creating a static documentation would not be very sensible. Instead, you will have to use the good old Star Wars motto to delve further into the details of the modules: "Use the source!". If you feel GNU Radio should really already have some functionality you want to use, either browse through the module directory Python uses or go through the source directory of GNU Radio. In particular, pay attention to the directories starting with `gr-` in the source directory, such as gr-sounder or gr-radar-mono. These produce their own code and, consequently, their own modules. Of course, Python itself comes with a lot of modules, some of which are extremely useful - if not necessary - to write GNU Radio applications. Check the Python documentation and the SciPy website for more information. Choosing, defining and configuring blocks GNU Radio comes with an abundance of pre-defined blocks, so for beginners, it is often quite confusing to find the correct blocks for their applications and set them up correctly. As with modules, the GNU Radio code changes around quite a bit, so a static documentation doesn't really make sense. However, the situation is a bit different with blocks. First of all, there's the unofficial GNU Radio manual which can be downloaded at Thanks to the author for the massive effort! Then, there's the automatically generated documentation. This is created by Doxygen from the source code. I recommend making these docs the same time as building the rest. Run `./configure` with the `--enable-doxygen` switch for to create the docs when running `make`. The latter is particularly useful as it consists of multiple HTML documents which can be browsed easily. If you don't want to create the documents yourself or don't have to possibility to do so, you can find a version (although not an up-to-date one) on the web at http://gnuradio.org/doc/doxygen/hierarchy.html. The auto-generated docs are in your source directory beneath gnuradio-core/doc/html. Learning how to use these documentations is a major part of learning how to use GNU Radio! Let's get practical. Here's the three lines from the previous example which define the blocks: {{{ #!python src0 = gr.sig_source_f (sample_rate, gr.GR_SIN_WAVE, 350, ampl) src1 = gr.sig_source_f (sample_rate, gr.GR_SIN_WAVE, 440, ampl) dst = audio.sink (sample_rate, "") }}} Here's a simplified version of what happens when this code is executed: First, a function called `sig_source_f` in the module `gr` is executed. It receives four function arguments: sample_rate, which is a Python variable, gr.GR_SIN_WAVE, which is a constant defined in the `gr' module, 350, a normal literal constant, ampl, another variable. This function creates a class which is subsequently assigned to `src0`. The same happens on the other two lines, although the sink is fetched from a different module (`audio`). So how did I know which block to use and what to pass to `gr.sig_source_f()`? This is where the documentation comes in. If you use the Doxygen-generated docs, click on the top left tab called "Modules". Proceed to "Signal Sources". You will find a list of signal generators, including the `sig_source_*` family. The suffix defines the data type at the output: f = float c = complex float i = int s = short int b = bits (actually an integer type) These suffixes are used for all types of blocks, e.g. `gr.fir_filter_ccf()` will define an FIR filter with complex input, complex output and float taps, and `gr.add_const_ss()` will define a block which adds incoming short values with another, constant, short int. A list of all classes with a short description can be obtained by clicking on the top tab called "Classes". The unofficial GNU Radio manual lists all classes sorted by module and can be searched using your PDF reader of choice. Most blocks you'll be using at the beginning are either from the `gr`, `audio` or `usrp` modules. So if you find a class called `gr_sig_source_f` in the auto-generated docs, you can create this class in Python by calling `gr.sig_source_f()`. At this point it is worth having a closer look behind the curtains of GNU Radio. The reason you can easily use the blocks - written in C++ - in your Python code is because GNU Radio uses a tool called SWIG to create an interface between Python and C++. Every block in C++ comes with a creating function, called `gr_make_***` (`gr_make_sig_source_f()` in the example mentioned above). This function is always documented on the same page as the matching class, and this function is what gets exported to Python, so `gr.sig_source_f()` in Python calls `gr_make_sig_source_f()` in C++. For the same reason, it takes the same arguments - that's how you know how to initialise a block in Python. Once you're browsing the Doxygen documentation of the class `gr_sig_source_f`, you might notice many other class methods, such as `set_frequency()`. These functions get exported to Python as well. So if you have created a signal source and want to change the frequency (say your application has a user frequency control) you can use this method on your Python defined block: {{{ #!python# We're in some cool application here Other, fantastic things happen here src0.set_frequency(880) # Change frequency }}} will change the frequency of the first signal generator to 880Hz. Hopefully, GNU Radio documentation will grow and become more and more complete. But to completely understand the workings of blocks in detail, you will probably have to have a look at the code sooner or later, no matter how good the documentation gets. Connecting blocks Use the `connect()` method of `gr.top_block` to connect blocks. Some things are worth mentioning: You can only connect inputs and outputs if the data types match. If you try to connect a float output with a complex input, you will get an error. One output can be connected to several inputs; you don't need an extra block to duplicate signal paths. Sometimes it makes sense to combine several blocks into a new block. Say you have several applications which all have a common signal processing component which consists of several blocks. These blocks can be combined into a new block, which in turn can be used in your applications is if it were a normal GNU Radio block. Example: Say you have two different flow graphs, FG1 and FG2. Both use - among others - the blocks B1 and B2. You want to combine them to a hierarchical block called HierBlock: {{{ -----------------| --- -- | --+--+ B1 - B2 ----| --- -- || HierBlock | ----------------- }}} This is what you do: create a flow graph which derives from `gr.hier_block2` and use `self` as source and sink: {{{ #!python class HierBlock(gr.hier_block2): def init(self, audio_rate, if_rate): gr.hier_block2.__init__(self, "HierBlock", gr.io_signature(1, 1, gr.sizeof_float), gr.io_signature(1, 2, gr.sizeof_gr_complex)) B1 = gr.block1(...) # Put in proper code here! B2 = gr.block2(...) self.connect(self, B1, B2, self) }}} As you can see, creating a hierarchical block is very similar to creating a flow graph with `gr.top_block`. Apart from using `self` as source and sink, there is another difference: the constructor for the parent class (called in line 3) needs to receive additional information. The call to `gr.hier_block2.__init__()` takes four parameters: self (which is always passed to the constructor as first argument), a string with an identifier for the hierarchical block (change at your convenience), an input signature and an output signature. The last two require some extra explanation unless you have already written your own blocks in C++. GNU Radio needs to know what types of input and output the block uses. Creating an input/output signature can be done by calling `gr.io_signature()`, as is done here. This function call takes 3 arguments: minimum number of ports, maximum number of ports and size of the input/output elements. For the hierarchical block `HierBlock`, you can see that it has exactly one input and one or two outputs. The incoming objects are of size `float`, so the block processes incoming real float values. Somewhere in B1 or B2, the data is converted to complex float values, so the output signature declares outgoing objects to be of size `gr.sizeof_gr_complex`. `gr.sizeof_float` and `gr.sizeof_gr_complex` are equivalent to the C++ return values of the `sizeof()` call. Other predefined constants are gr.sizeof_char, gr.sizeof_in and gr.sizeof_short. Use gr.io_signature(0, 0, 0) to create a null IO signature, i.e. for defining hierarchical blocks as sources or sinks. That's all. You can now use `HierBlock` as you would use a regular block. For example, you could put this code in the same file: {{{ #!python class FG1: def init(self): gr.top_block.__init__(self) ... # Sources and other blocks are defined here other_block1 = gr.other_block() hierblock = HierBlock() other_block2 = gr.other_block() self.connect(other_block1, hierblock, other_block2) ... # Define rest of FG1 }}} Of course, to make use of Pythons modularity, you could also put the code for `HierBlock` in an extra file called `hier_block.py`. To use this block from another file, simply add an import directive to your code: {{{ #!python from hier_block import HierBlock }}} and you can use `HierBlock` as mentioned above. Examples for hierarchical blocks: {{{ gnuradio-examples/python/usrp/fm_tx4.py gnuradio-examples/python/usrp/fm_tx_2_daughterboards.py gnuradio-examples/python/digital/tx_voice.py }}} In some cases, you might want to have completely separate flow graphs, e.g. for receive and transmit paths (see the example 'Walkie-Talkie' above). Currently (June 2008), it is not possible to have multiple top_blocks running at the same time, but what you can do is create full flow graphs as hierarchical blocks using `gr.hier_block2` like in the section above. Then, create a top_block to hold the flow graphs. Example: {{{ #!python class transmit_path(gr.hier_block2): def init(self): gr.hier_block2.__init__(self, "transmit_path", gr.io_signature(0, 0, 0), # Null signature gr.io_signature(0, 0, 0)) source_block = gr.source() signal_proc = gr.other_block() sink_block = gr.sink() self.connect(source_block, signal_proc, sink_block) class receive_path(gr.hier_block2): def init(self): gr.hier_block2.__init__(self, "receive_path", gr.io_signature(0, 0, 0), # Null signature gr.io_signature(0, 0, 0)) source_block = gr.source() signal_proc = gr.other_block() sink_block = gr.sink() self.connect(source_block, signal_proc, sink_block) class my_top_block(gr.top_block): def init(self): gr.top_block.__init__(self) tx_path = transmit_path() rx_path = receive_path() self.connect(tx_path) self.connect(rx_path) }}} Now, when you start `my_top_block`, both flow graphs are started in parallel. Note that the hierarchical blocks have explicitly no inputs and outputs defined, they have a null IO signature. Consequently, they don't connect to `self` as source or sink; they rather define their own sources or sink (just as you would do when defining a hierarchical block as source or sink). The top block simply connects the hierarchical blocks to itself, but does not connect them up in any way. Examples for multiple flow graphs: {{{ gnuradio-examples/python/usrp/usrp_nbfm_ptt.py }}} GNU Radio is more than blocks and flow graphs - it comes with a lot of tools and code to help you write DSP applications. A collection of useful GNU Radio applications designed to aid you is in gr-utils/. Browse the source code in gnuradio-core/src/python/gnuradio to find utilities you can use in your Python code such as filter design code, modulation utilities and more. If you have followed the tutorial so far, you will have noticed that a flow graph has always been implemented as a class, derived from `gr.top_block`. The question remains on how to control one of these classes. As mentioned before, deriving the class from gr.top_block brings along all the functionality you might need. To run or stop an existing flow graph, use the following methods: `run()` The simplest way to run a flow graph. Calls start(), then wait(). Used to run a flow graph that will stop on its own, or to run a flow graph indefinitely until SIGINT is received. `start()` Start the contained flow graph. Returns to the caller once the threads are created. `stop()` Stop the running flow graph. Notifies each thread created by the scheduler to shutdown, then returns to caller. `wait()` Wait for a flow graph to complete. Flowgraphs complete when either (1) all blocks indicate that they are done, or (2) after stop has been called to request shutdown. `lock()` Lock a flow graph in preparation for reconfiguration. `unlock()` Unlock a flow graph in preparation for reconfiguration. When an equal number of calls to lock() and unlock() have occurred, the flow graph will be restarted automatically. `is_running()` Returns true if flow graph is running. Example: {{{ #!python class my_top_block(gr.top_block): def init(self): gr.top_block.__init__(self) ... # Define blocks etc. here if name == '__main__': my_top_block().start() sleep(5) # Wait 5 secs (assuming sleep was imported!) my_top_block().stop() sleep(1) # Give the threads time to really shut down print my_top_block().is_running() # Should return False }}} These methods help you to control the flow graph from the outside. For many problems this might not be enough: you don't simply want to start or stop a flow graph, you want to reconfigure the way it behaves. For example, imagine your application has a volume control somewhere in your flow graph. This volume control is implemented by inserting a multiplier into the sample stream. This multiplier is of type `gr.multiply_const_ff`. If you check the documentation for this kind of of block, you will find a function `gr.multiply_const_ff.set_k()` which sets the multiplication factor. You need to make the settings visible to the outside in order to control it. The simplest way is to make the block an attribute of the flow graph class. Example: {{{ #!python class my_top_block(gr.top_block): def init(self): gr.top_block.__init__(self) ... # Define some blocks self.amp = gr.multiply_const_ff(1) # Define multiplier block ... # Define more blocks self.connect(..., self.amp, ...) # Connect all blocks def set_volume(self, volume): self.amp.set_k(volume) if name '__main__': my_top_block().start() sleep(2) # Wait 2 secs (assuming sleep was imported!) my_top_block.set_volume(2) # Pump up the volume (by factor 2) sleep(2) # Wait 2 secs (assuming sleep was imported!) my_top_block().stop() }}} This example runs the flow graph for 2 seconds and then doubles the volume by accessing the `amp` block through a member function called `set_volume()`. Of course, one could have accessed the `amp` attribute directly, omitting the member function. Hint: making blocks attributes of the flow graph is generally a good idea as it makes extending the flow graph with extra member functions easier. Non-flow graph centred applications == Up until now, GNU Radio applications in this tutorial have always been centred around the one class derived from gr.top_block. However, this is not necessarily how GNU Radio needs to be used. GNU Radio was designed to develop DSP applications from Python, so there's no reason to not use the full power of Python when using GNU Radio. Python is an extremely powerful language, and new libraries and functionalities are constantly being added. In a way, GNU Radio extends Python with a powerful, real-time-capable DSP library. By combining this with other libraries you have immense functionality right there at your fingertips. For example, by combining GNU Radio with SciPy, a collection of scientific Python libraries, you can record RF signals in real time and do extensive mathematical operations off line, save statistics to a database and so on - all in the same application. Even expensive engineering software such as Matlab might become unnecessary if you combine all these libraries. = Advanced Topics = If you have really read the previous sections, you already know enough to write your first Python GNU Radio applications. This section will adress some slightly more advanced functionalities for Python GNU Radio applications. Dynamic flow graph creation For most cases, the aforementioned way to define flow graphs is completely adequate. If you need more flexibility in your application, you might want to have even more control over the flow graph from outside the class. This can be achieved by taking the code out of the `__init__()` function and simply using `gr.top_block` as a container. Example: {{{ #!python ... # We are inside some application tb = gr.top_block() # Define the container block1 = gr.some_other_block() block2 = gr.yet_another_block() tb.connect(block1, block2) ... # The application does some wonderful things here tb.start() # Start the flow graph ... # Do some more incredible and fascinating stuff here }}} If you are writing some application which needs to dynamically stop a flow graph (reconfigure it, re-start it and so) on this might be a more practical way to do it. Examples for this kind of flow graph setup: {{{ gnuradio-examples/python/apps/hf_explorer/hfx2.py }}} Python has its own libraries to parse command line options. See the documentation for the module `optparse` to find out how to use it. GNU Radio extends optparse by new command line option types. Use `from gnuradio.eng_option import eng_option' to import this extension. With eng_option, you have the following types: eng_float Like the original float option, but also accepts engineering notation like 101.8M subdev Only accepts valid subdevice descriptors such as A:0 (To specify a daughterboard on a USRP) intx Only accepts integers If your application supports command line options, it would be ever so nice if you could stick to the GNU Radio conventions for command line options. You can find these (along with more hints for developers) in README.hacking. Graphical User Interfaces If you are a Python expert and also have some experience in writing GUIs for Python (using whatever GUI toolkit you like), you might not even need this section. As mentioned before, GNU Radio merely extends Python with DSP routines - so if you like, just go ahead and write a GUI application, add a GNU Radio flow graph to it and define some interfaces to carry GNU Radio information to your application and vice versa. If you want to plot your data, you could use Matplotlib or Qwt. However, sometimes you simply want to write a quick GUI application without bothering with setting up widgets, defining all the menus etc. GNU Radio comes with some predefined classes to help you write graphical GNU Radio applications. These modules are based on wxWidgets (or to be precise, wxPython), a platform-independent GUI toolkit. You will need some background in wxPython - but don't worry, it is not that complicated and there are several tutorials available on the net. Check the wxPython website for documentation (http://www.wxpython.org/). To use the GNU Radio wxWidgets tools, you need to import some modules: {{{ #!python from gnuradio.wxgui import stdgui2, fftsink2, slider, form }}} Here, 4 components were imported from the gnuradio.wxgui submodule. Here's a quick list of the modules (again, not necessarily complete. You will have to browse the modules or the source code in gr-wxgui/src/python). stdgui2 Basic GUI stuff, you always need this fftsink2 Plot FFTs of your data to create spectrum analyzers or whatever scopesink2 Oscilloscope output waterfallsink2 Waterfall output numbersink2 Displays numerical values of incoming data form Often used input forms (see below) Next, we have to define a new flow graph. This time, we don't derive from `gr.top_block` but from `stdgui2.std_top_block`: {{{ #!python class my_gui_flow graph(stdgui2.std_top_block): def init(self, frame, panel, vbox, argv): stdgui2.std_top_block.__init__ (self, frame, panel, vbox, argv) }}} As you can see, there's another difference: the constructor gets a couple of new parameters. This is because a `stdgui2.std_top_block` does not only include flow graph functionality (it is derived from gr.top_block itself), but also directly creates a window with some basic components (like a menu). This is good news for all of those who just want to quickly hack a graphical application: GNU Radio creates the window and everything, you just need to add the widgets. Here's a list of what you can do with these new objects (this probably won't mean much to you if you have no idea about GUI programming): frame The wx.Frame of your window. You can get at the predefined menu by using frame.GetMenuBar() panel A panel, placed in `frame', to hold all your wxControl widgets vbox A vertical box sizer (wx.BoxSizer(wx.VERTICAL) is how it is defined), used to align your widgets in the panel argv The command line arguments Now you have all you need to create your GUI. You can simply add new box sizers and widgets to vbox, change the menu or whatever. Some typical functions have been simplified further in the GNU Radio GUI library `form`. form has a great number of input widgets: `form.static_text_field()` for static text field (display only), `form.float_field()`, to input float values, `form.text_field()` to input text, `form.checkbox_field()` for checkboxes, `form.radiobox_field()` for radioboxes etc. Check the source code of gr-wxgui/src/python/form.py for the complete list. Most of these calls pass most of their arguments to the appropriate wxPython objects, so the function arguments are quite self-explanatory. See one of the examples mentioned below on how to add widgets using form. Probably the most useful part of `gnuradio.wxgui` is the possibility to directly plot incoming data. To do this, you need one of the sinks that come with `gnuradio.wxgui`, such as `fftsink2`. These sinks work just as any other GNU Radio sink, but also have properties needed for use with wxPython. Example: {{{ #!python from gnuradio.wxgui import stdgui2, fftsink2 App gets defined here ... FFT display (pseudo-spectrum analyzer) my_fft = fftsink2.fft_sink_f(panel, title="FFT of some Signal", fft_size=512, sample_rate=sample_rate, ref_level=0, y_per_div=20) self.connect(source_block, my_fft) vbox.Add(my_fft.win, 1, wx.EXPAND) }}} First, the block is defined (`fftsink2.fft_sink_f`). Apart from typical DSP parameters such as the sampling rate, it also needs the `panel` object which is passed to the constructor. Next, the block is connected to a source. Finally, the FFT window (`my_fft.win`) is placed inside the `vbox` BoxSizer to actually display it. Remember that a signal block output can be connected to any amount of inputs. Finally, the whole thing needs to be started. Because we need an `wx.App()` to run the GUI, the startup code is a bit different from a regular flow graph: {{{ #!python if name == '__main__': app = stdgui2.stdapp(my_gui_flow_graph, "GUI GNU Radio Application") app.MainLoop() }}} `stdgui2.stdapp()` creates the `wx.App` with `my_gui_flow_graph` (the first argument). The window title is set to "GUI GNU Radio Application". Examples for simple GNU Radio GUIs: {{{ gr-utils/src/python/usrp_fft.py gr-utils/src/python/usrp_oscope.py gnuradio-examples/python/audio/audio_fft.py gnuradio-examples/python/usrp/usrp_am_mw_rcv.py }}} And many more. = WHAT NEXT? = Young Padawan, no more there is I can teach you. If you have any more questions on how to write GNU Radio applications in Python, there are still a number of resources you can use: Use the source. Especially the examples in gnuradio-examples/ and gr-utils/ can be very helpful. Check the mailing list archives. Chances are very high your problem has been asked before.
I'd stick with a Python 2 book instead. Yes, a lot of changes applicable to Python 3 have been back-ported, and with each new 3.x release the transition from 2.x to 3.x is made easier. But a Python 3.x book will not tell you how to adapt your code to work in Python 2.7. Your Python 3.x code will have surprising effects when run by a Python 2.x interpreter. Take the print() function, for example. In Python 2, print is a statement, so you normally would write print somevalue and that'd print the contents of the variable somevalue to stdout. In Python 3.x, you'd write print(somevalue) instead, as print() is a function there. However, writing print(somevalue) also works in Python 2.x, but Python interprets that as the expression (somevalue) to be printed to stdout, instead of the expression somevalue being passed as an argument to the print() function. And that is subtly different. The distinction becomes more apparent when more than one value is involved; print(onevalue, anothervalue) means, in Python 2.x, to print the tuple (onevalue, anothervalue), while in Python 3.x this means something different, namely to pass two arguments to the print() function, and each one has to be printed with the default separator (a space), which is a very different operation: # Python 2.7 >>> print(1, 2) (1, 2) # Python 3.3 >>> print(1, 2) 1 2 So, although the code looks the same, the effect is quite different. If you want to write code that works consistently across both major python versions, you'd actually need to do the following: # Python 2.7 >>> from __future__ import print_function >>> print(1, 2) 1 2 The __future__ import statement there was added to Python 2.6 and newer to facilitate the transition, to make it easier to write code that'll work the same in both Python 2.x and 3.x. In any module that imports print_function, the print statement is replaced by a print() function. Now, print(onevalue, twovalue) behaves the same way it would in Python 3.x. The print vs. print() change is but one of many changes between the major versions though. Others require a deeper understanding of the language still, and place much heavier demands on someone just starting out with Python. To give you an idea, here is a super-compacted and incomplete overview: I/O has been overhauled, the default string types have been switched, dict methods return views instead of lists, other built-in methods return iterators instead of lists, ordering comparisons have been simplified, the behaviour of the division operator has changed, octal literals have been tightened, you can specify that keyword arguments to a function can only be called using keyword arguments (and not as positionals), a new nonlocal statement to handle nested scoping beyond global vs. local, the import system has been overhauled and made more introspectable and hookable, many standard library modules have been moved around and reworked, etc., etc., etc. If your job requires you to learn Python 2.7, then focus on that. The transition to 3.x will come later, with more experience of what Python is about.
I'm trying to get a file from a database and write it to disk. The file is stored as BLOB. Now I have the following code: #!/usr/bin/python import MySQLdb db2 = MySQLdb.connect(host="localhost", user="root", passwd="root", db="digit") cur = db2.cursor() #get the name of the file cur.execute("SELECT Name FROM ContentFiles WHERE ID=3") nombre = cur.fetchone() #open file and write into. with open(nombre[0],"wb") as output_file: cur.execute("SELECT File FROM ContentFiles WHERE ID=3") ablob = cur.fetchone() output_file.write(ablob[0]) Any help would be appreciated. Thanks :) I debugged and it gets the file and write it into disk, but when I open it shows an error saying: Not a JPEG file: starts with 0x2f 0x39
Ive read your previous answers about this error onthe site, but not sure exactly how i could change my code to fix it in mine. It's a long code, sorry about that :$ import random #QUESTION 1a: #CONSTRUCTOR FUNCTION: def makestk(): #This function returns a new stack in the form of a tagged tuple. stk=('stack',[]) #Inserts the tagged tuple into a variable. return stk #Returns variable holding tagged tuple(empty stack). #PREDICATE FUNCTION: def emptystk(stk): #This function returns True if the stack is empty or false if the stack is not empty. if stk[1] == []: #Checks if the second variable of the tagged tuple is an empty list or not. return True #Returns True if the second variable of the tagged tuple is indeed an empty list. else: return False #Returns False if the second variable of the tagged tuple is a list containing element(s). #MUTATOR FUNCTIONS: def contents(s): #This function is created to aid the mutator functions. return s[1] #Returns the second variable of the tagged tuple. def pushstk(stk,ele): #This function accepts a stack and an element and adds the element to the stack at position0. contents(stk).insert(0,ele) #Uses the helper function to get the second variable of the tagged tuple and inserts the element to the stack at position0. #return stk #Returns the updated version of the tagged tuple entered. def popstk(stk): #This function accepts a stack and an element; and adds the element to the stack at position0. contents(stk).pop(0) #Uses the helper function to get the second variable of the tagged tuple and removes the element in position0 of the stack. #return stk #Returns the updated version of the tagged tuple entered. #QUESTION 1b: #CONSTRUCTOR FUNCTION: def makecell((x,y)): #This function accepts a tuple which consists of the x and y co-ordinate of the cell being created. cell=('cell',(x,y),['t','b','r','l']) #Inserts the tagged tuple into a variable.(The list represents the four walls of the cell: t-top,b-bottom,r-right,l-left) return cell #Returns a cell in the form of a tagged tuple. #ACCESSOR FUNCTIONS: def getx(cell): #This function returns the x co-ordinate of the cell. return cell[1][0] #Returns the first value of the second variable in the tagged tuple. def gety(cell): #This function returns the y co-ordinate of the cell. return cell[1][1] #Returns the second value of the second variable in the tagged tuple. def getwalls(cell): #This function returns the lsit of walls that are intact for a given cell. return cell[2] #Returns the third variable of the tagged tuple. #MUTATOR FUNCTIONS: def removetw(cell): #This function removes the top wall of the cell. getwalls(cell).remove('t') #Uses the 'getwalls' function and removes the first value of list. return cell #Returns the updated version of the list. def removebw(cell): #This function removes the bottom wall of the cell. getwalls(cell).remove('b') #Uses the 'getwalls' function and removes the second value of the list. return cell #Returns the updated version of the list. def removerw(cell): #This function removes the right wall of the cell. getwalls(cell).remove('r') #Uses the 'getwalls' function and removes the third values of the list. return cell #Returns the updated version of the list. def removelw(cell): #This function removes the left wall of the cell. getwalls(cell).remove('l') #Uses the 'getwalls' function and removes the fourth values of the list. return cell #Returns the updated version of the list. #QUESTION 2: def makegrid(w,h): #This function accepts the width and height of the desired grid/matrix of cells and creates it. grid=[] #Inserts an empty list into a variable. for x in range (0,(w)): #Allows x to traverse through the range of 0 to the width value. for y in range (0,(h)): #Allows y to traverse through the range of 0 to the height value. tup=(x,y) #Inserts the values x and y into a variable. grid.append(makecell(tup)) #Inserts/updates the variable 'tup' into the list 'grid' and runs function until x and y have completely traversed through the ranges. return grid #Returns the updated version of grid. #QUESTION 3: def getneighbours (cell,w,h): #This function accepts a cell and the width and height of the grid. lst = [] x = getx(cell) y = gety(cell) if y + 1 <= h: lst = lst + [makecell((x,y + 1))] if y - 1 >= 0: lst = lst + [makecell((x,y - 1))] if x + 1 <= w: lst = lst + [makecell((x + 1,y))] if x - 1 >= 0: lst = lst + [makecell((x - 1,y))] return lst #Returns the neighbouring cell walls. #QUESTION 4: def removewalls(c1,c2): #This function accepts two cells and removes the walls common between the two.accepts two cells if gety(c1)==gety(c2)-1: #Checks if c1 is above c2. removetw(c2) and removebw(c1) #Removes the corresponding common wall shared between the cells. elif gety(c1)==gety(c2)+1: #Checks if c1 is below c2. removetw(c1) and removebw(c2) #Removes the corresponding common wall shared between the cells. elif getx(c1)==getx(c2)-1: #Checks if c1 is to the left of c2. removerw(c1) and removelw(c2) #Removes the corresponding common wall shared between the cells. elif getx(c1)==getx(c2)+1: #Checks if c1 is to the right of c2. removerw(c2) and removelw(c1) #Removes the corresponding common wall shared between the cells. cell0=makecell((0,0)) cell1=makecell((0,1)) #QUESTION 5: def wallsintact(grid,neighbours): return [x for x in grid if x in neighbours and getwalls(x)==['t','b','r','l']] #QUESTION 6: def createmaze(w,h): stack=makestk() totalcells=w*h grid=makegrid(w,h) cell=grid[random.randrange(totalcells)] strt=1 while (strt<totalcells): neighbours=getneighbours(cell,w,h) intact=wallsintact(grid,neighbours) while intact!=[]: maze=intact[random.randrange(len(intact))] intact.remove(maze) removewalls(cell,maze) pushstk(stack,maze) strt=strt+1 cell=popstk(stack) return grid
On Amazon.com… Click through to Amazon.co.uk (which goes to the index page, not the page for the book you were looking at): OK… search for that book on the amazon.co.uk store: I’m stumped. On Amazon.com… Click through to Amazon.co.uk (which goes to the index page, not the page for the book you were looking at): OK… search for that book on the amazon.co.uk store: I’m stumped. The way to seem most formidable as an inexperienced founder is to stick to the truth. Last week I presented two talks at the inaugural NoSQL Europe conference in London. The first was presented with Matthew Wall and covered the ways in which we have been exploring NoSQL at the Guardian. The second was a three hour workshop on Redis, my favourite piece of software to have the NoSQL label applied to it. I’ve written about Redis here before, and it has since earned a place next to MySQL/PostgreSQL and memcached as part of my default web application stack. Redis makes write-heavy features such as real-time statistics feasible for small applications, while effortlessly scaling up to handle larger projects as well. If you haven’t tried it out yet, you’re sorely missing out. For the workshop, I tried to give an overview of each individual Redis feature along with detailed examples of real-world problems that the feature can help solve. I spent the past day annotating each slide with detailed notes, and I think the result makes a pretty good stand-alone tutorial. Here’s the end result: In unrelated news, Nat and I both completed the first ever Brighton Marathon last weekend, in my case taking 4 hours, 55 minutes and 17 seconds. Sincere thanks to everyone who came out to support us - until the race I had never appreciated how important the support of the spectators is to keep going to the end. We raised £757 for the Have a Heart children’s charity. Thanks in particular to Clearleft who kindly offered to match every donation. Two quick updates about WildlifeNearYou. First up, I gave a talk about the site at £5 app, my favourite Brighton evening event which celebrates side projects and the joy of Making Stuff. I talked about the site’s genesis on a fort, crowdsourcing photo ratings, how we use Freebase and DBpedia and how integrating with Flickr’s machine tags gave us a powerful location API for free. Here’s the video of the talk, courtesy of Ian Oszvald: Secondly, I’m excited to note that WildlifeNearYou spin-off OwlsNearYou.com is featured in UK Wired magazine’s Wired / Tired / Expired column… and we’re Wired! Some background reading. I was planning to fill in answers as they arrive, but I screwed up the moderation of the comments and got flooded with detailed responses - I strongly recommend reading the comments. Back in October 2008, myself and 11 others set out on the first /dev/fort expedition. The idea was simple: gather a dozen geeks, rent a fort, take food and laptops and see what we could build in a week. The fort was Fort Clonque on Alderney in the Channel Islands, managed by the Landmark Trust. We spent an incredibly entertaining week there exploring Nazi bunkers, cooking, eating and coding up a storm. It ended up taking slightly longer than a week to finish, but 14 months later the result of our combined efforts can finally be revealed: WildlifeNearYou.com! WildlifeNearYou is a site for people who like to see animals. Have you ever wanted to know where your nearest Llama is? Search for “llamas near brighton" and you’ll see that there’s one 18 miles away at Ashdown Forest Llama Farm. Or you can see all the places we know about in France, or all the trips I’ve been on, or everywhere you can see a Red Panda. The data comes from user contributions: you can use WildlifeNearYou to track your trips to wildlife places and list the animals that you see there. We can only tell you about animals that someone else has already spotted. Once you’ve added some trips, you can import your Flickr photos and match them up with trips and species. We’ll be adding a feature in the future that will push machine tags and other metadata back to Flickr for you, if you so choose. So why did it take so long to finally launch it? A whole bunch of reasons. Week long marathon hacking sessions are an amazing way to generate a ton of interesting ideas and build a whole bunch of functionality, but it’s very hard to get a single cohesive whole at the end of it. Tying up the loose ends is a pretty big job and is severely hampered by the fort residents returning to their real lives, where hacking for 5 hours straight on a cool easter egg suddenly doesn’t seem quite so appealing. We also got stuck in a cycle of “just one more thing”. On the fort we didn’t have internet access, so internet-dependent features like Freebase integration, Google Maps, Flickr imports and OpenID had to be left until later (“they’ll only take a few hours” no longer works once you’re off /dev/fort time). The biggest problem though was perfectionism. The longer a side-project drags on for, the more important it feels to make it “just perfect” before releasing it to the world. Finally, on New Year’s Day, Nat and I decided we had had enough. Our resolution was to “ship the thing within a week, no matter what state it’s in”. We’re a few days late, but it’s finally live. WildlifeNearYou is by far the most fun website I’ve ever worked on. To all twelve of my intrepid fort companions: congratulations - we made a thing! As you may have heard, the UK government released a fresh batch of MP expenses documents a week ago on Thursday. I spent that week working with a small team at Guardian HQ to prepare for the release. Here’s what we built: It’s a crowdsourcing application that asks the public to help us dig through and categorise the enormous stack of documents - around 30,000 pages of claim forms, scanned receipts and hand-written letters, all scanned and published as PDFs. This is the second time we’ve tried this - the first was back in June, and can be seen at mps-expenses.guardian.co.uk. Last week’s attempt was an opportunity to apply the lessons we learnt the first time round. Writing crowdsourcing applications in a newspaper environment is a fascinating challenge. Projects have very little notice - I heard about the new document release the Thursday before giving less than a week to put everything together. In addition to the fast turnaround for the application itself, the 48 hours following the release are crucial. The news cycle moves fast, so if the application launches but we don’t manage to get useful data out of it quickly the story will move on before we can impact it. ScaleCamp on the Friday meant that development work didn’t properly kick off until Monday morning. The bulk of the work was performed by two server-side developers, one client-side developer, one designer and one QA on Monday, Tuesday and Wednesday. The Guardian operations team deftly handled our EC2 configuration and deployment, and we had some extra help on the day from other members of the technology department. After launch we also had a number of journalists helping highlight discoveries and dig through submissions. The system was written using Django, MySQL (InnoDB), Redis and memcached. The biggest mistake we made the first time round was that we asked the wrong question. We tried to get our audience to categorise documents as either “claims” or “receipts” and to rank them as “not interesting”, “a bit interesting”, “interesting but already known” and “someone should investigate this”. We also asked users to optionally enter any numbers they saw on the page as categorised “line items”, with the intention of adding these up later. The line items, with hindsight, were a mistake. 400,000 documents makes for a huge amount of data entry and for the figures to be useful we would need to confirm their accuracy. This would mean yet more rounds of crowdsourcing, and the job was so large that the chance of getting even one person to enter line items for each page rapidly diminished as the news story grew less prominent. The categorisations worked reasonably well but weren’t particularly interesting - knowing if a document is a claim or receipt is useful only if you’re going to collect line items. The “investigate this” button worked very well though. We completely changed our approach for the new system. We dropped the line item task and instead asked our users to categories each page by applying one or more tags, from a small set that our editors could control. This gave us a lot more flexibility - we changed the tags shortly before launch based on the characteristics of the documents - and had the potential to be a lot more fun as well. I’m particularly fond of the “hand-written” tag, which has highlighted some lovely examples of correspondence between MPs and the expenses office. Sticking to an editorially assigned set of tags provided a powerful tool for directing people’s investigations, and also ensured our users didn’t start creating potentially libellous tags of their own. For the first project, everyone worked together on the same task to review all of the documents. This worked fine while the document set was small, but once we had loaded in 400,000+ pages the progress bar become quite depressing. This time round, we added a new concept of “assignments”. Each assignment consisted of the set of pages belonging to a specified list of MPs, documents or political parties. Assignments had a threshold, so we could specify that a page must be reviewed by at least X people before it was considered reviewed. An editorial tool let us feature one “main” assignment and several alternative assignments right on the homepage. Clicking “start reviewing” on an assignment sets a cookie for that assignment, and adds the assignment’s progress bar to the top of the review interface. New pages are selected at random from the set of unreviewed pages in that assignment. The assignments system proved extremely effective. We could use it to direct people to the highest value documents (our top hit list of interesting MPs, or members of the shadow cabinet) while still allowing people with specific interests to pick an alternative task. Having run two crowdsourcing projects I can tell you this: the single most important piece of code you will write is the code that gives someone something new to review. Both of our projects had big “start reviewing” buttons. Both were broken in different ways. The first time round, the mistakes were around scalability. I used a SQL “ORDER BY RAND()” statement to return the next page to review. I knew this was an inefficient operation, but I assumed that it wouldn’t matter since the button would only be clicked occasionally. Something like 90% of our database load turned out to be caused by that one SQL statement, and it only got worse as we loaded more pages in to the system. This caused multiple site slow downs and crashes until we threw together a cron job that pushed 1,000 unreviewed page IDs in to memcached and made the button pick one of those at random. This solved the performance problem, but meant that our user activity wasn’t nearly as well targeted. For optimum efficiency you really want everyone to be looking at a different page - and a random distribution is almost certainly the easiest way to achieve that. The second time round I turned to my new favourite in-memory data structure server, redis, and its SRANDMEMBER command (a feature I requested a while ago with this exact kind of project in mind). The system maintains a redis set of all IDs that needed to be reviewed for an assignment to be complete, and a separate set of IDs of all pages had been reviewed. It then uses redis set intersection (the SDIFFSTORE command) to create a set of unreviewed pages for the current assignment and then SRANDMEMBER to pick one of those pages. This is where the bug crept in. Redis was just being used as an optimisation - the single point of truth for whether a page had been reviewed or not stayed as MySQL. I wrote a couple of Django management commands to repopulate the denormalised Redis sets should we need to manually modify the database. Unfortunately I missed some - the sets that tracked what pages were available in each document. The assignment generation code used an intersection of these sets to create the overall set of documents for that assignment. When we deleted some pages that had accidentally been imported twice I failed to update those sets. This meant the “next page” button would occasionally turn up a page that didn’t exist. I had some very poorly considered fallback logic for that - if the random page didn’t exist, the system would return the first page in that assignment instead. Unfortunately, this meant that when the assignment was down to the last four non-existent pages every single user was directed to the same page - which subsequently attracted well over a thousand individual reviews. Next time, I’m going to try and make the “next” button completely bullet proof! I’m also going to maintain a “denormalisation dictionary” documenting every denormalisation in the system in detail - such a thing would have saved me several hours of confused debugging. The biggest mistake I made last time was not getting the data back out again fast enough for our reporters to effectively use it. It took 24 hours from the launch of the application to the moment the first reporting feature was added - mainly because we spent much of the intervening time figuring out the scaling issues. This time we handled this a lot better. We provided private pages exposing all recent activity on the site. We also provided public pages for each of the tags, as well as combination pages for party + tag, MP + tag, document + tag, assignment + tag and user + tag. Most of these pages were ordered by most-tagged, with the hope that the most interesting pages would quickly bubble to the top. This worked pretty well, but we made one key mistake. The way we were ordering pages meant that it was almost impossible to paginate through them and be sure that you had seen everything under a specific tag. If you’re trying to keep track of everything going on in the site, reliable pagination is essential. The only way to get reliable pagination on a fast moving site is to order by the date something was first added to a set in ascending order. That way you can work through all of the pages, wait a bit, hit “refresh” and be able to continue paginating where you left off. Any other order results in the content of each page changing as new content comes in. We eventually added an undocumented /in-order/ URL prefix to address this issue. Next time I’ll pay a lot more attention to getting the pagination options right from the start. The reviewing experience the first time round was actually quite lonely. We deliberately avoided showing people how others had marked each page because we didn’t want to bias the results. Unfortunately this meant the site felt like a bit of a ghost town, even when hundreds of other people were actively reviewing things at the same time. For the new version, we tried to provide a much better feeling of activity around the site. We added “top reviewer” tables to every assignment, MP and political party as well as a “most active reviewers in the past 48 hours” table on the homepage (this feature was added to the first project several days too late). User profile pages got a lot more attention, with more of a feel that users were collecting their favourite pages in to tag buckets within their profile. Most importantly, we added a concept of discoveries - editorially highlighted pages that were shown on the homepage and credited to the user that had first highlighted them. These discoveries also added valuable editorial interest to the site, showing up on the homepage and also the index pages for political parties and individual MPs. For both projects, we implemented an extremely light-weight form of registration. Users can start reviewing pages without going through any signup mechanism, and instead are assigned a cookie and an anon-454 style username the first time they review a document. They are then encouraged to assign themselves a proper username and password so they can log in later and take credit for their discoveries. It’s difficult to tell how effective this approach really is. I have a strong hunch that it dramatically increases the number of people who review at least one document, but without a formal A/B test it’s hard to tell how true that is. The UI for this process in the first project was quite confusing - we gave it a solid makeover the second time round, which seems to have resulted in a higher number of conversions. News-based crowdsourcing projects of this nature are both challenging and an enormous amount of fun. For the best chances of success, be sure to ask the right question, ensure user contributions are rewarded, expose as much data as possible and make the “next thing to review” behaviour rock solid. I’m looking forward to the next opportunity to apply these lessons, although at this point I really hope it involves something other than MPs’ expenses. I gave a talk on Friday at Full Frontal, a new one day JavaScript conference in my home town of Brighton. I ended up throwing away my intended topic (JSONP, APIs and cross-domain security) three days before the event in favour of a technology which first crossed my radar less than two weeks ago. That technology is Ryan Dahl’s Node. It’s the most exciting new project I’ve come across in quite a while. At first glance, Node looks like yet another take on the idea of server-side JavaScript, but it’s a lot more interesting than that. It builds on JavaScript’s excellent support for event-based programming and uses it to create something that truly plays to the strengths of the language. Node describes itself as “evented I/O for V8 javascript”. It’s a toolkit for writing extremely high performance non-blocking event driven network servers in JavaScript. Think similar to Twisted or EventMachine but for JavaScript instead of Python or Ruby. As I discussed in my talk, event driven servers are a powerful alternative to the threading / blocking mechanism used by most popular server-side programming frameworks. Typical frameworks can only handle a small number of requests simultaneously, dictated by the number of server threads or processes available. Long-running operations can tie up one of those threads - enough long running operations at once and the server runs out of available threads and becomes unresponsive. For large amounts of traffic, each request must be handled as quickly as possible to free the thread up to deal with the next in line. This makes certain functionality extremely difficult to support. Examples include handling large file uploads, combining resources from multiple backend web APIs (which themselves can take an unpredictable amount of time to respond) or providing comet functionality by holding open the connection until a new event becomes available. Event driven programming takes advantage of the fact that network servers spend most of their time waiting for I/O operations to complete. Operations against in-memory data are incredibly fast, but anything that involves talking to the filesystem or over a network inevitably involves waiting around for a response. With Twisted, EventMachine and Node, the solution lies in specifying I/O operations in conjunction with callbacks. A single event loop rapidly switches between a list of tasks, firing off I/O operations and then moving on to service the next request. When the I/O returns, execution of that particular request is picked up again. (In the talk, I attempted to illustrate this with a questionable metaphor involving hamsters, bunnies and a hyperactive squid). If systems like this already exist, what’s so exciting about Node? Quite a few things: With both my JavaScript and server-side hats on, Node just feels right. The APIs make sense, it fits a clear niche and despite its youth (the project started in February) everything feels solid and well constructed. The rapidly growing community is further indication that Ryan is on to something great here. Here’s how to get Hello World running in Node in 7 easy steps: Here’s helloworld.js: var sys = require('sys'), http = require('http'); http.createServer(function(req, res) { res.sendHeader(200, {'Content-Type': 'text/html'}); res.sendBody('<h1>Hello World</h1>'); res.finish(); }).listen(8080); sys.puts('Server running at http://127.0.0.1:8080/'); If you have Apache Bench installed, try running ab -n 1000 -c 100 ‘http://127.0.0.1:8080/’ to test it with 1000 requests using 100 concurrent connections. On my MacBook Pro I get 3374 requests a second. So Node is fast - but where it really shines is concurrency with long running requests. Alter the helloworld.js server definition to look like this: http.createServer(function(req, res) { setTimeout(function() { res.sendHeader(200, {'Content-Type': 'text/html'}); res.sendBody('<h1>Hello World</h1>'); res.finish(); }, 2000); }).listen(8080); We’re using setTimeout to introduce an artificial two second delay to each request. Run the benchmark again - I get 49.68 requests a second, with every single request taking between 2012 and 2022 ms. With a two second delay, the best possible performance for 1000 requests 100 at a time is 1000 requests / (1000 / 100) * 2 seconds = 50 requests a second. Node hits it pretty much bang on the nose. The most important line in the above examples is res.finish(). This is the mechanism Node provides for explicitly signalling that a request has been fully processed and should be returned to the browser. By making it explicit, Node makes it easy to implement comet patterns like long polling and streaming responses - stuff that is decidedly non trivial in most server-side frameworks. Node’s core APIs are pretty low level - it has HTTP client and server libraries, DNS handling, asynchronous file I/O etc, but it doesn’t give you much in the way of high level web framework APIs. Unsurprisingly, this has lead to a cambrian explosion of lightweight web frameworks based on top of Node - the projects using node page lists a bunch of them. Rolling a framework is a great way of learning a low-level API, so I’ve thrown together my own - djangode - which brings Django’s regex-based URL handling to Node along with a few handy utility functions. Here’s a simple djangode application: var dj = require('./djangode'); var app = dj.makeApp([ ['^/$', function(req, res) { dj.respond(res, 'Homepage'); }], ['^/other$', function(req, res) { dj.respond(res, 'Other page'); }], ['^/page/(\\d+)$', function(req, res, page) { dj.respond(res, 'Page ' + page); }] ]); dj.serve(app, 8008); djangode is currently a throwaway prototype, but I’ll probably be extending it with extra functionality as I explore more Node related ideas. My main demo in the Full Frontal talk was nodecast, an extremely simple broadcast-oriented comet application. Broadcast is my favourite “hello world” example for comet because it’s both simpler than chat and more realistic - I’ve been involved in plenty of projects that could benefit from being able to broadcast events to their audience, but few that needed an interactive chat room. The source code for the version I demoed can be found on GitHub in the no-redis branch. It’s a very simple application - the client-side JavaScript simply uses jQuery’s getJSON method to perform long-polling against a simple URL endpoint: function fetchLatest() { $.getJSON('/wait?id=' + last_seen, function(d) { $.each(d, function() { last_seen = parseInt(this.id, 10) + 1; ul.prepend($('<li></li>').text(this.text)); }); fetchLatest(); }); } Doing this recursively is probably a bad idea since it will eventually blow the browser’s JavaScript stack, but it works OK for the demo. The more interesting part is the server-side /wait URL which is being polled. Here’s the relevant Node/djangode code: var message_queue = new process.EventEmitter(); var app = dj.makeApp([ // ... ['^/wait$', function(req, res) { var id = req.uri.params.id || 0; var messages = getMessagesSince(id); if (messages.length) { dj.respond(res, JSON.stringify(messages), 'text/plain'); } else { // Wait for the next message var listener = message_queue.addListener('message', function() { dj.respond(res, JSON.stringify(getMessagesSince(id)), 'text/plain' ); message_queue.removeListener('message', listener); clearTimeout(timeout); }); var timeout = setTimeout(function() { message_queue.removeListener('message', listener); dj.respond(res, JSON.stringify([]), 'text/plain'); }, 10000); } }] // ... ]); The wait endpoint checks for new messages and, if any exist, returns immediately. If there are no new messages it does two things: it hooks up a listener on the message_queue EventEmitter (Node’s equivalent of jQuery/YUI/Prototype’s custom events) which will respond and end the request when a new message becomes available, and also sets a timeout that will cancel the listener and end the request after 10 seconds. This ensures that long polls don’t go on too long and potentially cause problems - as far as the browser is concerned it’s just talking to a JSON resource which takes up to ten seconds to load. When a message does become available, calling message_queue.emit(‘message’) will cause all waiting requests to respond with the latest set of messages. nodecast keeps track of messages using an in-memory JavaScript array, which works fine until you restart the server and lose everything. How do you implement persistent storage? For the moment, the easiest answer lies with the NoSQL ecosystem. Node’s focus on non-blocking I/O makes it hard (but not impossible) to hook it up to regular database client libraries. Instead, it strongly favours databases that speak simple protocols over a TCP/IP socket - or even better, databases that communicate over HTTP. So far I’ve tried using CouchDB (with node-couch) and redis (with redis-node-client), and both worked extremely well. nodecast trunk now uses redis to store the message queue, and provides a nice example of working with a callback-based non-blocking database interface: var db = redis.create_client(); var REDIS_KEY = 'nodecast-queue'; function addMessage(msg, callback) { db.llen(REDIS_KEY, function(i) { msg.id = i; // ID is set to the queue length db.rpush(REDIS_KEY, JSON.stringify(msg), function() { message_queue.emit('message', msg); callback(msg); }); }); } Relational databases are coming to Node. Ryan has a PostgreSQL adapter in the works, thanks to that database already featuring a mature non-blocking client library. MySQL will be a bit tougher - Node will need to grow a separate thread pool to integrate with the official client libs - but you can talk to MySQL right now by dropping in DBSlayer from the NY Times which provides an HTTP interface to a pool of MySQL servers. I don’t see myself switching all of my server-side development over to JavaScript, but Node has definitely earned a place in my toolbox. It shouldn’t be at all hard to mix Node in to an existing server-side environment - either by running both behind a single HTTP proxy (being event-based itself, nginx would be an obvious fit) or by putting Node applications on a separate subdomain. Node is a tempting option for anything involving comet, file uploads or even just mashing together potentially slow loading web APIs. Expect to hear a lot more about it in the future. I’ve been getting a lot of useful work done with Redis recently. Redis is typically categorised as yet another of those new-fangled NoSQL key/value stores, but if you look closer it actually has some pretty unique characteristics. It makes more sense to describe it as a “data structure server” - it provides a network service that exposes persistent storage and operations over dictionaries, lists, sets and string values. Think memcached but with list and set operations and persistence-to-disk. It’s also incredibly easy to set up, ridiculously fast (30,000 read or writes a second on my laptop with the default configuration) and has an interesting approach to persistence. Redis runs in memory, but syncs to disk every Y seconds or after every X operations. Sounds risky, but it supports replication out of the box so if you’re worried about losing data should a server fail you can always ensure you have a replicated copy to hand. I wouldn’t trust my only copy of critical data to it, but there are plenty of other cases for which it is really well suited. I’m currently not using it for data storage at all - instead, I use it as a tool for processing data using the interactive Python interpreter. I’m a huge fan of REPLs. When programming Python, I spend most of my time in an IPython prompt. With JavaScript, I use the Firebug console. I experiment with APIs, get something working and paste it over in to a text editor. For some one-off data transformation problems I never save any code at all - I run a couple of list comprehensions, dump the results out as JSON or CSV and leave it at that. Redis is an excellent complement to this kind of programming. I can run a long running batch job in one Python interpreter (say loading a few million lines of CSV in to a Redis key/value lookup table) and run another interpreter to play with the data that’s already been collected, even as the first process is streaming data in. I can quit and restart my interpreters without losing any data. And because Redis semantics map closely to Python native data types, I don’t have to think for more than a few seconds about how I’m going to represent my data. Here’s a 30 second guide to getting started with Redis: $ wget http://redis.googlecode.com/files/redis-1.01.tar.gz $ tar -xzf redis-1.01.tar.gz $ cd redis-1.01 $ make $ ./redis-server And that’s it - you now have a Redis server running on port 6379. No need even for a ./configure or make install. You can run ./redis-benchmark in that directory to exercise it a bit. Let’s try it out from Python. In a separate terminal: $ cd redis-1.01/client-libraries/python/ $ python >>> import redis >>> r = redis.Redis() >>> r.info() {u'total_connections_received': 1, ... } >>> r.keys('*') # Show all keys in the database [] >>> r.set('key-1', 'Value 1') 'OK' >>> r.keys('*') [u'key-1'] >>> r.get('key-1') u'Value 1' Now let’s try something a bit more interesting: >>> r.push('log', 'Log message 1', tail=True) >>> r.push('log', 'Log message 2', tail=True) >>> r.push('log', 'Log message 3', tail=True) >>> r.lrange('log', 0, 100) [u'Log message 3', u'Log message 2', u'Log message 1'] >>> r.push('log', 'Log message 4', tail=True) >>> r.push('log', 'Log message 5', tail=True) >>> r.push('log', 'Log message 6', tail=True) >>> r.ltrim('log', 0, 2) >>> r.lrange('log', 0, 100) [u'Log message 6', u'Log message 5', u'Log message 4'] That’s a simple capped log implementation (similar to a MongoDB capped collection) - push items on to the tail of a ‘log’ key and use ltrim to only retain the last X items. You could use this to keep track of what a system is doing right now without having to worry about storing ever increasing amounts of logging information. See the documentation for a full list of Redis commands. I’m particularly excited about the RANDOMKEY and new SRANDMEMBER commands (git trunk only at the moment), which help address the common challenge of picking a random item without ORDER BY RAND() clobbering your relational database. In a beautiful example of open source support in action, I requested SRANDMEMBER on Twitter yesterday and antirez committed just 12 hours later. I used Redis this week to help create heat maps of the BNP’s membership list for the Guardian. I had the leaked spreadsheet of the BNP member details and a (licensed) CSV file mapping 1.6 million postcodes to their corresponding parliamentary constituencies. I loaded the CSV file in to Redis, then looped through the 12,000 postcodes from the membership and looked them up in turn, accumulating counts for each constituency. It took a couple of minutes to load the constituency data and a few seconds to run and accumulate the postcode counts. In the end, it probably involved less than 20 lines of actual Python code. A much more interesting example of an application built on Redis is Hurl, a tool for debugging HTTP requests built in 48 hours by Leah Culver and Chris Wanstrath. The code is now open source, and Chris talks a bit more about the implementation (in particular their use of sort in Redis) on his blog. Redis also gets a mention in Tom Preston-Werner’s epic writeup of the new scalable architecture behind GitHub.
I'm trying to package the pychess package into a zip file and import it with zipimport, but running into some issues. I've packaged it into a zipfile with the following script, which works: #!/usr/bin/env python import zipfile zf = zipfile.PyZipFile('../pychess.zip.mod', mode='w') try: zf.writepy('.') finally: zf.close() for name in zf.namelist(): print name However, I'm unable to do complicated imports in my code: z = zipimport.zipimporter('./pychess.zip.mod') #z.load_module('pychess') # zipimport.ZipImportError: can't find module 'pychess' #z.load_module('Utils.lutils') # zipimport.ZipImportError: can't find module 'Utils.lutils' Utils = z.load_module('Utils') # seems to work, but... from Utils import lutils #from Utils.lutils import LBoard # ImportError: No module named pychess.Utils.const How can I import, e.g. pychess.Utils.lutils.LBoard from the zip file? Here is the full list of modules I need to import: import pychess from pychess.Utils.lutils import LBoard from pychess.Utils.const import * from pychess.Utils.lutils import lmovegen from pychess.Utils.lutils import lmove Thanks!
In python, True and False are integers (1 and 0 respectively). You could use a boolean (True or False) and the not operator: var = not var Of course, if you want to iterate between other numbers than 0 and 1, this trick becomes a little more difficult. To pack this into an admittedly ugly function: def alternate(): alternate.x=not alternate.x return alternate.x alternate.x=True #The first call to alternate will return False (0) mylist=[5,3] print(mylist[alternate()]) #5 print(mylist[alternate()]) #3 print(mylist[alternate()]) #5
I'm trying to write some python code which can create multipart mime http requests in the client, and then appropriately interpret then on the server. I have, I think, partially succeeded on the client end with this: from email.mime.multipart import MIMEMultipart, MIMEBase import httplib h1 = httplib.HTTPConnection('localhost:8080') msg = MIMEMultipart() fp = open('myfile.zip', 'rb') base = MIMEBase("application", "octet-stream") base.set_payload(fp.read()) msg.attach(base) h1.request("POST", "http://localhost:8080/server", msg.as_string()) The only problem with this is that the email library also includes the Content-Type and MIME-Version headers, and I'm not sure how they're going to be related to the HTTP headers included by httplib: Content-Type: multipart/mixed; boundary="===============2050792481==" MIME-Version: 1.0 --===============2050792481== Content-Type: application/octet-stream MIME-Version: 1.0 This may be the reason that when this request is received by my web.py application, I just get an error message. The web.py POST handler: class MultipartServer: def POST(self, collection): print web.input() Throws this error: Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 242, in process return self.handle() File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 233, in handle return self._delegate(fn, self.fvars, args) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 415, in _delegate return handle_class(cls) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 390, in handle_class return tocall(*args) File "/home/richard/Development/server/webservice.py", line 31, in POST print web.input() File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/webapi.py", line 279, in input return storify(out, *requireds, **defaults) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 150, in storify value = getvalue(value) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 139, in getvalue return unicodify(x) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 130, in unicodify if _unicode and isinstance(s, str): return safeunicode(s) File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 326, in safeunicode return obj.decode(encoding) File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 137-138: invalid data My line of code is represented by the error line about half way down: File "/home/richard/Development/server/webservice.py", line 31, in POST print web.input() It's coming along, but I'm not sure where to go from here. Is this a problem with my client code, or a limitation of web.py (perhaps it just can't support multipart requests)? Any hints or suggestions of alternative code libraries would be gratefully received. EDIT The error above was caused by the data not being automatically base64 encoded. Adding encoders.encode_base64(base) Gets rid of this error, and now the problem is clear. HTTP request isn't being interpreted correctly in the server, presumably because the email library is including what should be the HTTP headers in the body instead: <Storage {'Content-Type: multipart/mixed': u'', ' boundary': u'"===============1342637378=="\n' 'MIME-Version: 1.0\n\n--===============1342637378==\n' 'Content-Type: application/octet-stream\n' 'MIME-Version: 1.0\n' 'Content-Transfer-Encoding: base64\n' '\n0fINCs PBk1jAAAAAAAAA.... etc So something is not right there. Thanks Richard
The idea of recursion is not very common in real world. So, it seems a bit confusing to the novice programmers. Though, I guess, they become used to the concept gradually. So, what can be a nice explanation for them to grasp the idea easily? To explain recursion, I use a combination of different explanation, usually to both try to: Maths If your student (or the person you explain too, from now on I'll say student) has at least some mathematical background, they've obviously already encountered recursion by studying series and their notion of recursivity and their recurrence relation. A very good way to start is then to demonstrate with a series and tell that it's quite simply what recursion is about: Usually, you either get a "huh huh, whatev'" at best because they still do not use it, or more likely just a very deep snore. Coding Examples At this stage, my students usually know how to print something to the screen. Assuming we are using C, they know how to print a single char using I usually resort to a few repetitive and simple programming problems until they get it: Factorial is a very simple math concept to understand, and the implementation is very close to its mathematical representation. However, they might not get it at first. The alphabet version is interesting to teach them to think about the ordering of their recursive statements. Like with pointers, they will just throw lines randomly at you. The point is to bring them to the realization that a loop can be inverted by either modifying the conditions The exponentiation problem is slightly more difficult ( Its simple form: can be expressed like this by recurrence: Once these simple problems have been shown AND re-implemented in tutorials, you can give slightly more difficult (but very classic) exercises: Helpers Some reading never hurts. Well it will at first, and they'll feel even more lost. It's the sort of thing that grows on you and that sits in the back of your head until one day your realize that you finally get it. And then you think back of these stuff you read. The recursion, recursion in Computer Science and recurrence relation pages on Wikipedia would do for now. Assuming your students do not have much coding experience, provide code stubs. After the first attempts, give them a printing function that can display the recursion level. Printing the numerical value of the level helps. Indenting a printed result (or the level's output) helps as well, as it gives another visual representation of what your program is doing, opening and closing stack contexts like drawers, or folders in a file system explorer. If your student is already a bit versed into computer culture, they might already use some projects/softwares with names using recursive acronyms. It's been a tradition going around for some time, especially in GNU projects. Some examples include: Recursive: Mutually Recursive: Have them try to come up with their own. Similarly, there are many occurrences of recursive humor, like Google's recursive search correction. For more information on recursion, read this answer. Pitfalls and Further Learning Some issues that people usually struggle with and for which you need to know answers. Why would you do that? A good but non-obvious reason is that it is often simpler to express a problem that way. A not-so-good but obvious reason is that it often takes less typing (don't make them feel soooo l33t for just using recursion though...). Some problems are definitely easier to solve when using a recursive approach. Typically, any problem you can solve using a Divide and Conquer paradigm will fit a multi-branched recursion algorithm. Why is my How do I determine my end condition? That's easy, just have them say the steps out loud. For instance for the factorial start from 5, then 4, then ... until 0. Do not talk to early abut things like tail call optimization. I know, I know, TCO is nice, but they don't care at first. Give them some time to wrap their heads around the process in a way that works for them. Feel free to shatter their world again later on, but give them a break. Similarly, don't talk straight from the first lecture about the call stack and its memory consumption and ... well... the stack overflow. I often tutor students privately who show me lectures where they have 50 slides about But please, in due time, make it clear that there are reasons to go the iterative or recursive route. We've seen that functions can be recursive, and even that they can have multiple call points (8-queens, Hanoi, Fibonacci or even an exploration algorithm for a minesweeper). But what about mutually recursive calls? Start with maths here as well. Starting with just mathematical series makes it easier to write and implement as the contract is clearly defined by the expressions. For instance, the Hofstadter Female and Male Sequences: However in terms of code, it is to be noted that the implementation of a mutually recursive solution often leads to code duplication and should rather be streamlined into a single recursive form (See Peter Norvig's Solving Every Sudoku Puzzle. Recursion is a function that calls itself. How to use it, when to use it and how to avoid bad design are important to know, which requires you to try it out for yourself, and understand what happens. The most important thing you need to know is to be very careful to not get a loop that never ends. The answer from pramodc84 to your question has this fault: It never ends... The most classic example to use recursion, is to work with a tree with no static limits in depth. This is a task that you must use recursion. Every recursive function tends to: When step 2 is before 3, and when step 4 is trivial (a concatenation, sum, or nothing) this enables tail recursion. Step 2 often must come after step 3, as the results from the subdomain(s) of the problem may be needed in order to complete the current step. Take the traversal of a straight forward binary tree. The traversal can be made in pre-order, in-order, or post-order, depending on what is required. traverse(tree): visit the node traverse(left) traverse(right) traverse(tree): traverse(left) visit the node traverse(right) traverse(tree): traverse(left) traverse(right) visit the node Okay i am going to try to keep this simple and concise. Recursive function are functions that call themselves. Recursive function consist of three things: Best ways to write recursive methods, is to think of the method that you trying to write as a simple example only handling one loop of the process you want to iterate over, then add the call to the method itself, and add when you want to terminate. Best way to learn is to practice like all things. Since this is programmers website I will refrain from writing code but here is a good link if you got that joke you got what recursion means. The OP said that recursion doesn't exist in the real world, but I beg to differ. Let's take the real world 'operation' of cutting up a pizza. You've taken the pizza out of the oven and in order to serve it up you have to cut it in half, then cut those halves in half, then again cut those resultant halves in half. The operation of cutting the pizza you performing again and again until you've got the result you want (the number of slices). And for arguments sake let's say that an uncut pizza is a slice itself. Here's an example in Ruby: def cut_pizza(existing_slices, desired_slices) if existing_slices != desired_slices # we don't have enough slices yet to feed everyone, so # we're cutting the pizza slices, thus doubling their number new_slices = existing_slices * 2 # and this here is the recursive call cut_pizza(new_slices, desired_slices) else # we have the desired number of slices, so we return # here instead of continuing to recurse return existing_slices end end pizza = 1 # a whole pizza, 'one slice' cut_pizza(pizza, 8) # => we'll get 8 So the real world operation is cutting a pizza, and the recursion is doing the same thing over and over until you have what you want. Operations you'll find that crop up that you can implement with recursive functions are: I recommend writing a program to look for a file based on it's file name, and try to write a function that calls itself until it's found, the signature would look like this: So you could call it like this: It's simply programming mechanics in my opinion, a way of cleverly removing duplication. You can rewrite this by using variables, but this is a 'nicer' solution. There is nothing mysterious or difficult about it. You'll write a couple of recursive functions, it'll click and Recursion is a tool a programmer can use to invoke a function call on itself. Fibonacci sequence is the textbook example of how recursion is used. Most recursive code if not all can be expressed as iterative function, but its usually messy. Good examples of other recursive programs are Data Structures such as trees, binary search tree and even quicksort. Recursion is used to make code less sloppy, keep in mind it is usually slower and requires more memory. I like to use this one: How do you walk to the store? If you're at the entrance to the store, simply go through it. Otherwise, take one step, then walk the rest of the way to the store. It's critical to include three aspects: We actually use recursion a lot in daily life; we just don't think of it that way. The best example that I would point you to is C Programming Language by K & R. In that book (and I am quoting from memory), the entry in index page for recursion (alone) lists the actual page where they talk about recursion and the index page as well. The classic example is finding the factorial of a number, n!. 0!=1, and for any other natural number N, the factorial of N is the product of all natural numbers less than or equal to N. So, 6! = 6*5*4*3*2*1 = 720. This basic definition would allow you to create a simple iterative solution: int Fact(int degree) { int result = 1; for(int i=degree; i>1; i--) result *= i; return result; } However, examine the operation again. 6! = 6*5*4*3*2*1. By the same definition, 5! = 5*4*3*2*1, meaning that we can say 6! = 6*(5!). In turn, 5! = 5*(4!) and so on. By doing this, we reduce the problem to an operation performed on the result of all previous operations. This eventually reduces to a point, called a base case, where the result is known by definition. In our case, 0!=1 (we could in most cases also say that 1!=1). In computing, we are often allowed to define algorithms in a very similar way, by having the method call itself and pass a smaller input, thus reducing the problem through many recursions to a base case: int Fact(int degree) { if(degree==0) return 1; //the base case; 0! = 1 by definition else return degree * Fact(degree -1); //the recursive case; N! = N*(N-1)! } This can, in many languages, be further simplified using the ternary operator (sometimes seen as an Iif function in languages that don't provide the operator as such): int Fact(int degree) { //reads equivalently to the above, but is concise and often optimizable return degree==0 ? 1: degree * Fact(degree -1); } Advantages: Disadvantages: The example I use is a problem I faced in real life. You have a container (such as a large backpack you intend to take on a trip) and you want to know the total weight. You have in the container two or three loose items, and some other containers (say, stuff sacks.) The weight of the total container is obviously the weight of the empty container plus the weight of everything in it. For the loose items, you can just weigh them, and for the stuff sacks you could just weigh them or you could say "well the weight of each stuffsack is the weight of the empty container plus the weight of everything in it". And then you keep going into containers into containers and so on until you get to a point where there are just loose items in a container. That's recursion. You may think that never happens in real life, but imagine trying to count, or add up the salaries of, people in a particular company or division, which has a mixture of people who just work for the company, people in divisions, then in the divisions there are departments and so on. Or sales in a country that has regions, some of which have subregions, etc etc. These sorts of problems happen all the time in business. Josh K already mentioned the Matroshka dolls. Assume that you want to learn something that only the shortest doll knows. The problem is that you can't really talk to her directly, because she originally lives inside the taller doll which on the first picture is placed on her left. This structure goes like that (a doll lives inside the taller doll) until ending up only with the tallest one. So the only thing that you can do is to ask your question to the tallest doll. The tallest doll (who doesn't know the answer) will need to pass your question to the shorter doll (which on the first picture is on her right). Since she also doesn't have the answer she needs to ask the next shorter doll. This will go like that until the message reaches the shortest doll. The shortest doll (who is the only one who knows the secret answer) will pass the answer to the next taller doll (found on her left), which will pass it to the next taller doll... and this will continue until the answer reaches its final destination, which is the tallest doll and finally... you :) This is what recursion really does. A function/method calls itself until getting the expected answer. That's why when you write recursive code it's very important to decide about when recursion should terminate. Not the best explanation but it hopefully helps. Recursion can be used to solve a lot of counting problems. For example, say you have a group of n people at a party (n > 1), and everyone shakes everyone else's hand exactly once. How many handshakes take place? You may know that the solution is C(n,2) = n(n-1)/2, but you can solve recursively as follows: Suppose there are just two people. Then (by inspection) the answer is obviously 1. Suppose you have three people. Single out one person, and note that he/she shakes hands with two other people. After that you have to count just the handshakes between the other two people. We already did that just now, and it is 1. So the answer is 2 + 1 = 3. Suppose you have n people. Following the same logic as before, it is (n-1) + (number of handshakes between n-1 people). Expanding, we get (n-1) + (n-2) + ... + 1. Expressed as a recursive function, f(2) = 1 In life (as opposed to in a computer programme) recursion rarely happens under our direct control, because it can be confusing to make happen. Also, perception tends to be about the side effects, rather than being functionally pure, so if recursion is happening you might not notice it. Recursion does happen out here in the world though. A lot. A good example is (a simplified version of) the water cycle: This is a cycle that causes its self to happen again. It is recursive. Another place you can get recursion is in English (and human language in general). You might not recognise it at first, but the way we can generate a sentence is recursive, because the rules allow us to embed one instance of a symbol in side another instance of the same symbol. From Steven Pinker's The Language Instinct: That is a whole sentence that contains other whole sentences: The act of understanding the full sentence involves understanding smaller sentences, which use the same set of mental trickery to be understood as the full sentence. To understand recursion from a programming perspective it's easiest to look at a problem that can be solved with recursion, and understand why it should be and what that means you need to do. For the example I will use the greatest common divisor function, or gcd for short. You have your two numbers You should already be able to see that this is a recursive function, as you have the gcd function calling the gcd function. Just to hammer it home, here it is in c# (again, assuming 0 never gets passed in as a parameter): int gcd(int a, int b) { if (a % b == 0) //this is a stopping condition { return b; } return (gcd(b, a % b)); //the call to gcd here makes this function recursive } In a programme, it is important to have a stopping condition, otherwise you function will recur forever, which will eventually cause a stack overflow! The reason to use recursion here, rather than a while loop or some other iterative construct, is that as you read the code it tells you what it is doing and what will happen next, so it is easier to figure out if it is working correctly. Here is a real world example for recursion. Let them imagine that they have a comic collection and you're going to mix it all up into a big pile. Careful -- if they really have a collection, they may instantly kill you when you just mention the idea to do so. Now let them sort this big unsorted pile of comics with the help of this manual: Manual: How to sort a pile of comics Check the pile if it is already sorted. If it is, then done. As long as there are comics in the pile, put each one on another pile, ordered from left to right in ascending order: If your current pile contains different comics, pile them by comic. If not and your current pile contains different years, pile them by year. If not and your current pile contains different tenth digits, pile them by this digit: Issue 1 to 9, 10 to 19, and so on. If not then "pile" them by issue number. Refer to the "Manual: How to sort a pile of comics" to separately sort each of the new piles. Collect the piles back to a big pile from left to right. Done. The nice thing here is: When they are down to single issues, they have the full "stack frame" with the local piles visible before them on the ground. Give them multiple printouts of the manual and put one aside each pile level with a mark where you currently are on this level (ie. the state of the local variables), so you can continue there on each Done. That's what recursion basically is about: Performing the very same process, just on a finer detail level the more you go into it. Recursion is a very concise way to express something that has to be repeated until something is reached. Not plain english, not really real life examples, but two ways of learning recursion by playing: A nice explanaition of recursion is literally 'an action that reoccurs from within itself". Consider a painter painting a wall, it's recursive because the action is "paint a strip from ceiling to floor than scoot over a little to the right and (paint a strip from ceiling to floor than scoot over a little to the right and (paint a strip from ceiling to floor than scoot over a little to the right and ( etc )))". His paint() function calls itself over and over again to make up his bigger paint_wall() function. Hopefully this poor painter has some kind of stop condition :) protected by Josh K Feb 15 '11 at 13:45 Thank you for your interest in this question. Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site. Would you like to answer one of these unanswered questions instead?
___ _ __ _______ ___ / _ \ '_ \_ / _ \/ _ \ | __/ |_) / / __/ __/ \___| .__/___\___|\___| |_| Q&A for professional and enthusiast programmers Q&A for computer enthusiasts and power users Q&A for Ubuntu users and developers Q&A for linguists, etymologists, and serious English language enthusiasts Q&A for user experience researchers and experts Q&A for meta-discussion of the Stack Exchange family of Q&A websites Q&A for professional system and network administrators Q&A for professional programmers interested in conceptual questions about software development
Hub49 logiciel de calcul de surfaces Bonjour, J'ai vu sur Framasoft pléthore de logiciels, mais lequel pourrait convenir selon vous pour mon activité de couvreur ? Je dois en effet faire face parfois à des formes de toits complexes et faire un métrage est hasardeux parfois... Si il existe un logiciel qui à partir de deux trois points peut faire cela, je suis preneur (j'ai vu calgeo, en attentant). Merci à tous Vieux PC portable Amilo 1640 fujitsu-siemens et un joli Ubuntu 12.04, une souris, un clavier... Hors ligne JBF Re : logiciel de calcul de surfaces Bonjour, Complexe ? C'est à dire ? Est-ce que tu as autre chose qu'un assemblage de rectangles, de triangles et de trapèzes ? JBF LibreOffice : http://fr.libreoffice.org/ (téléchargement, documentation, FAQ, assistance, contribuer, ...) Hors ligne Hub49 Re : logiciel de calcul de surfaces Vu comme ça, non, finalement je n'ai pas autre chose. Tu as une idée ? Vieux PC portable Amilo 1640 fujitsu-siemens et un joli Ubuntu 12.04, une souris, un clavier... Hors ligne pingouinux Re : logiciel de calcul de surfaces Bonjour, Sous quelle forme les données sont-elles fournies ? Hors ligne JBF Re : logiciel de calcul de surfaces Vu comme ça, non, finalement je n'ai pas autre chose. Tu as une idée ? Oui, je ferais ça avec un tableur avec une ligne ou une colonne pour chaque élément. Pour chaque élément il faut donner les dimensions qui permettent de calculer sa surface. Il suffit ensuite d'avoir une case dans cette ligne ou colonne avec la formule idoine. Et enfin une cellule à part qui fait le total général. Selon la façon la plus pratique de prendre les mesures de chaque type d'élément, il peut être nécessaire d'ajouter des cellules intermédiaires. Par exemple, si pour un rectangle tu levais les coordonnées des sommets (ça me surprendrait, mais je ne suis pas couvreur), il faut un peu de calcul intermédiaire pour déterminer les longueurs des cotés. JBF LibreOffice : http://fr.libreoffice.org/ (téléchargement, documentation, FAQ, assistance, contribuer, ...) Hors ligne Hub49 Re : logiciel de calcul de surfaces mais n'étant pas bon avec un tableur, je me compliquerais la tâche. Il existe un logiciel dédié à la couverture qui s'appelle roof-it, mais bon, vu le prix, je ne peux pas me le permettre, pour l'instant du moins. Il doit bien y avoir d'autres logiciels élémentaires. Vieux PC portable Amilo 1640 fujitsu-siemens et un joli Ubuntu 12.04, une souris, un clavier... Hors ligne pingouinux Re : logiciel de calcul de surfaces Pour connaître l'aire d'une surface délimitée par des segments de droite, il suffit de connaître les coordonnées des sommets (dans l'ordre du contour). J'ai un programme en python qui calcule ça. Hors ligne Hub49 Re : logiciel de calcul de surfaces Aahh tu m’intéresses ! mais comme tu l'as peut-être compris, je ne suis pas doué en informatique, tout ça. Ton programme est-il utilisable à partir d'une simple interface graphique ? Vieux PC portable Amilo 1640 fujitsu-siemens et un joli Ubuntu 12.04, une souris, un clavier... Hors ligne pingouinux Re : logiciel de calcul de surfaces Le programme (ci-dessous) est à lancer dans un terminal. $ cat calcul_aire.py #! /usr/bin/python # -*- coding: utf-8 -*- import sys def calcul(x1,y1,x2,y2) : return .5*(y1+y2)*(x1-x2) k=0; aire=0. with open(sys.argv[1],'r') as f : while True : try : x,y=map(float,f.readline().split()) if k==0 : x0,y0=x,y else : aire += calcul(xav,yav,x,y) xav,yav=x,y print("x=%f y=%f"%(x,y)) except ValueError : break k+=1 aire += calcul(x,y,x0,y0) print("aire=%f"%(aire)) Voici un fichier de données (un carré amputé d'un triangle) : $ cat coord0 01 0.5 .51 10 1 Et voici comment on l'utilise $ ./calcul_aire.py coord x=0.000000 y=0.000000 x=1.000000 y=0.000000 x=0.500000 y=0.500000 x=1.000000 y=1.000000 x=0.000000 y=1.000000 x=0.000000 y=0.000000 aire=0.750000 Vois si ça peut t'être utile. Il faudra sans doute adapter un peu le programme pour ton besoin. Hors ligne Hub49 Re : logiciel de calcul de surfaces ça a l'air vachement bien mais pas évident au premier abord. J'ai quelques retours d'erreurs: def calcul(x1,y1,x2,y2) : return .5*(y1+y2)*(x1-x2) bash: Erreur de syntaxe près du symbole inattendu « ( » gil@gil-Amilo-A1640:~$ gil@gil-Amilo-A1640:~$ k=0; aire=0. gil@gil-Amilo-A1640:~$ with open(sys.argv[1],'r') as f : bash: Erreur de syntaxe près du symbole inattendu « ( » gil@gil-Amilo-A1640:~$ while True : > try : > x,y=map(float,f.readline().split()) bash: Erreur de syntaxe près du symbole inattendu « float,f.readline » gil@gil-Amilo-A1640:~$ if k==0 : x0,y0=x,y > else : bash: Erreur de syntaxe près du symbole inattendu « else » gil@gil-Amilo-A1640:~$ aire += calcul(xav,yav,x,y) bash: Erreur de syntaxe près du symbole inattendu « ( » gil@gil-Amilo-A1640:~$ xav,yav=x,y xav,yav=x,y : commande introuvable gil@gil-Amilo-A1640:~$ print("x=%f y=%f"%(x,y)) bash: Erreur de syntaxe près du symbole inattendu « "x=%f y=%f"% » gil@gil-Amilo-A1640:~$ except ValueError : break Le programme 'except' n'est pas encore installé. Vous pouvez l'installer en tapant : sudo apt-get install qmail gil@gil-Amilo-A1640:~$ k+=1 gil@gil-Amilo-A1640:~$ aire += calcul(x,y,x0,y0) bash: Erreur de syntaxe près du symbole inattendu « ( » gil@gil-Amilo-A1640:~$ Vieux PC portable Amilo 1640 fujitsu-siemens et un joli Ubuntu 12.04, une souris, un clavier... Hors ligne pingouinux Re : logiciel de calcul de surfaces Il ne faut pas taper toutes les lignes du programme dans le terminal, mais générer le script calcul_aire.py avec un éditeur de texte. Ce script contient toutes les lignes à partir de#! /usr/bin/python jusqu'àprint("aire=%f"%(aire)) Il faut ensuite donner la permission d'exécution au fichier : chmod 700 calcul_aire.py On crée un fichier de coordonnées, par exemple coord Et on lance le prgramme ainsi ./calcul_aire.py coord Dernière modification par pingouinux (Le 01/02/2013, à 17:59) Hors ligne
SkippedResults When querying RavenDB, you sometimes get a result that contained skipped results. What are those? And why do we care? Let us assume that we have the following index: from img in docs.Images from tag in img.Tags select new { tag } And you issue the following query: /indexes/ImagesByTag?query=tag:NoSQL Each image may have multiple tags, so it may have multiple results in the index. Here is an example of the actual physical index structure: { "__document_id": "imgs/1", "tag": "RavenDB" } { "__document_id": "imgs/1", "tag": "NoSql" } { "__document_id": "imgs/2", "tag": "NoSQL" } { "__document_id": "imgs/2", "tag": "NoSql" } { "__document_id": "imgs/3", "tag": "Databases" } As you can see, we have several documents that contains multiple results for the same document. Now, the query above is going to return the follow results from the index: { "__document_id": "imgs/1", "tag": "NoSql" } { "__document_id": "imgs/2", "tag": "NoSQL" } { "__document_id": "imgs/2", "tag": "NoSql" } Note that imgs/2 appears twice in the result set, however, when we are querying for documents, there isn't really a point in returning the same document twice (and it drastically increase the response size), so we filter it out and return each document only once. When SkippedResults is greater than 0 it implies that we skipped over some results in the index because they represent a document that we already load. We have to report this information to the client, because it is an important factor when paging. You starting point is (pageSize * currentPage + SkippedResults), not just (pageSize * currentPage).
We have two applications that are both running on Google App Engine. App1 makes requests to app2 as an authenticated user. The authentication works by requesting an authentication token from Google ClientLogin that is exchanged for a cookie. The cookie is then used for subsequent requests (as described here). App1 runs the following code: class AuthConnection: def __init__(self): self.cookie_jar = cookielib.CookieJar() self.opener = urllib2.OpenerDirector() self.opener.add_handler(urllib2.ProxyHandler()) self.opener.add_handler(urllib2.UnknownHandler()) self.opener.add_handler(urllib2.HTTPHandler()) self.opener.add_handler(urllib2.HTTPRedirectHandler()) self.opener.add_handler(urllib2.HTTPDefaultErrorHandler()) self.opener.add_handler(urllib2.HTTPSHandler()) self.opener.add_handler(urllib2.HTTPErrorProcessor()) self.opener.add_handler(urllib2.HTTPCookieProcessor(self.cookie_jar)) self.headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; ' +\ 'Windows NT 6.1; en-US; rv:1.9.1.2) ' +\ 'Gecko/20090729 Firefox/3.5.2 ' +\ '(.NET CLR 3.5.30729)' } def fetch(self, url, method, payload=None): self.__updateJar(url) request = urllib2.Request(url) request.get_method = lambda: method for key, value in self.headers.iteritems(): request.add_header(key, value) response = self.opener.open(request) return response.read() def __updateJar(self, url): cache = memcache.Client() cookie = cache.get('auth_cookie') if cookie: self.cookie_jar.set_cookie(cookie) else: cookie = self.__retrieveCookie(url=url) cache.set('auth_cookie', cookie, 5000) def __getCookie(self, url): auth_url = 'https://www.google.com/accounts/ClientLogin' auth_data = urllib.urlencode({'Email': USER_NAME, 'Passwd': PASSPHRASE, 'service': 'ah', 'source': 'app1', 'accountType': 'HOSTED_OR_GOOGLE' }) auth_request = urllib2.Request(auth_url, data=auth_data) auth_response_body = self.opener.open(auth_request).read() auth_response_dict = dict(x.split('=') for x in auth_response_body.split('\n') if x) cookie_args = {} cookie_args['continue'] = url cookie_args['auth'] = auth_response_dict['Auth'] cookie_url = 'https://%s/_ah/login?%s' %\ ('app2.appspot.com', (urllib.urlencode(cookie_args))) cookie_request = urllib2.Request(cookie_url) for key, value in self.headers.iteritems(): cookie_request.add_header(key, value) try: self.opener.open(cookie_request) except: pass for cookie in self.cookie_jar: if cookie.domain == 'app2domain': return cookie For 10-30% of the requests a DownloadError is raised: Error fetching https://app2/Resource Traceback (most recent call last): File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/authenticate.py", line 112, in fetch response = self.opener.open(request) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 381, in open response = self._open(req, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 399, in _open '_open', req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1115, in https_open return self.do_open(httplib.HTTPSConnection, req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1080, in do_open r = h.getresponse() File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 197, in getresponse self._allow_truncated, self._follow_redirects) File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/monkeypatch_urlfetch_deadline.py", line 18, in new_fetch follow_redirects, deadline, *args, **kwargs) File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch return rpc.get_result() File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result return self.__get_result_hook(self) File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 325, in _get_fetch_result raise DownloadError(str(err)) DownloadError: ApplicationError: 2 The request logs for app2 (the "server") seem fine, as expected (according to the docs DownloadError is only raised if there was no valid HTTP response). Why is the exception raised?
I was trying to port forward my minecraft server, using port 25566, and for some reason it wasn't working. So, I opened minecraft to enter localhost, and after clicking login, the window closed and placed an error log report on my desktop. The same thing happens with all subsequent tries. Here is the error report: # # A fatal error has been detected by the Java Runtime Environment: # # EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x690ba2a1, pid=3152, tid=3372 # # JRE version: 6.0_26-b03 # Java VM: Java HotSpot(TM) Client VM (20.1-b02 mixed mode windows-x86 ) # Problematic frame: # C [atioglxx.dll+0x8a2a1] # # If you would like to submit a bug report, please visit: # http://java.sun.com/webapps/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # --------------- T H R E A D --------------- Current thread (0x0200cc00): JavaThread "Minecraft main thread" daemon [_thread_in_native, id=3372, stack(0x4ef10000,0x4ef60000)] siginfo: ExceptionCode=0xc0000005, reading address 0x00000b54 Registers: EAX=0x00000000, EBX=0x4f31c778, ECX=0x00000000, EDX=0x00000000 ESP=0x4ef548a0, EBP=0x4ef561cc, ESI=0x00000000, EDI=0x00000001 EIP=0x690ba2a1, EFLAGS=0x00010212 Top of Stack: (sp=0x4ef548a0) 0x4ef548a0: 00000000 00000000 00000000 4f31c778 0x4ef548b0: 00000000 00000000 00000000 00000000 0x4ef548c0: 00000000 00000000 00000000 00000000 0x4ef548d0: 00000000 00000000 000000f5 00000000 0x4ef548e0: 00000000 00000000 00000000 00000000 0x4ef548f0: 00000000 00000000 00000000 00000000 0x4ef54900: 00000000 00000001 00000000 00000000 0x4ef54910: 00000000 00000000 4f1e9be8 0000000e Instructions: (pc=0x690ba2a1) 0x690ba281: 14 8d 43 10 50 8d 4c 24 1c 51 56 57 8d 7c 24 50 0x690ba291: e8 0a 21 04 00 8b 53 10 bf 01 00 00 00 89 43 08 0x690ba2a1: 39 ba 54 0b 00 00 0f 85 85 00 00 00 68 f8 bc f0 0x690ba2b1: 69 e8 07 0b d6 00 83 c4 04 85 c0 74 74 be bc 56 Register to memory mapping: EAX=0x00000000 is an unknown value EBX=0x4f31c778 is an unknown value ECX=0x00000000 is an unknown value EDX=0x00000000 is an unknown value ESP=0x4ef548a0 is pointing into the stack for thread: 0x0200cc00 EBP=0x4ef561cc is pointing into the stack for thread: 0x0200cc00 ESI=0x00000000 is an unknown value EDI=0x00000001 is an unknown value Stack: [0x4ef10000,0x4ef60000], sp=0x4ef548a0, free space=274k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [atioglxx.dll+0x8a2a1] DrvPresentBuffers+0x3e821 Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(JIILorg/lwjgl/opengl/PixelFormat;Ljava/nio/IntBuffer;ZZZZ)I+0 j org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(JIILorg/lwjgl/opengl/PixelFormat;Ljava/nio/IntBuffer;ZZZZ)I+15 j org.lwjgl.opengl.WindowsDisplay.createWindow(Lorg/lwjgl/opengl/DisplayMode;Ljava/awt/Canvas;II)V+176 j org.lwjgl.opengl.Display.createWindow()V+68 j org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;Lorg/lwjgl/opengl/Drawable;Lorg/lwjgl/opengl/ContextAttribs;)V+63 j org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;)V+9 j org.lwjgl.opengl.Display.create()V+13 j net.minecraft.client.Minecraft.a()V+135 j net.minecraft.client.Minecraft.run()V+6 j java.lang.Thread.run()V+11 v ~StubRoutines::call_stub --------------- P R O C E S S --------------- Java Threads: ( => current thread ) =>0x0200cc00 JavaThread "Minecraft main thread" daemon [_thread_in_native, id=3372, stack(0x4ef10000,0x4ef60000)] 0x0200d400 JavaThread "Timer hack thread" daemon [_thread_blocked, id=3368, stack(0x4ee80000,0x4eed0000)] 0x0200a800 JavaThread "Keep-Alive-Timer" daemon [_thread_blocked, id=3356, stack(0x4d490000,0x4d4e0000)] 0x0200c800 JavaThread "D3D Screen Updater" daemon [_thread_blocked, id=3256, stack(0x4e920000,0x4e970000)] 0x0200c000 JavaThread "DestroyJavaVM" [_thread_blocked, id=3160, stack(0x00360000,0x003b0000)] 0x0200b400 JavaThread "TimerQueue" daemon [_thread_blocked, id=3252, stack(0x4d650000,0x4d6a0000)] 0x0200b000 JavaThread "AWT-EventQueue-0" [_thread_blocked, id=3236, stack(0x4d520000,0x4d570000)] 0x0200a000 JavaThread "AWT-Windows" daemon [_thread_in_native, id=3204, stack(0x4ab70000,0x4abc0000)] 0x02009c00 JavaThread "AWT-Shutdown" [_thread_blocked, id=3200, stack(0x4aa00000,0x4aa50000)] 0x02009400 JavaThread "Java2D Disposer" daemon [_thread_blocked, id=3196, stack(0x4a940000,0x4a990000)] 0x02009000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=3188, stack(0x4a620000,0x4a670000)] 0x02015800 JavaThread "C1 CompilerThread0" daemon [_thread_blocked, id=3184, stack(0x4a590000,0x4a5e0000)] 0x02008800 JavaThread "Attach Listener" daemon [_thread_blocked, id=3180, stack(0x4a500000,0x4a550000)] 0x02008400 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=3176, stack(0x4a470000,0x4a4c0000)] 0x01fdf000 JavaThread "Finalizer" daemon [_thread_blocked, id=3172, stack(0x4a3e0000,0x4a430000)] 0x01fda400 JavaThread "Reference Handler" daemon [_thread_blocked, id=3168, stack(0x4a350000,0x4a3a0000)] Other Threads: 0x01fd5c00 VMThread [stack: 0x4a2c0000,0x4a310000] [id=3164] 0x0203c000 WatcherThread [stack: 0x4a6b0000,0x4a700000] [id=3192] VM state:not at safepoint (normal execution) This error has happened to other people before, but I haven't been able to find an actual answer to the problem. Minecraft + java have been reinstalled in all possible ways, video card drivers(radeon x600) have been changed, updated, everything. I have 4 gigabytes of ddr2 RAM and a pentium d 820, and windows 7 ultimate 64-bit. The answers in the other question like this on this site don't help at all. I say I was fiddling with minecraft server because that's the only thing I did that could have done anything to break minecraft. I was able to play minecraft beforehand. Minecraft for free and in-browser minecraft on minecraft.net don't work either. Minecraft server works perfectly though.
def swap(ary,idx1,idx2): tmp = ary[idx1] ary[idx1] = ary[idx2] ary[idx2] = tmp def mkranks(size): tmp = [] for i in range(1, size + 1): tmp = tmp + [i] return tmp def permutations(ordered, movements): size = len(ordered) for i in range(1, size): # The leftmost one never moves for j in range(0, int(movements[i])): swap(ordered, i-j, i-j-1) return ordered numberofcases = input() for i in range(0, numberofcases): sizeofcase = input() tmp = raw_input() movements = "" for i in range(0, len(tmp)): if i % 2 != 1: movements = movements + tmp[i] ordered = mkranks(sizeofcase) ordered = permutations(ordered, movements) output = "" for i in range(0, sizeofcase - 1): output = output + str(ordered[i]) + " " output = output + str(ordered[sizeofcase - 1]) print output Having made your code a bit more Pythonic (but without altering its flow/algorithm): def swap(ary, idx1, idx2): ary[idx1], ary[idx2] = [ary[i] for i in (idx2, idx1)] def permutations(ordered, movements): size = len(ordered) for i in range(1, len(ordered)): for j in range(movements[i]): swap(ordered, i-j, i-j-1) return ordered numberofcases = input() for i in range(numberofcases): sizeofcase = input() movements = [int(s) for s in raw_input().split()] ordered = [str(i) for i in range(1, sizeofcase+1)] ordered = permutations(ordered, movements) output = " ".join(ordered) print output I see it runs correctly in the sample case given at the SPOJ URL you indicate. What is your failing case?
MEH-TECH Re : [Support] Team Fortress 2 Je me disais aussi que c’était du pipo... Je me connecterai dans la soirée alors pour le récupérer Merci Hors ligne MEH-TECH Re : [Support] Team Fortress 2 C'est bon je l'ai reçu Hors ligne kurapika29 Re : [Support] Team Fortress 2 Bien le bonjour, chez moi depuis le début TF2 est injouable, mais vraiment, il me laisse même pas lancer une partie c'est pour dire J'ai cliqué partout, essayé le training, pareil le son fait un "sale" loop est paf ça fait des chocapic, enfin le jeu se coupe quoi Et j'ai lancé steam par le terminal pour voir et j'ai eu le droit à ça Game update: AppID 520 "Team Fortress 2 Beta", ProcID 1917, IP 0.0.0.0:0 ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. ERROR: ld.so: object 'gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded: ignored. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. (steam:1215): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent. SDL video target is 'x11' SDL video target is 'x11' SDL failed to create GL compatibility profile (whichProfile=0! This system supports the OpenGL extension GL_EXT_framebuffer_object. This system supports the OpenGL extension GL_EXT_framebuffer_blit. This system supports the OpenGL extension GL_EXT_framebuffer_multisample. This system DOES NOT support the OpenGL extension GL_APPLE_fence. This system supports the OpenGL extension GL_NV_fence. This system supports the OpenGL extension GL_ARB_sync. This system supports the OpenGL extension GL_EXT_draw_buffers2. This system supports the OpenGL extension GL_EXT_bindable_uniform. This system DOES NOT support the OpenGL extension GL_APPLE_flush_buffer_range. This system supports the OpenGL extension GL_ARB_map_buffer_range. This system supports the OpenGL extension GL_ARB_vertex_buffer_object. This system supports the OpenGL extension GL_ARB_occlusion_query. This system DOES NOT support the OpenGL extension GL_APPLE_texture_range. This system DOES NOT support the OpenGL extension GL_APPLE_client_storage. This system DOES NOT support the OpenGL extension GL_ARB_uniform_buffer. This system supports the OpenGL extension GL_ARB_vertex_array_bgra. This system supports the OpenGL extension GL_EXT_vertex_array_bgra. This system supports the OpenGL extension GL_ARB_framebuffer_object. This system DOES NOT support the OpenGL extension GL_GREMEDY_string_marker. This system supports the OpenGL extension GL_ARB_debug_output. This system supports the OpenGL extension GL_EXT_direct_state_access. This system DOES NOT support the OpenGL extension GL_NV_bindless_texture. This system DOES NOT support the OpenGL extension GL_AMD_pinned_memory. This system supports the OpenGL extension GL_EXT_framebuffer_multisample_blit_scaled. This system supports the OpenGL extension GL_EXT_texture_sRGB_decode. This system supports the OpenGL extension GL_NVX_gpu_memory_info. This system DOES NOT support the OpenGL extension GL_ATI_meminfo. This system supports the OpenGL extension GL_EXT_texture_compression_s3tc. This system supports the OpenGL extension GLX_EXT_swap_control_tear. GL_NV_bindless_texture: DISABLED GL_AMD_pinned_memory: DISABLED GL_EXT_texture_sRGB_decode: AVAILABLE saving roaming config store to 'sharedconfig.vdf' roaming config store 2 saved successfully Installing breakpad exception handler for appid(gameoverlayui)/version(20130215122411_client) Installing breakpad exception handler for appid(gameoverlayui)/version(1.0_client) Installing breakpad exception handler for appid(gameoverlayui)/version(1.0_client) Installing breakpad exception handler for appid(gameoverlayui)/version(1.0_client) [0216/113418:WARNING:proxy_service.cc(646)] PAC support disabled because there is no system implementation Using breakpad crash handler Setting breakpad minidump AppID = 520 Forcing breakpad minidump interfaces to load Looking up breakpad interfaces from steamclient Calling BreakpadMiniDumpSystemInit Installing breakpad exception handler for appid(520)/version(5140_client) Looking up breakpad interfaces from steamclient Calling BreakpadMiniDumpSystemInit Steam_SetMinidumpSteamID: Caching Steam ID: 76561197974056719 [API loaded yes] Steam_SetMinidumpSteamID: Setting Steam ID: 76561197974056719 ConVarRef m_rawinput doesn't point to an existing ConVar GL_NVX_gpu_memory_info: AVAILABLE GL_ATI_meminfo: UNAVAILABLE GL_NVX_gpu_memory_info: Total Dedicated: 524288, Total Avail: 524288, Current Avail: 432184 GL_MAX_SAMPLES_EXT: 16 (process:1925): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale. Some features may not be available. CShaderDeviceMgrBase::GetRecommendedConfigurationInfo: CPU speed: 2000 MHz, Processor: GenuineIntel GlobalMemoryStatus: 4110417920 CShaderDeviceMgrBase::GetRecommendedConfigurationInfo: CPU speed: 2000 MHz, Processor: GenuineIntel GlobalMemoryStatus: 4110417920 [0216/113420:WARNING:proxy_service.cc(646)] PAC support disabled because there is no system implementation IDirect3DDevice9::Create: BackBufWidth: 1280, BackBufHeight: 800, D3DFMT: 3, BackBufCount: 1, MultisampleType: 0, MultisampleQuality: 0 Installing breakpad exception handler for appid(steam)/version(1360966495_client) Installing breakpad exception handler for appid(steam)/version(1360966495_client) Loaded program cache file "glbaseshaders.cfg", total keyvalues: 266, total successfully linked: 266 Loaded program cache file "glshaders.cfg", total keyvalues: 273, total successfully linked: 273 Precache: Took 2408 ms, Vertex 865, Pixel 1401 Game.so loaded for "Team Fortress" Uploading dump (in-process) [proxy ''] /tmp/dumps/crash_20130216113459_1.dmp success = yes response: CrashID=bp-109788b5-9070-410f-85ae-727c12130216 Steam: An X Error occurred X Error of failed request: BadWindow (invalid Window parameter) Major opcode of failed request: 40 (X_TranslateCoords) Resource id in failed request: 0xaa4d55 Serial number of failed request: 34100 xerror_handler: X failed, continuing Game removed: AppID 520 "Team Fortress 2 Beta", ProcID 1925 saving roaming config store to 'sharedconfig.vdf' roaming config store 2 saved successfully Dernière modification par kurapika29 (Le 23/02/2013, à 12:09) Disponible sur IRC, sur le serveur irc.freenode.net salon ##ubuntu-voyager (et aussi sur plein d'autre serveur/salon) Venez si vous avec besoin d'aide ou pour causer ;) suffit d'avoir Xchat ou un autre client IRC Où sinon en cliquant sur se lien http://kiwiirc.com/client/irc.freenode. … tu-voyager Hors ligne abelthorne Re : [Support] Team Fortress 2 La première chose à faire, ce serait quand même de dire quel type de puce graphique tu as dans ton PC et quel pilote tu utilises. Parce que là, à part le fait qu'il y a un plantage du serveur X, on n'a pas vraiment d'info... Hors ligne kurapika29 Re : [Support] Team Fortress 2 Alors j'ai une GeForce 9600GT et le pilote nvidia-experimental-310 si je dis pas de bêtise Disponible sur IRC, sur le serveur irc.freenode.net salon ##ubuntu-voyager (et aussi sur plein d'autre serveur/salon) Venez si vous avec besoin d'aide ou pour causer ;) suffit d'avoir Xchat ou un autre client IRC Où sinon en cliquant sur se lien http://kiwiirc.com/client/irc.freenode. … tu-voyager Hors ligne murder warrior Re : [Support] Team Fortress 2 j'ai aussi un pb avec tf2,je suis en free to play et je n'ai pas accès a certaine map,(comme les jump ou le orange lolz) alors qu'un ami qui n'a que 9h de jeu peu y aller et moi j'en ai 110.si quelqu'un peut m'aider plz,je ne comprend pas ce que j'ai Hors ligne abelthorne Re : [Support] Team Fortress 2 Dans quel contexte est-ce que tu n'y as pas accès ? Tu as une erreur quand le jeu essaie de s'y connecter ? Tu arrives à te connecter à d'autres maps du même serveur ? Hors ligne murder warrior Re : [Support] Team Fortress 2 c'est le serveur qui ne fonctionne pas,peut importe la map,et ça sur la plupart des serveur,ca me met"missing map.....(nom de la map....)disconnecting Hors ligne murder warrior Re : [Support] Team Fortress 2 mes amis me conseillle de réinitialisé le jeu,mais j'ai pas mal d'arme et de truc que j'aime bien,je sais pas si ca va me les supprimé Hors ligne Gwilh Re : [Support] Team Fortress 2 Bonjour à tous ! J'ai fais une réinstall propre de ma bécane cette semaine (passage de Ubuntu 12.04 à 13.04), installé les pilotes proprio nvidia-313, Steam et Team Fortress 2. Le jeu se lance, j'arrive sur l'écran d'accueil, mais impossible de cliquer sur un item pour lancer le jeu. On dirait que je clique dans le vide. Les flèches du clavier fonctionnent, mais impossible d'appuyer sur Entrer ou Espace. Seule la touche F9 fonctionne (pour quitter le jeu). Quelqu'un a-t-il déjà eu un problème similaire ? J'ai installé Portal également, et lui fonctionne nickel. Merci d'avance ! (PS : de tête, je sais plus le modèle de ma CG nVidia. Je sais qu'elle à 1 giga de mémoire et doit avoir 2 ans) Hors ligne kurapika29 Re : [Support] Team Fortress 2 Team Fortress 2 ou la beta car il y a les deux maintenant ? Disponible sur IRC, sur le serveur irc.freenode.net salon ##ubuntu-voyager (et aussi sur plein d'autre serveur/salon) Venez si vous avec besoin d'aide ou pour causer ;) suffit d'avoir Xchat ou un autre client IRC Où sinon en cliquant sur se lien http://kiwiirc.com/client/irc.freenode. … tu-voyager Hors ligne Gwilh Re : [Support] Team Fortress 2 Ah, je ne connaissais pas cette subtilité ! Il me semble que cela soit la bêtâ... Mon client Steam est lui aussi en bêta. Cela doit peut-être venir de là. Je vais vérifier. Hors ligne abelthorne Re : [Support] Team Fortress 2 Il me semble que cela soit la bêtâ... Mon client Steam est lui aussi en bêta. Cela doit peut-être venir de là. Je vais vérifier. Ce n'est pas lié : que tu actives la bêta pour le client Steam ou non, tu peux installer soit "Team Fortress 2", soit "Team Fortress 2 beta" ; les deux sont présents dans ta bibliothèque. Dernière modification par abelthorne (Le 18/05/2013, à 16:50) Hors ligne Diahovez-Ivan Re : [Support] Team Fortress 2 @abelthorne à écrit: Ce n'est pas lié : que tu actives la bêta pour le client Steam ou non, tu peux installer soit "Team Fortress 2", soit "Team Fortress 2 beta" ; les deux sont présents dans ta bibliothèque. Team Fortress 2 beta plante,donc Team Fortress 2". Hors ligne Gwilh Re : [Support] Team Fortress 2 Et donc voilà, problème résolu. En effet, cela venait de la version bêta de TF2. J'ai maintenant la version stable qui fonctionne parfaitement, c'est même hyper fluide. Merci à tous pour votre aide ! Hors ligne The_Dark_Knight Re : [Support] Team Fortress 2 Salut ! J'ai un soucis avec TF2. Config : AMDX4 980, ATI 5800, 8Go ram. ubuntu 12.04 64 bit. J'ai installé Steam, puis le pilote que me conseillait Steam, puis TF2 (après une bonne journée de téléchargement XD) Au lancement tout ce passe bien, seulement après je ne peut pas cliquer. Je vois pas souris, mais elle est sans action. Shift+Tav m'ouvre bien le panneau steam mais je ne peut rien faire dans TF2 à part un bon vieux ALT+F4 Quelqu'un a une idée ? Merci Cordialement, Hors ligne abelthorne Re : [Support] Team Fortress 2 Tu as bien installé TF2 et pas TF2 bêta ? Hors ligne The_Dark_Knight Re : [Support] Team Fortress 2 Bonjour Abelthorne, Non, tu as raison j'ai installé TF2 bêta. Hors ligne abelthorne Re : [Support] Team Fortress 2 Apparemment d'autres ont eu le même genre de problèmes, résolus par l'installation de la version non bêta. Tu es bon pour désinstaller ta version et repasser une journée de téléchargement. Hors ligne kurapika29 Re : [Support] Team Fortress 2 Avec la version normal tu ne devrais plus avoir de soucis Disponible sur IRC, sur le serveur irc.freenode.net salon ##ubuntu-voyager (et aussi sur plein d'autre serveur/salon) Venez si vous avec besoin d'aide ou pour causer ;) suffit d'avoir Xchat ou un autre client IRC Où sinon en cliquant sur se lien http://kiwiirc.com/client/irc.freenode. … tu-voyager Hors ligne The_Dark_Knight Re : [Support] Team Fortress 2 Merci beaucoup de vos réponses rapide **Je vais pleurer un petit maintenant** Hors ligne Morithil Re : [Support] Team Fortress 2 Bonjour, Je vous expose mon problème: J'ai installé ubuntu 13.04 sur le pc de ma copine en dual doot tout marche bien, j'installe steam puis TF2 le jeu marche sans problème (sauf un un petit: mon clavier passe en qwerty ingame mais pas trop grave ça encore). Sauf que je me lasse de unity et je décide de virer ubuntu 13.04 pour passer sous voyager 13.04 (comme sur mon pc). J'installe les drivers prorpio (310) steam et TF2. Sauf que là impossible de jouer une fois connecter à la partie gros lag. Son et image qui saccade énormement. Je test un autre drivers (310 update) pareil. Finalement je les ai tous testé et rien ne marche. Config : Core 2 duo E7300 2.6ghz 4Go Ram nvidia geforce 9600GT Xubuntu 13.04. Voyager 12.10 sur PC de secours (Asus P5LP-LE, Pentium D 2.8 Ghz*2, 4 Go Ram, Chipset intégré GMA950) Hors ligne abelthorne Re : [Support] Team Fortress 2 Tu serais pas sur un PC avec un système Optimus et le jeu utiliserait la puce graphique Intel au lieu de la nVidia ? Hors ligne Diahovez-Ivan Re : [Support] Team Fortress 2 Video, hide Graphiques intégrés grâce à Intel GMA 950 * La vidéo intégrée n'est pas disponible si une carte graphique est installée. Graphiques intégrés grâce à Intel GMA 950 Jusqu'à 256 Mo (avec 512 Mo ou plus de mémoire PC) Également compatible avec les cartes graphiques PCI Express x16* Oui @abelthorne a raison,il faut voir Optimus, http://doc.ubuntu-fr.org/bumblebee Hors ligne Abriel14 Re : [Support] Team Fortress 2 Bonjour a tous, Moi mon problème est lié a ma carte graphique ,c'est une Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller, ça m'affiche d'abord le logo de valve à l'envers sur le tiers de l'écran puis un écran gris et enfin soit le jeu crash, soit ça m'affiche ça ... pourtant lorsque j'ai lancé la commande /usr/lib/nux/unity_support_test -p tout est marqué en vert , est-ce que quelqu'un aurais une solution? Dernière modification par Abriel14 (Le 02/06/2013, à 14:34) Hors ligne
When I type this into the interpreter, calling 'y' seems to invoke the destructor? class SmartPhone: def __del__(self): print "destroyed" y = SmartPhone() y #prints destroyed, why is that? y #object is still there Here is one run, output does not make sense to me. C:\Users\z4>python Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> class SmartPhone: ... def __del__(self): ... print "destroyed" ... >>> y = SmartPhone() >>> del y destroyed >>> y = SmartPhone() >>> y <__main__.SmartPhone instance at 0x01A7CBC0> >>> y <__main__.SmartPhone instance at 0x01A7CBC0> >>> y <__main__.SmartPhone instance at 0x01A7CBC0> >>> del y >>> y = SmartPhone() >>> y destroyed <__main__.SmartPhone instance at 0x01A7CB98> >>> and another, calling 'del y' sometimes calls the destructor and sometimes doesn't C:\Users\z4>python Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> class SmartPhone: ... def __del__(self): ... print "destroyed" ... >>> >>> y = SmartPhone() >>> >>> y <__main__.SmartPhone instance at 0x01B6CBE8> >>> y <__main__.SmartPhone instance at 0x01B6CBE8> >>> y <__main__.SmartPhone instance at 0x01B6CBE8> >>> del y >>> y Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'y' is not defined >>> y = SmartPhone() >>> y destroyed <__main__.SmartPhone instance at 0x01B6CC38> >>> del y >>> y Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'y' is not defined >>>
In the last article I started a discussion of the Unicode facilities in Python, especially with XML processing in mind. In this article I continue the discussion. I do want to mention that I don't claim these articles to be an exhaustive catalogue of Unicode APIs; I focus on the Unicode APIs I tend to use most in my own XML processing. You should follow up these articles by looking at the further resources I mentioned in the first article. I also want to mention another general principle to keep in mind: if possible, use a Python install compiled to use UCS4 character storage. You can determine when you configure Python before building it whether it stores Unicode characters using (informally) a two-byte or a four-byte encoding, UCS2 or UCS4. UCS2 is the default but you can override this by passing the --enable-unicode=ucs4 flag to configure. UCS4 uses more space to store characters, but there are some problems for XML processing in UCS2, which the Python core team is reluctant to address because the only known fixes would be too much of a burden on performance. Luckily, most distributors have heeded this advice and ship UCS4 builds of Python. In the last article I showed how to manage conversions from strings to Unicode objects. In dealing with XML APIs you often deal with file-like objects (stream objects) as well. Most file systems and stream representations are byte-oriented rather that character-oriented, which means that Unicode must be encoded for file output and that file input must be decoded for interpretation as Unicode. Python provides facilities for wrapping stream objects so that such conversions are largely transparent. Consider the codecs.open function. import codecs f = codecs.open('utf8file.txt', 'w', 'utf-8') f.write(u'abc\u2026') f.close() The first two arguments to codecs.open are just like the arguments to the built-in function open. The third argument is the encoding name. The return value is the open file pointer. You then use the write method, passing in Unicode objects, which are encoded as specified and written to the file. I can't possibly reiterate the distinction between bytes and characters enough. Look closely at what is written to the file in the snippet above. >>> len(u'abc\u2026') 4 >>> There are four characters: three lowercase letters and the horizontal ellipsis symbol. Examine the resulting file. I use hexdump on Linux. There are many similar utilities on all operating systems. $ hexdump -c utf8file.txt 0000000 a b c 342 200 246 0000006 This means that there are six bytes in the file. The first three are as you would expect, and the second three are all used to encode a single Unicode character in UTF-8 form (the bytes are given in octal form above; in hex form they are e2 80 a6). If you were to read this file with a tool that was not aware that this is a UTF-8 encoded file, it might misinterpret the contents, which is a hard problem overall in dealing with encoded files. (See Rick Jelliffe's article, referenced in the sidebar, for more discussion of this issue.) Some encodings have additional details you have to keep in mind. The following code creates a file with the same characters, but encoded in UTF-16. import codecs f = codecs.open('utf16file.txt', 'w', 'utf-16') f.write(u'abc\u2026') f.close() Examine the contents of the resulting file. If you're using hexdump, this time it's actually more useful to use a different (hexadecimally-based) output formatting option. $ hexdump -C utf16file.txt 00000000 ff fe 61 00 62 00 63 00 26 20 |..a.b.c.& | 0000000a There are 10 bytes in this case. In UTF-16 most characters are encoded in two bytes each. The four Unicode characters are encoded into eight bytes, which are the last eight in the file. This leaves the first two bytes unaccounted for. Unicode has a means of flagging encodings in order to specify the order in which the characters should be read from bytes. These flags come in the form of the encoding of a special character code point called "byte order marks" (BOMs). This is necessary in part because different machines use different means of ordering "words" (pairs of consecutive bytes starting at even machine addresses) and "double words" (pairs of consecutive words starting at machine addresses divisible by four). The difference in word order is all that is relevant in the case of UTF-16. If you were to place the latter eight characters from the above example in a file and send it from a machine with one byte ordering to a machine with another type of ordering, programming tools (including Python code) would read the characters backwards, scrambling the contents. Unicode uses BOMs to mark byte order so that machines with different ordering will be able to figure out the right way to read characters. The BOM for UTF-16 comprises the bytes ff and fe, which completes the puzzle of the contents of the file generated in the above example. The relative position of the ff byte signals the least significant position and fe signals the most significant. You can see how this works when looking at the next word 61 00. By following the BOM you can tell that 61 is least significant and 00 is most significant. This happens to be what is called little-endian byte order (which is usual for Intel machines). Many other machines, including Motorola microprocessors, use big-endian byte order, and the order would be reversed in the BOM, as well as in all the other characters. Unicode tools know how to look for and interpret the BOM in files, and the above file contents should be properly interpreted by any UTF-16-aware tool, even in a language other than Python. Deciding which encoding to choose is a very complex issue, although I recomend that you stick to UTF-8 and UTF-16 for uses associated with XML processing. One consideration that might help you choose between these two encodings is that UTF-8 tends to use fewer bytes when encoding text heavy on European and Middle-Eastern characters, and some Asian scripts. UTF-16 tends to use fewer bytes when encoding text heavy in Chinese, Japanese, Korean, Vietnamese (the "CJKV" languages) and the like. You can use codecs.open again for reading the files created above: import codecs f = codecs.open('utf8file.txt', 'r', 'utf-8') u1 = f.read() f.close() f = codecs.open('utf16file.txt', 'r', 'utf-16') u2 = f.read() f.close() assert u1 == u2 Again Python takes care of all the BOM details transparently. codecs.open does the trick for wrapping files, but not other types of stream objects (such as sockets or StringIO string buffers). You can handle these using wrappers you obtain using the codecs.lookup function. In the last article I showed how to use this function to get encoding and decoding routines (the first two items in the returned tuple). import codecs import cStringIO enc, dec, reader, writer = codecs.lookup('utf-8') buffer = cStringIO.StringIO() #Wrap the buffer for automatic encoding wbuffer = writer(buffer) content = u'abc\u2026' wbuffer.write(content) bytes = buffer.getvalue() #Create the buffer afresh, with the bytes written out buffer = cStringIO.StringIO(bytes) #Wrap the buffer for automatic decoding rbuffer = reader(buffer) content = rbuffer.read() print repr(content) In this example I've completed a round trip from a Unicode object to an encoded byte string, which was built using a StringIO object, and back to a Unicode object read in from the byte string. If you need to use one of these functions from codecs.lookup, and don't want to bother with the other three, you can get them directly using the functions codecs.getencoder, codecs.getdeccoder codecs.getreader, and codecs.getwriter. If you need to deal with stream objects you can read and write to without having to close and reopen it (in a database storage scenario, for example), you'll want to look into the class codecs.StreamReaderWriter, which wraps separate codec reader and writer objects to provide a combination object. XML and Python have different means of representing characters according to their Unicode code points. You have seen the horizontal ellipsis character above in Python Unicode form \u2026 where the "2026" is the character ordinal in hexadecimal. This is a 16-bit Python Unicode character escape. You can also use a 32-bit escape, marked by a capital "U", \U00002026. In XML you either use a decimal character escape format, &#8230;, where "8230" is just hex "2026" in decimal, or you can use hex directly: &#x2026;. Notice the added "x". In XML you would use these escapes when you are using an encoding that does not allow you to enter a character literally. As an example, XML allows you to include an ellipsis character even in a document that is encoded in plain ASCII, as illustrated in the example in listing 1. Since there is no way to express the character with code point 2026 in ASCII, I use a character escape. A conforming XML application must be able to handle this document, reporting the right Unicode for the escaped character (and this is another good test for conformance of your tools). Listing 1: XML file in ASCII encoding that uses a high character <?xml version='1.0' encoding='us-ascii'?> <doc>abc&#x2026;</doc> Python can take care of such escaping for you. If you want to write out XML text, and you're using an encoding—ASCII, ISO-8859-1, EUC-JP, cp1252—that may not include all valid XML characters, you can use a special ability of Python codecs to specify special actions on encoding errors. >>> import codecs >>> enc = codecs.getencoder('us-ascii') >>> print enc(u'abc\u2026')[0] Traceback (most recent call last): File "<stdin>", line 1, in ? UnicodeEncodeError: 'ascii' codec can't encode character u'\u2026' in position 3: ordinal not in range(128) You can avoid this error by specifying 'xmlcharrefreplace' as the error handler. >>> print enc(u'abc\u2026', 'xmlcharrefreplace')[0] abc&#8230; More Unicode Resources for There are other available error handlers, but they are not as interesting for XML processing. Let me reiterate that for each of the areas of interest I've covered in Python's Unicode support, there are additional nuances and possibilities that you might find useful. I've generally restricted the discussion to techniques that I have found useful when processing XML, and you should read and explore further in order to uncover even more Unicode secrets. Let me also say that even though some of the techniques I've gone over will enable you to generate correct XML, there is more to well-formedness than just getting the Unicode character model right. For example, there are some Unicode characters that are not allowed in XML documents, even in escaped form. I still recommend that you use one of the many tools I've discussed in this column for generating XML output. It's quiet time again in the Python-XML community. I did present some code snippets for reading a directory subtree and generating an XML representation, see "XML recursive directory listing, part 2", as well as some Python/Amara equivalents of XQuery and XSLT 2.0 code. There has also been a lot of buzz about Google Sitemaps (currently in beta). Web site owners can create an XML representation of their site, including indicators of updated content. The Google crawlers then use this information to improve coverage of the indexed Web sites. The relevance to this column is that Google has developed sitemap_gen.py, a Python script that "analyzes your web server and generates one or more Sitemap files. These files are XML listings of content you make available on your web server. The files can then be directly submitted to Google." The code uses plain byte string buffer write operations to generate XML. I don't recommend this practice in general, but it seems that the subset of data the Google script includes in the XML file (URLs and last modified dates) is safely in the ASCII subset. (Although as IRIs become more prevalent, this assumption may prove fragile.) It also uses xml.sax and minidom to read XML (mostly the config files in the former case and examples for testing in the latter). XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
You can fix this by finding out what exactly causes the "pygame parachute". Your first step to find this out is to add detailed logging to your code. Start off with poor man's print() logging: print("importing pygame") import pygame print("initialising pygame") pygame.init() ... With that sort of code you might get an error message like initialising pygame Fatal Python error: (pygame parachute) Segmentation Fault Aborted which helps you to isolate where the problem happened. Second, cx_freeze is rather tricky. cxfreeze game.py might not be enough to get a working executable. It is a far better practice to set up a setup.py file. See the source code of the Fabula setup file as an example: http://code.ohloh.net/file?fid=pqI6bOZhHyHchm4NLxcwxsSxVdA&cid=S_RYKZr4PdY
Here's a list of non-LISP languages that allow for macros (including those mentioned in other answers) -- the links are to an explanations of the respective macro systems: Languages that have features that are kind of like macros, or which accomplish more or less the same thing in different ways (namely Smalltalk and Io)): F# Smalltalk Io (Also homoiconic) D We should note that being natural-language-like and allowing for clear, easily intelligible expression do not always go hand in hand (especially since the latter quality is heavily dependent upon how one trains their ways of thinking and reading). In your python example, 0 < a < 5 does not accord with natural English. In English, we would have to say 0 is less than a, which is less than five or maybe a is greater than 0 and less than five: i.e., we either need a relative clause that uses 'which' to indicate that a is still the subject of our second qualification, or, if we want to use a single main clause, we need to complicate the expression by using the two different predicates 'greater than' and 'less than'. 0 < a < 5 is especially readable because it gives a clear, uncluttered, concise expression of the relation-concept, not because it is accords with natural language. I'd like to present the Macro system in Prolog, just because I think the language is great for forming clear and illuminating articulations of problems, and I like it. I'll first show how we might write your example in idiomatic Prolog, then I'll show how we can write some rules to get Pythonish syntax for the same expression using the macro system. Here's a flushed out python line elaborating on your example: if 0 < a < 5 and b in list: print ("that is true!") else: print ("that is not true!") In Prolog, we might write the conditional thus[1]: ( 0 < A, A < 5, member(B, List) % If 0 < A and A < 5 and B is member of List -> write('that is true!') % then write ... ; write('that is not true!') % else write ... ) Prefix notation is common for Prolog predicates[2], but the way it gets used is mostly like the prefix notation in predicate logic. If you've learned the latter, Prolog is very clear, but maybe not if not. My understanding is that macro expansions are easy to implement in LISPs because they are homoiconic languages: i.e., in LISP, code is data (a language can, obviously, implement macros without being homoiconic, it's just not as straightforward). Prolog is also homoiconic. Prolog accomplishes macro expansions by preprosseing code and pattern matching all the terms of a program with the predicateterm_expansion/2, and it's derivative, goal_expansion/2. The following bit of code uses operator declarations, op/3 and goal_expansion/2 to get syntax like your Python example. :- op(200, yfx, user:(<.)). :- op(200, xfx, user:(in)). :- op(1000, xfx, user:(and)). goal_expansion(A and B, (A, B)). % Read: if term matches A and B, replace with (A,B) goal_expansion(L <. M <. H, (L < M, M < H)). % `.` added to avoid conflict with the builtin `<` goal_expansion(E in List, member(E, List)). example(A, B, List) :- 0 <. A <. 5 and B in List -> write('This is true'). In action: ?- example(4, b, [a,b,c,d,e]). This is true Footnotes 1: I don't know whether this pattern would ever be useful. I'd probably just write a predicate to take care of the condition: foo(A, B, List) :- % foo(A, B, List) is true if... between(0, 5, A), member(A, List). to be used thus: bar(Var) :- % Var is bar if... <some conditions>, % conditions are true and foo(A, B, List), % foo(A, B, List) is true and ... . % whatever else is true. 2: In Prolog, most of the work is done with predictes Prolog predicates are the (very) rough equivalent of functions in other languages (with the critical proviso that *they are not functions and do not evaluate to their return value). The definition of a function in Python, with the formdef (): corresponds to the definition of a predicate in Prolog, with the form() :- . Predicates are referred to withpredicate_name/arity`.
In PIL (and mostly all softwares/librairies that use libjpeg) the quality setting is use to construct the quantization table (ref.). In libjpeg the quality number "scale" the sample table values (from the JPEG specification Section K.1). In other librairies there's different tables assign to different qualities (ex.: Photoshop, digital camera). So, in others terms, the quality equal to the quantization table, so it's more complex then just a number. If you want to save your modify images with the same "quality", you only need to use the same quantization table. Fortunately, the quantization table is embeded in each JPEG. Unfortunately, it's not possible to specify a quantization table when saving in PIL. cjpeg, a command line utilities that come with libjpeg, can do that. Here's some rough code that save a jpeg with a specified quantization table: from subprocess import Popen, PIPE from PIL import Image, ImageFilter proc = Popen('%s -sample 1x1 -optimize -progressive -qtables %s -outfile %s' % ('path/to/cjpeg', '/path/ta/qtable', 'out.jpg'), shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) P = '6' if im.mode == 'L': P = '5' stdout, stderr = proc.communicate('P%s\n%s %s\n255\n%s' % (P, im.size[0], im.size[1], im.tostring())) You will need to find the way to extract the quantization table from the orginal jpeg. djpeg can do that (part of libjpeg): djpeg -verbose -verbose image.jpg > /dev/null You will need also to find and set the sampling. For more info on that check here. You can also look at test_subsampling UPDATE I did a PIL fork to add the possibility to specify subsampling or quantization tables or both when saving JPEG. You can also specify quality='keep' when saving and the image will be saved with the same quantization tables and subsampling as the original (original need to be a JPEG). There's also some presets (based on Photoshop) that you can pass to quality when saving. My fork. UPDATE 2 My code is now part of Pillow 2.0. So just do: pip install Pillow
Let's try another test. Let's this time try having NISTNet emulate a circuit with 25% datagram loss. To do this we edit our rule, entering 0 into the delay field, entering 25 into the "Drop %" field and again hitting "Update". When we run our ping test it now looks like: ping -c 10 -i 2 -s 1460 192.168.2.1 PING 192.168.2.1 (192.168.2.1): 1460 data bytes 1468 bytes from 192.168.2.1: icmp_seq=0 ttl=254 time=1014.3 ms 1468 bytes from 192.168.2.1: icmp_seq=2 ttl=254 time=1011.1 ms 1468 bytes from 192.168.2.1: icmp_seq=3 ttl=254 time=1004.4 ms 1468 bytes from 192.168.2.1: icmp_seq=4 ttl=254 time=1027.4 ms 1468 bytes from 192.168.2.1: icmp_seq=6 ttl=254 time=1033.9 ms 1468 bytes from 192.168.2.1: icmp_seq=7 ttl=254 time=1056.5 ms 1468 bytes from 192.168.2.1: icmp_seq=9 ttl=254 time=1043.5 ms --- 192.168.2.1 ping statistics --- 10 packets transmitted, 7 packets received, 30% packet loss round-trip min/avg/max = 1004.4/1027.3/1056.5 ms Ten datagrams is too short a test to expect very accurate results for this sort of test, but it's clear that again NISTNet has done what we've asked of it. Finally, let's change our test such that rather than dropping 25% of datagrams, we duplicate them. To do this we zero the drop field, and enter 25 into the "Dup %" field instead, hitting the "Update" button once more to activate the change. Our ping test now looks like: ping -c 10 -i 2 -s 1460 192.168.2.1 PING 192.168.2.1 (192.168.2.1): 1460 data bytes 1468 bytes from 192.168.2.1: icmp_seq=0 ttl=254 time=1097.0 ms 1468 bytes from 192.168.2.1: icmp_seq=0 ttl=254 time=1698.0 ms (DUP!) 1468 bytes from 192.168.2.1: icmp_seq=0 ttl=254 time=1893.2 ms (DUP!) 1468 bytes from 192.168.2.1: icmp_seq=1 ttl=254 time=1013.8 ms 1468 bytes from 192.168.2.1: icmp_seq=2 ttl=254 time=1137.5 ms 1468 bytes from 192.168.2.1: icmp_seq=3 ttl=254 time=1080.7 ms 1468 bytes from 192.168.2.1: icmp_seq=4 ttl=254 time=993.6 ms 1468 bytes from 192.168.2.1: icmp_seq=5 ttl=254 time=993.7 ms 1468 bytes from 192.168.2.1: icmp_seq=6 ttl=254 time=1219.9 ms 1468 bytes from 192.168.2.1: icmp_seq=6 ttl=254 time=1770.3 ms (DUP!) 1468 bytes from 192.168.2.1: icmp_seq=6 ttl=254 time=1828.0 ms (DUP!) 1468 bytes from 192.168.2.1: icmp_seq=7 ttl=254 time=1266.5 ms 1468 bytes from 192.168.2.1: icmp_seq=7 ttl=254 time=1514.3 ms (DUP!) 1468 bytes from 192.168.2.1: icmp_seq=8 ttl=254 time=1041.2 ms 1468 bytes from 192.168.2.1: icmp_seq=9 ttl=254 time=1065.5 ms --- 192.168.2.1 ping statistics --- 10 packets transmitted, 10 packets received, +5 duplicates, 0% packet loss round-trip min/avg/max = 993.6/1307.5/1893.2 ms Magic stuff! You can design tests that combine all of these together, and of course we've only looked at the simplest of capabilities. NISTNet has a whole suite of much more sophisticated things it can do with datagrams. One final thing to look at are the statistics that are collected in real time. Start a ping test and scroll the right-hand panel across to the right-most fields. You will see running tallies of packet sizes, bytes transmitted, average bandwidth, and others. These are useful for keeping an eye on progress of the tests. If you're in a situation where you need to quickly test how a network application or protocol will perform under realistically poor conditions, then NISTNet provides an excellent solution. Mark has gone to a lot of work to ensure that the emulation is as accurate and useful as possible, and I'm sure you'll agree that it truly is one of those tools that you know will come in handy one day; keep it around. Terry Dawson is the author of a number of network-related HOWTO documents for the Linux Documentation Project, a co-author of the 2nd edition of O'Reilly's Linux Network Administrators Guide, and is an active participant in a number of other Linux projects. Read more Linux Network Administration columns. Discuss this article in the O'Reilly Network Linux Forum. Return to the Linux DevCenter.
The general problem I am trying ot debug is why a C program can call a function with no problem, but when the same function is called in Python it causes a segfault. I'm trying to get the Python OpenCV bindings to access an AVT GigE GC1360H Camera using modules/highgui/src/cap_pvapi.cpp. I can read and display frames from the camera flawlessly in C, but when I attempt to read a frame in Python the interpreter segfaults. Calling VideoCapture.open(0) in Python successfully initializes the camera to Mono8 mode, and watching ifconfig shows data coming in from the camera. I can run this same Python code using my v4l webcam instead of the AVT camera and it works fine. I am using OpenCV 2.3.1 on Gentoo Linux 3.2.12 x64. Here is the Python code I am using, running on Python 2.7.2: import cv2 if __name__ == '__main__': cv2.namedWindow("Cam", 1) capture = cv2.VideoCapture() capture.open(0) # This successfully opens the Camera and ifconfig shows data being transferred while True: img = capture.read()[1] # This is where it segfaults cv2.imshow("Cam", img) if cv2.waitKey(10) == 27: break cv2.destroyWindow("Cam") I attached gdb to the Python interpreter and figured out that it segfaults on the call to PvCaptureQueueFrame() inside CvCaptureCAM_PvAPI::grabFrame() in the OpenCV source. Here is the output from that: alex@Wassenberg ~ $ gdb python GNU gdb (Gentoo 7.3.1 p2) 7.3.1 Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-pc-linux-gnu". For bug reporting instructions, please see: <http://bugs.gentoo.org/>... Reading symbols from /usr/bin/python...(no debugging symbols found)...done. (gdb) run ./pyview.py Starting program: /usr/bin/python ./pyview.py process 8921 is executing new program: /usr/bin/python2.7 [Thread debugging using libthread_db enabled] [New Thread 0x7fffe3c10700 (LWP 8929)] [New Thread 0x7fffe340f700 (LWP 8930)] [New Thread 0x7fffe2c0e700 (LWP 8931)] [New Thread 0x7fffe240d700 (LWP 8932)] [Thread 0x7fffe240d700 (LWP 8932) exited] [New Thread 0x7fffe240d700 (LWP 8933)] [Thread 0x7fffe240d700 (LWP 8933) exited] [New Thread 0x7fffe240d700 (LWP 8940)] [Thread 0x7fffe240d700 (LWP 8940) exited] [New Thread 0x7fffe240d700 (LWP 8941)] [New Thread 0x7fffe1c0c700 (LWP 8942)] [New Thread 0x7fffe0eb9700 (LWP 8948)] [New Thread 0x7fffdbfff700 (LWP 8949)] Program received signal SIGSEGV, Segmentation fault. 0x00007ffff007c8ea in PvCaptureQueueFrame () from /usr/local/lib/libPvAPI.so (gdb) bt #0 0x00007ffff007c8ea in PvCaptureQueueFrame () from /usr/local/lib/libPvAPI.so #1 0x00007ffff55dd123 in CvCaptureCAM_PvAPI::grabFrame() () from /usr/lib64/libopencv_highgui.so.2.3 #2 0x00007ffff55dde31 in cvGrabFrame () from /usr/lib64/libopencv_highgui.so.2.3 #3 0x00007ffff55dde4d in cv::VideoCapture::grab() () from /usr/lib64/libopencv_highgui.so.2.3 #4 0x00007ffff55ddaf2 in cv::VideoCapture::read(cv::Mat&) () from /usr/lib64/libopencv_highgui.so.2.3 #5 0x00007ffff6b355c5 in ?? () from /usr/lib64/python2.7/site-packages/cv2.so #6 0x00007ffff7afdfdc in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #7 0x00007ffff7aff88d in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0 #8 0x00007ffff7aff9a2 in PyEval_EvalCode () from /usr/lib64/libpython2.7.so.1.0 #9 0x00007ffff7b19afc in ?? () from /usr/lib64/libpython2.7.so.1.0 #10 0x00007ffff7b1a930 in PyRun_FileExFlags () from /usr/lib64/libpython2.7.so.1.0 #11 0x00007ffff7b1b50f in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.7.so.1.0 #12 0x00007ffff7b2c823 in Py_Main () from /usr/lib64/libpython2.7.so.1.0 #13 0x00007ffff74902ad in __libc_start_main () from /lib64/libc.so.6 #14 0x00000000004008a9 in _start () (gdb) Any insight into this problem would be appreciated. Ultimately what this question boils down to is why C can call grabFrame() with no problem but Python segfaults. I am inclined to think the problem is the way that the Python bindings to C are generated, but I am not familiar with how OpenCV does this. Any ideas why grabFrame() and PvCaptureQueueFrame() would work fine in C but not in Python? For reference here is the C program that could successfully read the AVT camera: #include <opencv2/imgproc/imgproc_c.h> #include "opencv2/highgui/highgui.hpp" #include <stdio.h> int main(int argc, char** argv) { printf("Press ESC to exit\n"); cvNamedWindow( "First Example of PVAPI Integrated", CV_WINDOW_AUTOSIZE ); CvCapture* capture = cvCreateCameraCapture( CV_CAP_PVAPI ); assert( capture != NULL ); IplImage* frame; while(1) { frame = cvQueryFrame(capture); if(!frame) break; cvShowImage( "First Example of PVAPI Integrated", frame); char c = cvWaitKey(50); if( c == 27) break; } cvReleaseCapture( &capture ); cvDestroyWindow( "First Example of PVAPI Integrated" ); } Compiled with gcc 4.5.3-r2: gcc -I/usr/include/opencv -o main ./main.c -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
Author Rekrul Submission date 2011-07-07 16:42:47.900396 Rating 6505 Matches played 4267 Win rate 65.29 Use rpsrunner.py to play unranked matches on your computer. import random SIZE = 11 WEIGHT_FACTOR = 4.8 class HistoryNode(object): def __init__(self, parent=None): if parent is not None: self.depth = parent.depth + 1 else: self.depth = 0 self.children = {'RR': None, 'RS': None, 'RP': None, 'SR': None, 'SS': None, 'SP': None, 'PR': None, 'PS': None, 'PP': None} self.distribution = {'RR': 0, 'RS': 0, 'RP': 0, 'SR': 0, 'SS': 0, 'SP': 0, 'PR': 0, 'PS': 0, 'PP': 0} def new_move(self, input): #analyse last move last_move = input[0:2] if len(input) > 2: if self.children[last_move] is None: self.children[last_move] = HistoryNode(self) self.children[last_move].new_move(input[2:]) else: self.distribution[last_move] += 1 def predict(self, input): if len(input) > 0: last_move = input[0:2] if self.children[last_move] is not None: return self.children[last_move].predict(input[2:]) else: return None else: return self.distribution class HistoryTree(object): def __init__(self): self.root = HistoryNode() self.input = '' def new_move(self, move): self.input += move if len(self.input) > SIZE * 2: self.input = self.input[-SIZE * 2:] for i in xrange(2,len(self.input)+1,2): self.root.new_move(self.input[-i:]) def predict(self): results = {'R':0, 'S':0, 'P':0} for i in xrange(2, len(self.input)+1, 2): res = self.root.predict(self.input[-i:]) #print res if res is not None: for key in res: results[key[1]] += res[key] * (WEIGHT_FACTOR**i) d = results e = d.keys() e.sort(cmp=lambda a,b: cmp(d[a],d[b])) return e[-1] if input == '': history_tree = HistoryTree() output = random.choice(["R","P","S"]) prediction = output pred = {'R':'P','S':'R','P':'S'} state = [0,0] counter = 0 win = ['RS', 'SP', 'PR'] lost = ['SR', 'PS', 'RP'] else: move = output + input history_tree.new_move(move) counter += 1 if prediction+input in win: state[0] += 1 elif prediction+input in lost: state[1] += 1 prediction = history_tree.predict() #very simple switching / adapting strategie if counter < 25 or random.random() < 0.5 or state[0] < state[1]*0.95: output = random.choice(["R","P","S"]) else: output = pred[prediction]
Kubke Lab:Research/ABR/Notebook/2013/10/29 From OpenWetWare (→Fabiana: Tested signal filters) (→Feedback from Andy) (5 intermediate revisions not shown.) Line 6: Line 6: | colspan="2"| | colspan="2"| <!-- ##### DO NOT edit above this line unless you know what you are doing. ##### --> <!-- ##### DO NOT edit above this line unless you know what you are doing. ##### --> + + __TOC__ + =General Entries= =General Entries= - * + * Played around with filter() and butter() in library.signal [[User:M Fabiana Kubke|MF Kubke]] 20:11, 28 October 2013 (EDT) + + == Feedback from Andy == + + * Filter: Higher order is sharper cutoff. A sharp butterworth would be a 5th or 6th order. I wouldn't use any higher. + + * Try an fft to see where the low pass cutoff should be. If the filter order is too high you get ringing. [ see http://t.co/QfrGaDjvj2 ] + + * entered wrong sampling rate - sampling rate is actually 50 kHz - need to recalculate the filters based on that. They are off by a factor of 10, so what says lowpassed at 100 is actually at 1k. + =Personal Entries= =Personal Entries= ==Fabiana== ==Fabiana== + * File 2013-10-29-MFK.Rmd <!-- saved from url=(0014)about:internet --> <!-- saved from url=(0014)about:internet --> Current revision Hearing development in barn owls Main project page Previous entry Next entry General Entries Feedback from Andy Personal Entries Fabiana Trying to apply a low pass filter to ABR data Using package signal install.packages("signal") library(signal) Read test file dir() ## [1] "2013-10-27-MFK.Rmd" "2013-10-27.html" "2013-10-27.R" ## [4] "2013-10-27.txt" "2013-10-28-MFK.html" "2013-10-28-MFK.md" ## [7] "2013-10-28-MFK.Rmd" "2013-10-29-MFK.Rmd" "Copy189L0A.ABR" ## [10] "Copy224l0a.txt" "Copy233L0B.ABR.log" "Copy233L0B.ABR.txt" ## [13] "figure" "knitr-testr.html" "knitr-testr.md" ## [16] "knitr-testr.Rmd" "kntR2.html" "kntR2.md" ## [19] "kntR2.Rmd" "my first knitr.html" "my first knitr.md" ## [22] "my first knitr.Rmd" "mydata_new" "mydata_new.txt" ## [25] "readrawabr.R" "Sandbox1.R" "Sandbox1.txt" ## [28] "signal.pdf" "texput.log" mydata <- read.table("Copy233L0B.ABR.txt", header = T) head(mydata) #passed test write.table(mydata, "mydata_new.txt") mydata_new <- read.table("mydata_new", header = T) head(mydata_new) #passed test mydata_new$bincomp[1:1000] <- (mydata_new$bin[1:1000] - (mydata_new$left[1:1000] + mydata_new$right[1:1000])) head(mydata_new) write.table(mydata_new, "mydata_new.txt") #passed test head(mydata_new) Test plotting for comparison plot(mydata_new[c(1, 2)], type = "l", main = "original left") Now - try to apply a low pass filterusing butter() to determine the properties of the filter. bf <- butter(3, 0.1) test1 <- filter(bf, mydata_new$left) par(mfrow = c(1, 2)) plot(mydata_new[c(1, 2)], type = "l", main = "original left") plot(test1, main = "butter(3, 0.1)") Test type = “low” vs type = “high” bf <- butter(3, 0.1, type = "low") test1 <- filter(bf, mydata_new$left) par(mfrow = c(1, 2)) plot(test1, main = "butter type low") bf <- butter(3, 0.1, type = "high") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter type high") (guess default is low then) Try changing filter properties - change filter order (first argument in function) par(mfrow = c(2, 2)) plot(mydata_new[c(1, 2)], type = "l", main = "original left") bf <- butter(1, 0.1, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(1, 0.1)") bf <- butter(3, 0.1, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(3, 0.1)") bf <- butter(15, 0.1, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(30, 0.1)") Not really sure what the order is doing - ???? Stick with filter order == 3, change crirical frequency W. W-> [0-1] where 1 is the Nyquist frequency (½ of the sampling rate ) head(mydata_new) sampling rate => every 0.2 msec () Looks like sampling rate at 5 kHz => Nyquist = 2.5 KHz par(mfrow = c(2, 2)) plot(mydata_new[c(1, 2)], type = "l", main = "original left") bf <- butter(3, 0.01, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(3, 0.01 (25 Hz)") bf <- butter(3, 0.1, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(3, 0.1 (250 Hz))") bf <- butter(3, 1, type = "low") test1 <- filter(bf, mydata_new$left) plot(test1, main = "butter(30, 1 (2.5KHz))") Try the first derivative on the 250 LP bf <- butter(3, 0.1, type = "low") mydata_new$leftlow <- filter(bf, mydata_new$left) par(mfrow = c(1, 3)) plot(mydata_new$msec, mydata_new$left, type = "l", main = "original left") plot(mydata_new$msec, mydata_new$leftlow, type = "l", main = "lowpassed left") mydata_new$diffleft2[2:1000] <- diff(mydata_new$leftlow)/diff(mydata_new$msec) plot(mydata_new$msec, mydata_new$diffleft2, ylim = c(-3000, 3000), type = "l", main = "first diff on lowpassed left") First deriv stil looks noisy - try different filters bf <- butter(3, 0.04, type = "low") mydata_new$leftlow <- filter(bf, mydata_new$left) par(mfrow = c(1, 3)) plot(mydata_new$msec, mydata_new$left, type = "l", main = "original left") plot(mydata_new$msec, mydata_new$leftlow, type = "l", main = "lowpassed 100 left") mydata_new$diffleft2[2:1000] <- diff(mydata_new$leftlow)/diff(mydata_new$msec) plot(mydata_new$msec, mydata_new$diffleft2, ylim = c(-3000, 3000), type = "l", main = "first diff on lowpassed left") par(mfrow = c(1, 1)) plot(mydata_new$msec, mydata_new$diffleft2, ylim = c(-3000, 3000), type = "l", main = "first diff on lowpassed left") Mmmmmmm…. Andy Oris
This question already has an answer here: print difflist for line in difflist: if ((line.startswith('<'))or (line.startswith('>')) or (line.startswith('---'))): difflist.remove(line) print difflist Here, initially, difflist = ['1a2', '> ', '3c4,5', '< staring', '---', '> starring', '> ', '5c7', '< at ', '---', '> add ', ''] And what i expect of the code is to print ['1a2', '3c4,5', '5c7', ''] But what i get instead is difflist= ['1a2', '3c4,5', '---', '> ', '5c7', '---', '']
See Understanding Techniques for WCAG Success Criteria for important information about the usage of these informative techniques and how they relate to the normative WCAG 2.0 success criteria. The Applicability section explains the scope of the technique, and the presence of techniques for a specific technology does not imply that the technology can be used in all situations to create content that meets WCAG 2.0. Adobe Flash Professional version MX and higher Adobe Flex This technique relates to: The objective of this technique is to show how non-text objects in Flash can be marked so that they can be read by assistive technology. The Flash Player supports text alternatives to non-text objects using the name property in the accessibility object, which can be defined in ActionScript or within Flash authoring tools. When an object contains words that are important to understanding the content, the name property should include those words. This will allow the name property to play the same function on the page as the object. Note that it does not necessarily describe the visual characteristics of the object itself but must convey the same meaning as the object. The Flash Professional authoring tool's Accessibility panel lets authors provide accessibility information to assistive technology and set accessibility options for individual Flash objects or entire Flash applications. For a text alternative to be applied to a non-text object, it must be saved as a symbol in the movie's library. Note: Flash does not support text alternatives for graphic symbols. Instead, the graphic must be converted to or stored in a movie clip or button symbol. Bring up the Accessibility panel by selecting "Window > Other Panels > Accessibility" in the application menu, or through the shortcut ALT + F11. Ensure that the 'Make object accessible' checkbox is checked. Select the non-text instance on the movie stage, the fields in the Accessibility panel become editable. Enter a meaningful text alternative in the 'name' field, properly describing the purpose of the symbol. To manage an object's text equivalent programmatically using ActionScript 2, the _accProps object must be used. This references an object containing accessibility related properties set for the object. The code example below shows a simple example of how the _accProps object is used to set an objects name in ActionScript. Example Code: // 'print_btn' is an instance placed on the movie's main timeline _root.print_btn._accProps = new Object(); _root.print_btn._accProps.name = "Print"; To manage an object's text equivalents programmatically using ActionScript 3, the AccessibilityProperties object and name property must be used. The code example below shows a simple example of how the name property is used to set an objects name in ActionScript. Example Code: // 'print_btn' is an instance placed on the movie's main timeline print_btn.accessibilityProperties = new AccessibilityProperties(); print_btn.accessibilityProperties.name = "Print"; Publish the SWF file Open the SWF file in Internet Explorer 6 or higher (using Flash Player 6 or higher), or Firefox 3 or higher (using Flash Player 9 or higher) Use a tool which is capable of showing an object's name text alternative, such as ACTF aDesigner 1.0 to open the Flash movie. In the GUI summary panel, loop over each object which is contained by the Flash movie and ensure the object that was provided a name has a proper name attribute appearing in the tool's display. Authors may also test with a screen reader, by reading the Flash content and listening to hear that the equivalent text is read when tabbing to the non-text object (if it is tabbable) or hearing the alternative text read when reading the content line-by-line. All non-text objects have text equivalents that can serve the same purpose and convey the same information as the non-text object Check #6 is true. If this is a sufficient technique for a success criterion, failing this test procedure does not necessarily mean that the success criterion has not been satisfied in some other way, only that this technique has not been successfully implemented and can not be used to claim conformance.
Alex0000 Déconnexion sur Batterie [Samsung] Bonjour à tous J'ai un problème récurrent avec mon wifi quelque soit l'os ou la version de celui-ci. Après un petit temps sur batterie ( entre 5min et 1h) il se déconnecte de plus le gestionnaire de connexion ne montre plus aucun wifi. Sur Windows, l'outils de résolution de problème redémarre la carte et tout rentre dans l'ordre, si on omet la déconnexion ... Je n'ai pas trouver la commande qui permet de redémarrer la carte, ou une solution durable. Par contre si j'introduits : sudo /etc/init.d/networking restart ça fait tout planter sous 12.10, et sous 11.10 rien ne se passait ... Merci d'avance >> cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.10 DISTRIB_CODENAME=quantal DISTRIB_DESCRIPTION="Ubuntu 12.10" >> lsusb Bus 001 Device 002: ID 0ac8:c33f Z-Star Microelectronics Corp. Webcam Bus 008 Device 002: ID 0a5c:2151 Broadcom Corp. Bluetooth Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub >> lspci -k -nn | grep -A 3 -i net 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01) Subsystem: Askey Computer Corp. Device [144f:7167] Kernel driver in use: ath9k Kernel modules: ath9k 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Samsung Electronics Co Ltd Device [144d:c060] Kernel driver in use: r8169 Kernel modules: r8169 >> sudo lshw -C network *-network description: Interface réseau sans fil produit: AR9285 Wireless Network Adapter (PCI-Express) fabriquant: Atheros Communications Inc. identifiant matériel: 0 information bus: pci@0000:02:00.0 nom logique: wlan0 version: 01 numéro de série: 00:26:b6:66:d3:d6 bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-19-generic firmware=N/A ip=192.168.1.113 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn ressources: irq:16 mémoire:f6000000-f600ffff *-network description: Ethernet interface produit: RTL8101E/RTL8102E PCI Express Fast Ethernet controller fabriquant: Realtek Semiconductor Co., Ltd. identifiant matériel: 0 information bus: pci@0000:04:00.0 nom logique: eth0 version: 02 numéro de série: 00:24:54:18:83:2b taille: 10Mbit/s capacité: 100Mbit/s bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s ressources: irq:43 portE/S:3000(taille=256) mémoire:f4000000-f4000fff mémoire:f2000000-f200ffff mémoire:f2020000-f203ffff >> lsmod Module Size Used by samsung_laptop 14532 0 rfcomm 46619 12 parport_pc 32688 0 bnep 18140 2 ppdev 17073 0 binfmt_misc 17500 1 snd_hda_codec_hdmi 32007 1 snd_hda_codec_realtek 77876 1 ip6t_REJECT 12574 1 xt_hl 12521 6 ip6t_rt 12558 3 nf_conntrack_ipv6 14054 7 nf_defrag_ipv6 13158 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17349 10 xt_limit 12711 13 xt_tcpudp 12603 18 xt_addrtype 12635 4 xt_state 12578 14 joydev 17457 0 ip6table_filter 12815 1 ip6_tables 27207 2 ip6t_rt,ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12649 0 nf_nat 25254 1 nf_nat_ftp arc4 12529 2 nf_conntrack_ipv4 14480 9 nf_nat nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 nf_conntrack_ftp 13359 1 nf_nat_ftp snd_hda_intel 33491 5 ath9k 131308 0 snd_hda_codec 134212 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel mac80211 539908 1 ath9k i915 520629 3 nf_conntrack 82633 8 nf_conntrack_ipv6,xt_state,nf_conntrack_netbios_ns,nf_conntrack_broadcast,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp snd_hwdep 13602 1 snd_hda_codec snd_pcm 96580 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec ath9k_common 14055 1 ath9k snd_seq_midi 13324 0 drm_kms_helper 46784 1 i915 snd_rawmidi 30512 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61521 2 snd_seq_midi,snd_seq_midi_event coretemp 13400 0 snd_timer 29425 2 snd_pcm,snd_seq ath9k_hw 395218 2 ath9k,ath9k_common snd_seq_device 14497 3 snd_seq_midi,snd_rawmidi,snd_seq drm 275528 4 i915,drm_kms_helper snd 78734 19 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device iptable_filter 12810 1 ath 23827 3 ath9k,ath9k_common,ath9k_hw ip_tables 26995 1 iptable_filter microcode 22803 0 psmouse 95552 0 btusb 22474 0 i2c_algo_bit 13413 1 i915 lp 17759 0 serio_raw 13215 0 lpc_ich 17061 0 video 19335 2 samsung_laptop,i915 soundcore 15047 1 snd cfg80211 206566 3 ath9k,mac80211,ath bluetooth 209199 22 rfcomm,bnep,btusb parport 46345 3 parport_pc,ppdev,lp snd_page_alloc 18484 2 snd_hda_intel,snd_pcm x_tables 29711 13 ip6t_REJECT,xt_hl,ip6t_rt,ipt_REJECT,xt_LOG,xt_limit,xt_tcpudp,xt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables mac_hid 13205 0 r8169 61650 0 >> iwconfig wlan0 IEEE 802.11bgn ESSID:"homell" Mode:Managed Frequency:2.412 GHz Access Point: 00:C0:49:DA:DA:08 Bit Rate=48 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=42/70 Signal level=-68 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:1246 Invalid misc:82226 Missed beacon:0 >> ifconfig -a eth0 Link encap:Ethernet HWaddr 00:24:54:18:83:2b UP BROADCAST MULTICAST MTU:1500 Metric:1 Packets reçus:0 erreurs:0 :0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:0 (0.0 B) Octets transmis:0 (0.0 B) lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 Packets reçus:12999 erreurs:0 :0 overruns:0 frame:0 TX packets:12999 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:1715972 (1.7 MB) Octets transmis:1715972 (1.7 MB) wlan0 Link encap:Ethernet HWaddr 00:26:b6:66:d3:d6 inet adr:192.168.1.113 Bcast:192.168.1.255 Masque:255.255.255.0 adr inet6: fe80::226:b6ff:fe66:d3d6/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:636704 erreurs:0 :0 overruns:0 frame:0 TX packets:793819 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:458508453 (458.5 MB) Octets transmis:447953157 (447.9 MB) >> sudo iwlist scan wlan0 Scan completed : Cell 01 - Address: 00:C0:49:DA:DA:08 Channel:1 Frequency:2.412 GHz (Channel 1) Quality=46/70 Signal level=-64 dBm Encryption key:on ESSID:"homell" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 22 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000003723316ba Extra: Last beacon: 20ms ago IE: Unknown: 0006686F6D656C6C IE: Unknown: 010582848B962C IE: Unknown: 030101 IE: Unknown: 0706455520010D14 IE: Unknown: 2A0100 IE: Unknown: 32080C1218243048606C IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK IE: Unknown: DD0A0800280101000200FF0F Cell 02 - Address: 74:EA:3A:C8:2A:BA Channel:1 Frequency:2.412 GHz (Channel 1) Quality=26/70 Signal level=-84 dBm Encryption key:on ESSID:"TP LINK" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 12 Mb/s; 24 Mb/s; 36 Mb/s Bit Rates:9 Mb/s; 18 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000003784ac6f181 Extra: Last beacon: 240ms ago IE: Unknown: 00075450204C494E4B IE: Unknown: 010882848B960C183048 IE: Unknown: 030101 IE: Unknown: 050400010000 IE: Unknown: 2A0100 IE: Unknown: 32041224606C IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : TKIP CCMP Authentication Suites (1) : PSK Preauthentication Supported Cell 03 - Address: 00:0D:93:7E:5A:84 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=33/70 Signal level=-77 dBm Encryption key:on ESSID:"vax" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=0000021d69900189 Extra: Last beacon: 2036ms ago IE: Unknown: 0003766178 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 050401030000 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: Unknown: 32080C1218243048606C IE: Unknown: DD0700039301030000 IE: Unknown: DD06001018020300 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK Cell 04 - Address: 00:30:F1:FB:1D:A0 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=29/70 Signal level=-81 dBm Encryption key:on ESSID:"Kamerdelle 83 (philips)" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 22 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000002849f813122 Extra: Last beacon: 1996ms ago IE: Unknown: 00174B616D657264656C6C6520383320287068696C69707329 IE: Unknown: 010582848B962C IE: Unknown: 030106 IE: Unknown: 2A0100 IE: Unknown: 32080C1218243048606C >> uname -r -m 3.5.0-19-generic x86_64 >> cat /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback >> nm-tool NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: r8169 State: unavailable Default: no HW Address: 00:24:54:18:83:2B Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 [homell] ------------------------------------------------------ Type: 802.11 WiFi Driver: ath9k State: connected Default: yes HW Address: 00:26:B6:66:D3:D6 Capabilities: Speed: 48 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) *homell: Infra, 00:C0:49:DA:DA:08, Freq 2412 MHz, Rate 54 Mb/s, Strength 51 WPA TP LINK: Infra, 74:EA:3A:C8:2A:BA, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA2 Kamerdelle 83 (philips): Infra, 00:30:F1:FB:1D:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 30 WEP vax: Infra, 00:0D:93:7E:5A:84, Freq 2437 MHz, Rate 54 Mb/s, Strength 39 WPA IPv4 Settings: Address: 192.168.1.113 Prefix: 24 (255.255.255.0) Gateway: 192.168.1.20 DNS: 192.168.1.1 DNS: 192.168.1.2 >> sudo rfkill list 0: hci0: Bluetooth Soft blocked: yes Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no 2: samsung-wlan: Wireless LAN Soft blocked: no Hard blocked: no Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] up Hors ligne toutafai Re : Déconnexion sur Batterie [Samsung] Bonsoir, es-ce que ton portable est passé en veille quand il était sur batterie ? cela me fait penser a un pb de gestion d’énergie, regarde dans le bios si tu n'as pas des paramètres a ce niveau la ... la prochaine fois que cela t'arrive, et avant de bidouiller quoique ce soit donne le retour de dmesg |grep -e wlan0 -e wireless - e ath9k cat /var/lib/NetworkManager/NetworkManager.state iwconfig Ubuntu Server 12.04 x32 sur IBM P4. - XP / Seven / Ubuntu 14.04 x64 sur Lenovo ThinkPad. Purée d'unity...difficile de s'y faire - canon MG5350 - Logitech Quickcam 3000 - TNT Intuix S800 - Freebox v6 révolution (Trop de la balle !!!) Utilisateur d'Ubuntu depuis novembre 2006. 23 PC libérés grâce aux OS libres...et de 24 et quel 24 ...car pc au taff et pour le taff ^^ Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] Tout d'abord merci d'avoir pris le temps de me répondre. Au niveau de la veille, non ce n'est pas le cas. Ca arrive que je vienne de boot, sortir de veille, hibernation ou danse la polka! Mais je vais checker le bios; je n'y avais pas pensé. Je poste le retour dès qu'il me lache Dernière modification par Alex0000 (Le 16/12/2012, à 00:16) Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] Alors voici les réponse des commandes déconnexion 5 minute après le boot: oumpa@lap-oumpa:~$ cat /var/lib/NetworkManager/NetworkManager.state [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true oumpa@lap-oumpa:~$ dmesg |grep -e wlan0 -e wireless - e ath9k (entrée standard):[ 26.150266] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 26.152196] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 28.003299] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 28.011830] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 28.014495] wlan0: authenticated (entrée standard):[ 28.021134] wlan0: associating with AP with corrupt beacon (entrée standard):[ 28.024084] wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 28.030491] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=2) (entrée standard):[ 28.030579] wlan0: associated (entrée standard):[ 28.031443] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 64.677798] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 134.863515] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 135.261468] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 135.464020] wlan0: send auth to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 135.668056] wlan0: send auth to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 135.872048] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 138.554721] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 138.950754] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 139.152018] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 139.356034] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 139.560018] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 142.244187] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 142.641568] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 142.844040] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 143.048044] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 143.252038] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 145.932456] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 146.328310] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 146.532035] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 146.736044] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 146.940047] wlan0: authentication with 90:94:e4:d2:6a:56 timed out (entrée standard):[ 148.041997] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 152.628846] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 153.025956] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 153.228035] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 2/3) (entrée standard):[ 153.432036] wlan0: direct probe to 90:94:e4:d2:6a:56 (try 3/3) (entrée standard):[ 153.636059] wlan0: authentication with 90:94:e4:d2:6a:56 timed out grep: e: Aucun fichier ou dossier de ce type grep: ath9k: Aucun fichier ou dossier de ce type oumpa@lap-oumpa:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on Et j'ai également eu le problème en sortie de veille oumpa@lap-oumpa:~$ dmesg |grep -e wlan0 -e wireless - e ath9k (entrée standard):[ 24.374058] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 24.375350] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 93.939308] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 93.946484] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 93.948503] wlan0: authenticated (entrée standard):[ 93.955493] wlan0: associating with AP with corrupt beacon (entrée standard):[ 93.956258] wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 93.960312] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=3) (entrée standard):[ 93.960403] wlan0: associated (entrée standard):[ 93.961271] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 142.416984] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 267.450960] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 392.484797] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 517.416669] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 642.450021] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 731.526045] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=80.57.232.14 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=113 ID=10412 PROTO=UDP SPT=62076 DPT=59653 LEN=75 (entrée standard):[ 751.099943] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=98.92.180.142 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=107 ID=16706 PROTO=UDP SPT=22859 DPT=59653 LEN=75 (entrée standard):[ 768.406056] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 783.162810] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=124.168.49.52 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=3029 PROTO=UDP SPT=29378 DPT=59653 LEN=75 (entrée standard):[ 819.497707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=217.132.129.178 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=26558 PROTO=UDP SPT=27851 DPT=59653 LEN=75 (entrée standard):[ 833.219634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=62.255.232.114 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=6751 PROTO=UDP SPT=44096 DPT=59653 LEN=75 (entrée standard):[ 838.554992] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=31.29.211.30 DST=192.168.1.40 LEN=131 TOS=0x00 PREC=0x00 TTL=110 ID=20939 PROTO=UDP SPT=28782 DPT=59653 LEN=111 (entrée standard):[ 838.961321] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=113 ID=15283 PROTO=UDP SPT=52238 DPT=59653 LEN=75 (entrée standard):[ 838.968753] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=15290 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 842.173131] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=15658 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 848.441516] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=16309 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 893.439484] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 895.807103] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.77.54.203 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=116 ID=34953 PROTO=UDP SPT=40189 DPT=59653 LEN=75 (entrée standard):[ 915.616834] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=88.235.52.54 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=51 ID=38481 PROTO=UDP SPT=12408 DPT=59653 LEN=75 (entrée standard):[ 929.149248] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.56.93.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=48 ID=3585 PROTO=UDP SPT=44520 DPT=59653 LEN=38 (entrée standard):[ 935.936315] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=25452 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 939.075037] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=25805 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 945.667844] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=26558 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 987.611437] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.12.214.227 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=29457 PROTO=UDP SPT=35652 DPT=59653 LEN=38 (entrée standard):[ 990.714691] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.12.214.227 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=30297 PROTO=UDP SPT=35652 DPT=59653 LEN=38 (entrée standard):[ 995.141942] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=97.89.80.143 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=45 ID=0 DF PROTO=UDP SPT=52053 DPT=59653 LEN=38 (entrée standard):[ 998.150317] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=97.89.80.143 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=45 ID=0 DF PROTO=UDP SPT=52053 DPT=59653 LEN=38 (entrée standard):[ 1013.053142] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.12.78 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=107 ID=29277 PROTO=UDP SPT=49849 DPT=59653 LEN=38 (entrée standard):[ 1019.498528] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1040.040006] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.207.145.115 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=52 ID=65369 PROTO=UDP SPT=25444 DPT=59653 LEN=75 (entrée standard):[ 1061.804785] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.194.22.124 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=44978 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1083.508559] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=90.222.218.33 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=8472 PROTO=UDP SPT=52238 DPT=59653 LEN=38 (entrée standard):[ 1102.080120] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=11246 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1134.221457] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.156.13.168 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=2574 PROTO=UDP SPT=63186 DPT=59653 LEN=75 (entrée standard):[ 1141.996297] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=83.86.228.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=52 ID=19364 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1172.390827] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.58.123.28 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=34736 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1194.933595] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=83.86.228.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=52 ID=44641 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1210.708901] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=70.79.66.115 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=53868 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1233.959743] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=89.134.77.194 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=23293 PROTO=UDP SPT=16087 DPT=59653 LEN=38 (entrée standard):[ 1250.906090] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=111.92.70.150 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=34775 PROTO=UDP SPT=60879 DPT=59653 LEN=38 (entrée standard):[ 1265.827604] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.74.93.30 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=47137 PROTO=UDP SPT=53646 DPT=59653 LEN=75 (entrée standard):[ 1300.548513] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=22631 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1303.681009] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=79.176.228.51 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=22804 PROTO=UDP SPT=20094 DPT=59653 LEN=38 (entrée standard):[ 1320.598826] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.19.46.240 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=4764 PROTO=UDP SPT=22500 DPT=59653 LEN=38 (entrée standard):[ 1340.837248] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.208.16.136 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=53 ID=7458 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1377.800565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=105.236.1.136 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=112 ID=31017 PROTO=UDP SPT=23024 DPT=59653 LEN=75 (entrée standard):[ 1380.559154] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=84.229.137.34 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=14304 PROTO=UDP SPT=31025 DPT=59653 LEN=38 (entrée standard):[ 1414.874938] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.104.215 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=108 ID=63320 PROTO=UDP SPT=28335 DPT=59653 LEN=38 (entrée standard):[ 1424.599687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=220.233.104.215 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=108 ID=64041 PROTO=UDP SPT=28335 DPT=59653 LEN=38 (entrée standard):[ 1443.223070] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=2.28.206.100 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=27383 PROTO=UDP SPT=17720 DPT=59653 LEN=75 (entrée standard):[ 1471.324528] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=195.182.146.122 DST=192.168.1.40 LEN=129 TOS=0x00 PREC=0x00 TTL=112 ID=5359 PROTO=UDP SPT=23170 DPT=59653 LEN=109 (entrée standard):[ 1490.903537] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=89.134.77.194 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=11616 PROTO=UDP SPT=16087 DPT=59653 LEN=38 (entrée standard):[ 1501.859829] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.102.238.59 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=23378 PROTO=UDP SPT=15444 DPT=59653 LEN=38 (entrée standard):[ 1521.273515] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=10836 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1549.567479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.14.65.234 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=27835 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1561.517882] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.58.123.28 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=55417 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 1584.237946] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=120.61.30.238 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=47 ID=20806 PROTO=UDP SPT=11536 DPT=59653 LEN=75 (entrée standard):[ 1597.574215] wlan0: deauthenticating from 90:94:e4:d2:6a:56 by local choice (reason=3) (entrée standard):[ 1710.513007] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready (entrée standard):[ 1715.476678] wlan0: authenticate with 90:94:e4:d2:6a:56 (entrée standard):[ 1715.489049] wlan0: send auth to 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 1715.491074] wlan0: authenticated (entrée standard):[ 1715.497563] wlan0: associating with AP with corrupt beacon (entrée standard):[ 1715.500070] wlan0: associate with 90:94:e4:d2:6a:56 (try 1/3) (entrée standard):[ 1715.504166] wlan0: RX AssocResp from 90:94:e4:d2:6a:56 (capab=0xc11 status=0 aid=3) (entrée standard):[ 1715.504256] wlan0: associated (entrée standard):[ 1715.505204] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready (entrée standard):[ 1730.305546] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1754.786892] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=801 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1758.161750] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1181 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1759.090101] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=151.32.245.102 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=54523 PROTO=UDP SPT=22496 DPT=59653 LEN=38 (entrée standard):[ 1764.508810] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1968 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 1764.955625] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=81.98.226.158 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=14611 PROTO=UDP SPT=56154 DPT=59653 LEN=38 (entrée standard):[ 1774.519661] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=81.98.226.158 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=18039 PROTO=UDP SPT=56154 DPT=59653 LEN=38 (entrée standard):[ 1834.224178] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=17980 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1837.336063] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=18272 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1843.812995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=18884 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1855.397420] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 1873.437599] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14608 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1876.809734] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14611 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1881.611970] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=178.191.186.216 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=52 ID=47871 PROTO=UDP SPT=60000 DPT=59653 LEN=28 (entrée standard):[ 1883.331092] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14613 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 1935.277466] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=27555 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1938.567180] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=27898 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1944.954786] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=28531 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 1967.219070] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.160.33.28 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=4168 PROTO=UDP SPT=12743 DPT=59653 LEN=75 (entrée standard):[ 1980.493396] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2015.497269] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=201.83.130.199 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=26048 PROTO=UDP SPT=55710 DPT=59653 LEN=38 (entrée standard):[ 2020.543353] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.231.94.68 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=29440 PROTO=UDP SPT=64609 DPT=59653 LEN=38 (entrée standard):[ 2040.553047] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=67.83.29.46 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=112 ID=14617 PROTO=UDP SPT=34519 DPT=59653 LEN=38 (entrée standard):[ 2062.151883] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.85.156.51 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=52 ID=7982 PROTO=UDP SPT=10633 DPT=59653 LEN=75 (entrée standard):[ 2081.939014] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.175.8.166 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=23359 PROTO=UDP SPT=39465 DPT=59653 LEN=38 (entrée standard):[ 2126.170640] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=91.153.62.95 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=19149 PROTO=UDP SPT=57555 DPT=59653 LEN=38 (entrée standard):[ 2129.301479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=91.153.62.95 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=19183 PROTO=UDP SPT=57555 DPT=59653 LEN=38 (entrée standard):[ 2141.292364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.219.229.48 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=20686 PROTO=UDP SPT=14063 DPT=59653 LEN=38 (entrée standard):[ 2161.947708] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.171.135.88 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=30662 PROTO=UDP SPT=52394 DPT=59653 LEN=38 (entrée standard):[ 2180.779086] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=86.131.227.154 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=968 PROTO=UDP SPT=37508 DPT=59653 LEN=38 (entrée standard):[ 2210.214811] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=142.161.150.73 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=7473 PROTO=UDP SPT=42221 DPT=59653 LEN=75 (entrée standard):[ 2219.820455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=5.55.143.70 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=22000 PROTO=UDP SPT=23100 DPT=59653 LEN=38 (entrée standard):[ 2250.601778] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.77.105.79 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=3264 PROTO=UDP SPT=57921 DPT=59653 LEN=38 (entrée standard):[ 2261.198314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=31678 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2281.740059] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.70.135 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=15661 PROTO=UDP SPT=42914 DPT=59653 LEN=38 (entrée standard):[ 2305.697614] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.16.105.104 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=43 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 2320.476787] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.36.7 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=2395 DF PROTO=UDP SPT=53094 DPT=59653 LEN=38 (entrée standard):[ 2355.550727] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2360.263186] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=94.175.8.166 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=824 PROTO=UDP SPT=39465 DPT=59653 LEN=38 (entrée standard):[ 2388.989681] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=197.163.27.6 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=3180 PROTO=UDP SPT=13662 DPT=59653 LEN=75 (entrée standard):[ 2419.111319] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.52.131.16 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=12036 PROTO=UDP SPT=24797 DPT=59653 LEN=38 (entrée standard):[ 2422.129755] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=186.52.131.16 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=46 ID=12037 PROTO=UDP SPT=24797 DPT=59653 LEN=38 (entrée standard):[ 2440.798577] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.171.135.88 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=6599 PROTO=UDP SPT=52394 DPT=59653 LEN=38 (entrée standard):[ 2480.327857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=109.186.73.51 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=110 ID=27228 PROTO=UDP SPT=10035 DPT=59653 LEN=75 (entrée standard):[ 2480.482212] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2505.439606] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=92.239.224.137 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=16891 PROTO=UDP SPT=24874 DPT=59653 LEN=38 (entrée standard):[ 2520.326512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=216.8.132.206 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=23722 PROTO=UDP SPT=24252 DPT=59653 LEN=38 (entrée standard):[ 2542.139479] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=227 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2561.215791] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=178.174.218.92 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=109 ID=21615 PROTO=UDP SPT=44003 DPT=59653 LEN=38 (entrée standard):[ 2582.414098] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=78.56.233.179 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=39904 PROTO=UDP SPT=32656 DPT=59653 LEN=38 (entrée standard):[ 2605.516221] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2622.833575] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=69.255.40.5 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=112 ID=9040 PROTO=UDP SPT=46587 DPT=59653 LEN=75 (entrée standard):[ 2642.591036] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.77.105.79 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=23338 PROTO=UDP SPT=57921 DPT=59653 LEN=38 (entrée standard):[ 2694.101037] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=18158 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2695.587437] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=7133 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2703.476545] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=96.49.53.116 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=19215 PROTO=UDP SPT=20915 DPT=59653 LEN=38 (entrée standard):[ 2730.549805] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2741.555802] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=87.218.129.117 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=114 ID=51510 PROTO=UDP SPT=25900 DPT=59653 LEN=75 (entrée standard):[ 2774.766776] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=190.194.164.229 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=43 ID=29859 PROTO=UDP SPT=60006 DPT=59653 LEN=75 (entrée standard):[ 2796.621132] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=50261 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2799.728894] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.241.8.40 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=54068 PROTO=UDP SPT=47908 DPT=59653 LEN=38 (entrée standard):[ 2819.736079] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=46.12.47.190 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=115 ID=15148 PROTO=UDP SPT=53080 DPT=59653 LEN=38 (entrée standard):[ 2855.583102] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 2868.305737] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=188.4.10.101 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=50 ID=36899 PROTO=UDP SPT=51422 DPT=59653 LEN=38 (entrée standard):[ 2895.625459] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=112.207.183.98 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=4325 PROTO=UDP SPT=29276 DPT=59653 LEN=38 (entrée standard):[ 2901.612169] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=88.4.172.58 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 2927.259899] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=15369 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 2965.894396] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.64.205.106 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=25444 PROTO=UDP SPT=11140 DPT=59653 LEN=38 (entrée standard):[ 2968.439507] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=142.68.204.60 DST=192.168.1.40 LEN=95 TOS=0x00 PREC=0x00 TTL=111 ID=9076 PROTO=UDP SPT=13176 DPT=59653 LEN=75 (entrée standard):[ 2980.515084] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 3019.653204] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=85.218.101.137 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=0 DF PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 3037.295512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.248.59.144 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=7792 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 3040.175927] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=77.248.59.144 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=49 ID=2659 PROTO=UDP SPT=51413 DPT=59653 LEN=38 (entrée standard):[ 3070.551776] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.181.220.237 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=55424 PROTO=UDP SPT=24417 DPT=59653 LEN=38 (entrée standard):[ 3080.195302] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=76.181.220.237 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=113 ID=56646 PROTO=UDP SPT=24417 DPT=59653 LEN=38 (entrée standard):[ 3105.548910] [UFW BLOCK] IN=wlan0 OUT= MAC=01:00:5e:00:00:01:00:24:c9:41:a5:b0:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=36 TOS=0x08 PREC=0x80 TTL=1 ID=0 DF PROTO=2 (entrée standard):[ 3130.350131] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=47 ID=14963 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3151.388237] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=19291 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3180.574097] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=26054 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3183.925917] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=114.78.37.163 DST=192.168.1.40 LEN=48 TOS=0x00 PREC=0x00 TTL=48 ID=26776 PROTO=UDP SPT=63724 DPT=59653 LEN=28 (entrée standard):[ 3224.265486] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=1523 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 3227.701115] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=177.2.165.29 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=2203 PROTO=UDP SPT=56892 DPT=59653 LEN=38 (entrée standard):[ 3241.520140] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=213.57.65.8 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=111 ID=5524 PROTO=UDP SPT=57708 DPT=59653 LEN=38 (entrée standard):[ 3260.165072] [UFW BLOCK] IN=wlan0 OUT= MAC=00:26:b6:66:d3:d6:00:24:c9:41:a5:b0:08:00 SRC=24.64.205.106 DST=192.168.1.40 LEN=58 TOS=0x00 PREC=0x00 TTL=114 ID=3288 PROTO=UDP SPT=11140 DPT=59653 LEN=38 (entrée standard):[ 3313.381138] wlan0: deauthenticating from 90:94:e4:d2:6a:56 by local choice (reason=3) (entrée standard):[ 3322.761675] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready grep: e: Aucun fichier ou dossier de ce type grep: ath9k: Aucun fichier ou dossier de ce type oumpa@lap-oumpa:~$ cat /var/lib/NetworkManager/NetworkManager.state [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true oumpa@lap-oumpa:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on Voila En espérant que tu puisse en tirer quelque chose. Hors ligne toutafai Re : Déconnexion sur Batterie [Samsung] tu peux donner le resultat de ping -c3 192.168.1.20 Ubuntu Server 12.04 x32 sur IBM P4. - XP / Seven / Ubuntu 14.04 x64 sur Lenovo ThinkPad. Purée d'unity...difficile de s'y faire - canon MG5350 - Logitech Quickcam 3000 - TNT Intuix S800 - Freebox v6 révolution (Trop de la balle !!!) Utilisateur d'Ubuntu depuis novembre 2006. 23 PC libérés grâce aux OS libres...et de 24 et quel 24 ...car pc au taff et pour le taff ^^ Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] Je suis un wifi différent donc je doute que l'ip demandé soit la même, aucune idée de ce qu'est le mais j'ai été chercher la nouvel ip du gateway ipv4 voila le résultat de la cmd pour les 2 ip, en précisant que le wifi est fonctionnel, et que je ne sais pas trop ce que je fais PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=3.02 ms 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=3.19 ms 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=3.32 ms --- 192.168.1.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 3.020/3.180/3.324/0.140 ms ping -c3 192.168.1.20 PING 192.168.1.20 (192.168.1.20) 56(84) bytes of data. From 192.168.1.41 icmp_seq=1 Destination Host Unreachable From 192.168.1.41 icmp_seq=2 Destination Host Unreachable From 192.168.1.41 icmp_seq=3 Destination Host Unreachable --- 192.168.1.20 ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2016ms pipe 3 J' Hors ligne toutafai Re : Déconnexion sur Batterie [Samsung] mais justement une gateway sur 192.168.1.20 c'est pas courant.... la gateway (paserelle en français) désigne la machine (routeur/pc qui partage internet) par laquelle vont transiter les informations pour allez (et venir) d'internet....c'est toi qui a mis cette info, c'est bien cela ? Ubuntu Server 12.04 x32 sur IBM P4. - XP / Seven / Ubuntu 14.04 x64 sur Lenovo ThinkPad. Purée d'unity...difficile de s'y faire - canon MG5350 - Logitech Quickcam 3000 - TNT Intuix S800 - Freebox v6 révolution (Trop de la balle !!!) Utilisateur d'Ubuntu depuis novembre 2006. 23 PC libérés grâce aux OS libres...et de 24 et quel 24 ...car pc au taff et pour le taff ^^ Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] Chez moi je suis l'unique poste ubuntu au sein d'un réseau windows avec entre autre un vpn et toute une floppée de serveur dont je ne suis pas admin, ce qui pourrais expliquer l'ip "étrange" vu qu'il ne vient pas d'une box je ne comprend pas trop "mis" l'info, j'ai simplement c/c ce qu'avais pondu le "wificheck" et j'en ai refais un pour le wifi que j'utilise actuellement. Mais mon soucis vient très clairement de ma machine car ca m'arrive partout. Hors ligne Alex0000 Re : Déconnexion sur Batterie [Samsung] Bon il semblerais que j'ai atteint un nouveau niveau, la déconnexion se fait également sur secteur ( chose que je n'avais jamais constater en 2 ans d'utilisation...). Juste la commande de redémarrage de la carte me serais bien utile. Dernière modification par Alex0000 (Le 25/12/2012, à 15:42) Hors ligne compte supprimé Re : Déconnexion sur Batterie [Samsung] Salut, Sur batterie tu peux essayer de désactiver le mode économie d'énergie de la carte wifi. Voir ici : http://forum.ubuntu-fr.org/viewtopic.php?id=1042031 Alex0000 Re : Déconnexion sur Batterie [Samsung] Je voulais dire secteur ... pas batterie Désolé. je vais quand même jeter un coup d'oeil sur le thread Hors ligne
Right now, I have Ubuntu 13.04, and using sudo-apt-get dist-upgrade does not get me the 13.10 update. Whenever I sudo-apt-update or upgrade lately, I get: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. It has been for a couple of months (I'm not actually sure, but it has been a long time), and I want to fix this somehow. Edit: from sudo cat /etc/apt/sources.list Edit 2: from sudo vim /etc/apt/sources.list to comment out the last two lines Edit 3: I found out the problem on my own. # deb cdrom:[Ubuntu 12.10 _Quantal Quetzal_ - Release amd64 (20121017.5)]/ quantal main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ raring main multiverse restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ raring-updates main multiverse restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ raring universe deb http://us.archive.ubuntu.com/ubuntu/ raring-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ raring-backports main universe multiverse restricted deb http://security.ubuntu.com/ubuntu raring-security main multiverse restricted deb http://security.ubuntu.com/ubuntu raring-security universe ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu raring partner # deb-src http://archive.canonical.com/ubuntu quantal partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu raring main deb-src http://extras.ubuntu.com/ubuntu raring main ## deb http://www.duinsoft.nl/pkg debs all ## deb http://ppa.launchpad.net/person/ppa/ubuntu karmic main
You should use the new print() statement, available with Python 2.6+ from __future__ import print_function print("hi there", file=f) The alternative would be to use: f = open('myfile','w') f.write('hi there\n') # python will convert \n to os.linesep f.close() # you can omit in most cases as the destructor will call if Quoting from Python documentation regarding newlines:On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string.
Kanor Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... D'aprés ce que je comprend Salome utilise code_aster Sinon pdf qui me semble intéressant http://calcul.math.cnrs.fr/Documents/Journees/dec2006/aster.pdf ça date un peu 2006 Hors ligne ossatureLibre Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Bonjour tout le monde, Effectivement pyopenshelter demande un bon ménage, voire même de le reprendre à zéro en le considérant comme une maquette (ça n'est pas forcemment un boulot enorme). ceci dit il fonctionne, je m'occupe de mettre la dernière "version" en ligne ce soir. Pour l'instant c'est juste une API permettant de piloter le code aster et de faire un post traitement selon les eurocodes. Pour l'installation du code aster, je trouve que le plus simple est de passer par Salome dont l'installation est assez simple et qui installe le code aster en même temps. Le mieux est de vous brancher sur le forum du code aster pour les problèmes d'installation et de prise en main.Il y a une communauté trés active. quelques conseils pour la prise en main du code aster: A la base c'est un logiciel qui ne fait que le calcul: pas d'interface graphique ni de visualisation intégrée:les données sont entrées dans 2 (en général) fichiers textes: - le fichier de maillage pour la géométrie - le fichier de commandes pour les chargements, les options de calcul, etc Le fichier de commande a cette particularité qu'il accepte des commandes en python, ce qui a déterminé mon choix de python pour pyopenshelter, les utilisateurs du code_aster sont en effet sensés connaitre python. Le calcul peut ce lancer par la console avec une ligne de commande, mais il faut pour renseigner le code aster un fichier (.export) qui indique les fichiers de données, et des renseignements sur la machine de l'utilisateur, il n'y a pas de doc trés claire sur ce fichier, le mieux est d'en créer un automatiquement avec l'interface ASTK (voir plus bas) Il existe quelques outils qui peuvent remplir le role d'interface graphique: - Salome meca: permet de modéliser une structure par une interface graphique, et de lancer un calcul du code aster directement. C'est un logiciel adapté à l'industrie, bien pour les éléments finis, mais pas bien adapté aux structures de type poutres. - ASTK: c'est une interface graphique qui permet de lancer les calculs: on indique dans une fenêtre le répertoire de travail, les fichiers de données (maillage et fichier de commande) et le nom des fichiers de résultats, et le calcul est lancé en cliquant - gmsh: logiciel libre de maillage qui peut servir à faire le modèle géométrique et à visualiser les résultats, ce logiciel à l'honneur d'être intégré dans la logithèque d'Ubuntu (logiciels d'education) A noter qu'il existe une distribution linux ( caeLinux ) qui est livrée avec le codeaster et plein d'autres logiciels de calcul. Quand à pyopenshelter, il permet de modéliser des structures de poutres dans un langage "pythonique", de visualiser les résultats en 3D (avec VTK), de lancer un calcul par le code aster, il y a quelques modules d'export (dxf, pdf), un module de combinaisons ELS ELU Eurocodes (pas encore sur le site), vérification de fleches et sections par eurocode5, calcul d'assemblages (pointes et boulons) par eurocode5... bref il fait beaucoup de choses, plus tout ce qu'on peut imaginer avec les très nombreuses bibliothèques python existantes. Mais dans l'immédiat il faut le voir comme un projet en début de développement, dans une optique plus vaste consistant à offrir au développeur python des outils de modélisation pour le bâtiment dans son langage préféré. Gérard Hors ligne j1100 Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... bon alors 2-3 questions juste comme ça... 1) ça sert à quoi de sortir de bazooka de la mort pour tuer une mouche? aster, c'est de la mef, non? je pense que pybar suffit largement pour des problèmes aux EC 2) est-ce que quelqu'un a les EC, même en partie. je sais que ça coûte bonbon! 3) je suis partant pour géotech, et mixte. je pense que ba/bp, il y en a pas mal qui connaisse. pareil pour cm. Hors ligne mc@ubuntu Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Bonjour, Comme dit ossatureLibre, si vous avez des problèmes d'installation de Code_Aster, allez sur www.code-aster.org (problème avec l'hébergement du site ces jours-ci...). Et pour ces problèmes d'installation, si quelqu'un veut se pencher sur la fabrication de paquets, à dispo pour en parler ! Si vous décidez d'utiliser Code_Aster dans votre nouvel outil, n'hésitez pas à vous faire connaître, demander des infos sur la meilleure manière de s'interfacer avec Code_Aster... Bravo pour votre analyse SaloméMéca / Code_Aster / astk : c'est tout juste ! (et pour les connaisseurs et ceux qui veulent lancer un calcul aster à la main : as_run pourra se passer d'un fichier export pour les calculs simples dans la prochain livraison) MC Dernière modification par mc@ubuntu (Le 20/01/2010, à 10:45) Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... bon alors 2-3 questions juste comme ça... 1) ça sert à quoi de sortir de bazooka de la mort pour tuer une mouche? aster, c'est de la mef, non? je pense que pybar suffit largement pour des problèmes aux EC Qu'est ce que ça veut dire "mef"? pyBar est plus simple mais oblige à décomposer le projet. Mais la possibilité de pouvoir créer la structure entière en une fois est intéressante aussi. Pour l'instant je trouve bien de découvrir et tester tout ce qui existe déjà. 2) est-ce que quelqu'un a les EC, même en partie. je sais que ça coûte bonbon! J'ai toutes les EC en .rtf et/ou .pdf. 3) je suis partant pour géotech, et mixte. je pense que ba/bp, il y en a pas mal qui connaisse. pareil pour cm. Excusez mon ignorance mais que veut dire "géotech et mixte", "ba/bp", et "cm" ? Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne mc@ubuntu Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... 1) ça sert à quoi de sortir de bazooka de la mort pour tuer une mouche? aster, c'est de la mef, non? je pense que pybar suffit largement pour des problèmes aux EC Certes, Code_Aster n'est pas développé dans cet objectif, mais il le fait très bien. A vous de choisir ! Si vous développez un module python pour les eurocodes, cela peut potentiellement intéresser d'autres utilisateurs de Code_Aster. Çà pourrait être intégré. MC PS MEF : méthode des éléments finis Dernière modification par mc@ubuntu (Le 20/01/2010, à 14:46) Hors ligne ossatureLibre Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Je n'utilise pas le code-aster et pyopenshelter pour tuer des mouches mais pour modéliser des structures dans un cadre professionnel, pour cela j'ai besoin: - de pouvoir modéliser en 3D pour etudier le contreventement de structures assymétriques - de pouvoir faire des calculs dynamiques ou modaux (séisme) - de pouvoir faire des calculs non linéaires (cables, flambement) - de pouvoir faire des calculs par éléments finis volumiques (modélisation du sol par exemple) - etc Tout cela le code_aster le fait très bien et même mieux que la plupart des logiciels du commerce. Mais forcemment cela a un coût: la complexité (que l'on retrouve dans une certaine mesure dans pyopenshelter!) Ce que le code_aster (et salome) fait mal ou pas: - la modélisation de systèmes de poutres (en particulier visualisation des sections, modélisation des liaisons entre poutres ...) pyopenshelter a été créé à l'origine dans ce but. - la vérification des sections et assemblages selon les eurocodes Le fait de le proposer sous forme de bibliothèque python offre de plus des possibilités de modélisation paramétrique énormes. j'ai mis la dernière "version" de pyopenshelter en ligne: http://www.pyopenshelter.com/doc/download/download.html Le danger de la programmation en particulier avec python, c'est que dés qu'on a besoin d'une fonctionnalité, il suffit ... de la rajouter! Ce qui explique qu'en un an de développement anarchique j'ai produit une bibliothèque hypertrophiée dans laquelle les visiteurs risquent de se perdre. D'où la necessité de créer une communauté d'utilisateurs et de développeurs (en fait tout utilisateur est développeur puisqu'un modèle n'est rien d'autre qu'un script python) pour proposer une API cohérente et bien documentée Gérard Hors ligne j1100 Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... géotech, c'est l'eurocode 7: calcul géotechnique ba/bp: béton armé et béton précontraint, cm:construction métallique. Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Salut à tous, Je suis en train de tester CaeLinux2009 en live-cd. Ca dépote ! Pas besoin de s'embéter à installer Salome ou code_aster sur Karmic ou autres. Mais j'arrive pas à faire grand chose... J'ai l'impression que c'est trop complet par rapport à ce que j'imaginais. Du coup je me sens dépassé par l'ampleur de la tâche. Je me demande si je dois persister à comprendre Salomé, code_aster etc... ou si, comme à l'air de penser j1100, il vaut mieu partir de quelque chose de plus simple comme pyBar. Pour l'instant je me sens incapable de produire quoique ce soit avec salome et code_aster. Si il y en a parmis vous qui utilise CaeLinux j'ai 2 questions : Faut il faire les MAJ ? Comme je suis en live_cd je peux pas mettre la distrib en full french mais j'imagine que tous les logiciels sont en anglais, non ? Existe t il des tutoriels en français pour débuter avec salome et code_aster et orientés structure ? (c'est pas que je sois anglophobe mais pour débuter j'aime bien ma langue maternelle) A Gérard : Dans Salome, est ce que tu modélises les éléments d'une structure avec des noeuds et des lignes ou avec des volumes ? Comment est ce que je peux appliquer des efforts sur ma structure directement dans salome ? exemple : http://www.hiboox.com/go/pictures/priva … 9.png.html Bonne journée ! Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne wwiki Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Je n'utilise pas le code-aster et pyopenshelter pour tuer des mouches mais pour modéliser des structures dans un cadre professionnel, pour cela j'ai besoin: - de pouvoir modéliser en 3D pour etudier le contreventement de structures assymétriques - de pouvoir faire des calculs dynamiques ou modaux (séisme) - de pouvoir faire des calculs non linéaires (cables, flambement) - de pouvoir faire des calculs par éléments finis volumiques (modélisation du sol par exemple) - etc Tout cela le code_aster le fait très bien et même mieux que la plupart des logiciels du commerce. Mais forcemment cela a un coût: la complexité (que l'on retrouve dans une certaine mesure dans pyopenshelter!) Ce que le code_aster (et salome) fait mal ou pas: - la modélisation de systèmes de poutres (en particulier visualisation des sections, modélisation des liaisons entre poutres ...) pyopenshelter a été créé à l'origine dans ce but. - la vérification des sections et assemblages selon les eurocodes Le fait de le proposer sous forme de bibliothèque python offre de plus des possibilités de modélisation paramétrique énormes. j'ai mis la dernière "version" de pyopenshelter en ligne: http://www.pyopenshelter.com/doc/download/download.html Le danger de la programmation en particulier avec python, c'est que dés qu'on a besoin d'une fonctionnalité, il suffit ... de la rajouter! Ce qui explique qu'en un an de développement anarchique j'ai produit une bibliothèque hypertrophiée dans laquelle les visiteurs risquent de se perdre. D'où la necessité de créer une communauté d'utilisateurs et de développeurs (en fait tout utilisateur est développeur puisqu'un modèle n'est rien d'autre qu'un script python) pour proposer une API cohérente et bien documentée Gérard Bonjour , Je trouve la proposition intéressante . Par contre ma plus grosse difficulté est plus informative. Ou puis-je consulter les EUROCODES ? De plus , je vous suggère de faire un poste de lancement sur le site http://caelinux.com/CMS/index.php?option=com_joomlaboard&Itemid=52 Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Salut wwiki, Bienvenue parmis nous ! Contactes moi pour les Eurocodes. Pour ce qui est de lancer le projet à plus grande échelle j'attends d'avoir finit quelques épreuves et projet en cours (étudiant en licence professionnelle) pour m'y replonger sérieusement. A+ Dernière modification par Peterpan12 (Le 13/02/2010, à 14:17) Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne wwiki Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Re - bonjour à tous, Pour ma part , je trouve l'orientation donnée par ossatureLibre , la plus intéressante sur le long terme et la plus rapide à mettre en oeuvre par rapport à l'ensemble des Eurocodes. En plus Peterpan12 comme tu connais python , il n'y a pas de problème. Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Oula, "connaitre" python est un bien grand mot. Je n'y connais pas grand chose en réalité, même si je comprend vite, je n'ai eu aucune formation en programmation. Et le script que je montre au début du topic est tout simplement la transposition de codes trouvés sur Internet. Maintenant il est vrai que l'orientation donnée par ossatureLibre répond a un besoin professionnel et ceci pour tout les corps de métiers. Je suis aussi d'accord sur le fait que c'est la solution la plus intéressante sur le long terme mais cela va demandé une étude approfondie de code_aster (pour ma part en tous cas). Peterpan12, pressé de finir le semestre et de partir en stage pour se pencher sur ce projet. Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne wwiki Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Merci pour ta réponse Peterpan12 , Quand tu es disponibles , je peux te faire un topo rapide sur les outils qu'ossatureLibre a décrit . C'est vrai qu'ils sont d'un abord rude, mais pour ma part , je trouve que l'on va vraiment beaucoup plus loin qu'avec les logiciels commerciaux. Allez bon fin de semestre . Dernière modification par wwiki (Le 18/02/2010, à 23:47) Hors ligne fabdark Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... salut je suis intéressé par votre projet y a t'il une première version fonctionnelle ? Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... salut fabdark, Le projet que nous développons ici n'a aucune version pour l'instant. Les seules choses que tu peus tester c'est : pyOpenShelter de ossatureLibre et aussi mon script dans mon premier message. Voilà, si tu as des scripts ou des programmes dans ce domaine, faits nous en part. Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne fabdark Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... malheureusement je suis en BTS SCBH mais je n'ai aucune notion de programmation donc vous donner un coup de main serai un grand plaisir mais a par le calcul aux eurocodes je m'en arrête là, mais j'imagine que vous connaissez bien tout cela. sinon je veut bien testé le programme de Ossature libre si il veut bien m'expliquer comment faut l'installer merci d'avance a lui. car je dois avoué j'aime bien son concept de modéliseur sinon je peut répondre a certaine question sur les eurocodes sur le bois et le vent si il y a besoins je suivrai ce sujet de près en m'impliquant au maximum Hors ligne Julius Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Un de mes prof' est assez intéressé par la démarche... Il aimerait savoir si certains d'entre vous seraient intéressés par s'impliquer vraiment dans la programmation, en partant de ce qu'ils ont déjà fait : http://www.geuz.org/gmsh/ Apparemment le maillage fonctionne super bien et là ils sont en train d'implémenter les éléments finis dans l'idée de pouvoir communiquer avec d'autres programmes déjà existants (genre code aster j'imagine). Bref, si ça vous tente, c'est un projet d'une grosse envergure mais il faut du temps et connaitre le c++ Les logiciels libres à Sambreville avec le SambreLUG. Hors ligne Brain Bug Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Bonsoir, Je suis pas mal intéressé par ce projet que je compte suivre. De là à m'impliquer, je pense ne pas avoir le niveau. (et dans le cas contraire, je me manifesterai) Sinon j'ai une question sur ces propos : Un de mes prof' est assez intéressé par la démarche... Il aimerait savoir si certains d'entre vous seraient intéressés par s'impliquer vraimentdans la programmation, en partant de ce qu'ils ont déjà fait : http://www.geuz.org/gmsh/ Apparemment le maillage fonctionne super bien et là ils sont en train d'implémenter les éléments finis dans l'idée de pouvoir communiquer avec d'autres programmes déjà existants (genre code aster j'imagine). Bref, si ça vous tente, c'est un projet d'une grosse envergure mais il faut du temps et connaitre le c++ Au boulot on utilise gmsh pour mailler les pièces et calculer avec code aster si j'ai bien compris. Donc il communique déjà avec Aster non ? J'ai raté quelque chose ? Hors ligne captnfracasse Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Salut, salut !! Je suis aussi en SCBH sur Nantes. Ce projet m'interesse beaucoup. J'avais déjà commencé un site en php sur le calcul de charge (malheureusement, j'ai tout perdu) . Je suis en fin de deuzième année (donc bientot le projet) et ce petit programme m'avait aidé à chaque fois. Je vais rééssayer Salome et code aster. Je les avais déjà testé au debut de ma formation et cela m'avait paru très compliqué. Je voudrais lancer un site communautaire sur la construction (open). J'ai déjà mis un site nuked >> http://www.konstruction.tk/ . et un chan irc sur spidernet.org : #konstruction . A+ Baptiste. captnfracasse Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... up stephane creativbois Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... bonjour a tous je de tout coeur avec vous ! c'est beau des projets comme ceux ci je suis dans le bois mais pas dans la programmation désolé (vous voyez l'intérêt que je porte a votre projet lol ) car les calcul a la main ....:P bien des logiciels mais pas donné et je le comprend vu le travail dessus Hors ligne toitoinebzh Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... salut pour ceux intéresser scilab développe un module pour l'eurocodes http://www.scilab.org/projects/development/eurocodes toitoinebzh@jabber.fr http://toitoinebzh.blog.free.fr Hors ligne VinsS Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Bonjour, Une version PyQt du script de Peterpan12 #! /usr/bin/env python # -*- coding: utf-8 -*- import os from xml.dom import minidom from PyQt4 import QtCore, QtGui class TopWindow(object): c_folder = os.getcwdu() f_datas = os.path.join(c_folder, "materiau.xml") gM = 0 kh = 0 def setup_gui(self, MainWindow): MainWindow.resize(350, 500) MainWindow.setWindowTitle("Calcul d'ossature") self.centralwidget = QtGui.QWidget(MainWindow) self.gridLayout_2 = QtGui.QGridLayout(self.centralwidget) self.verticalLayout = QtGui.QVBoxLayout() self.gridLayout = QtGui.QGridLayout() self.title_0 = QtGui.QLabel(self.centralwidget) self.font = QtGui.QFont() self.font.setFamily("AlMateen") self.font.setWeight(75) self.font.setBold(True) self.title_0.setFont(self.font) self.title_0.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.title_0, 0, 0, 1, 1) spacerItem = QtGui.QSpacerItem(108, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum) self.gridLayout.addItem(spacerItem, 0, 1, 1, 1) self.label_0 = QtGui.QLabel(self.centralwidget) self.label_0.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_0, 1, 0, 1, 1) self.class_meca_cmb = QtGui.QComboBox(self.centralwidget) self.gridLayout.addWidget(self.class_meca_cmb, 1, 1, 1, 1) self.label_1 = QtGui.QLabel(self.centralwidget) self.label_1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_1, 2, 0, 1, 1) self.base_spb = QtGui.QSpinBox(self.centralwidget) self.base_spb.setMaximum(9000000) self.gridLayout.addWidget(self.base_spb, 2, 1, 1, 1) self.label_2 = QtGui.QLabel(self.centralwidget) self.label_2.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_2, 3, 0, 1, 1) self.height_spb = QtGui.QSpinBox(self.centralwidget) self.height_spb.setMaximum(9000000) self.gridLayout.addWidget(self.height_spb, 3, 1, 1, 1) self.label_3 = QtGui.QLabel(self.centralwidget) self.label_3.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_3, 4, 0, 1, 1) self.ft_lbl = QtGui.QLabel(self.centralwidget) self.gridLayout.addWidget(self.ft_lbl, 4, 1, 1, 1) self.label_4 = QtGui.QLabel(self.centralwidget) self.label_4.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_4, 5, 0, 1, 1) self.gm_lbl = QtGui.QLabel(self.centralwidget) self.gridLayout.addWidget(self.gm_lbl, 5, 1, 1, 1) self.label_5 = QtGui.QLabel(self.centralwidget) self.label_5.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_5, 6, 0, 1, 1) self.kh_lbl = QtGui.QLabel(self.centralwidget) self.gridLayout.addWidget(self.kh_lbl, 6, 1, 1, 1) self.title_2 = QtGui.QLabel(self.centralwidget) self.title_2.setFont(self.font) self.title_2.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.title_2, 7, 0, 1, 1) spacerItem1 = QtGui.QSpacerItem(108, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum) self.gridLayout.addItem(spacerItem1, 7, 1, 1, 1) self.label_8 = QtGui.QLabel(self.centralwidget) self.label_8.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.gridLayout.addWidget(self.label_8, 8, 0, 1, 1) self.N_dsb = QtGui.QSpinBox(self.centralwidget) self.N_dsb.setMaximum(999999999) self.gridLayout.addWidget(self.N_dsb, 8, 1, 1, 1) self.label_9 = QtGui.QLabel(self.centralwidget) self.label_9.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing |QtCore.Qt.AlignVCenter) self.label_9.setObjectName("label_9") self.gridLayout.addWidget(self.label_9, 9, 0, 1, 1) self.kmod_dsb = QtGui.QDoubleSpinBox(self.centralwidget) self.kmod_dsb.setMaximum(999999999) self.gridLayout.addWidget(self.kmod_dsb, 9, 1, 1, 1) self.verticalLayout.addLayout(self.gridLayout) self.horizontalLayout = QtGui.QHBoxLayout() self.kh_btn = QtGui.QToolButton(self.centralwidget) self.horizontalLayout.addWidget(self.kh_btn) self.proceed_btn = QtGui.QToolButton(self.centralwidget) self.horizontalLayout.addWidget(self.proceed_btn) self.verticalLayout.addLayout(self.horizontalLayout) self.horizontalLayout_2 = QtGui.QHBoxLayout() self.label_10 = QtGui.QLabel(self.centralwidget) self.horizontalLayout_2.addWidget(self.label_10) self.lineEdit = QtGui.QLineEdit(self.centralwidget) self.horizontalLayout_2.addWidget(self.lineEdit) self.quit_btn = QtGui.QToolButton(self.centralwidget) self.horizontalLayout_2.addWidget(self.quit_btn) self.verticalLayout.addLayout(self.horizontalLayout_2) self.gridLayout_2.addLayout(self.verticalLayout, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtGui.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 350, 25)) MainWindow.setMenuBar(self.menubar) self.statusbar = QtGui.QStatusBar(MainWindow) MainWindow.setStatusBar(self.statusbar) self.retranslate(MainWindow) self.cal = Calcul() self.i_dat = DataImport(self.f_datas) profils = ["Profils ..."] for p in self.i_dat.listmeca: profils.append(p) self.class_meca_cmb.addItems(profils) QtCore.QObject.connect(self.class_meca_cmb, QtCore.SIGNAL ("activated(int)"), self.set_profil) QtCore.QObject.connect(self.base_spb, QtCore.SIGNAL ("valueChanged(int)"), self.change_val) QtCore.QObject.connect(self.height_spb, QtCore.SIGNAL ("valueChanged(int)"), self.change_val) QtCore.QObject.connect(self.kh_btn, QtCore.SIGNAL ("clicked()"), self.kh_value) QtCore.QObject.connect(self.proceed_btn, QtCore.SIGNAL ("clicked()"), self.cal.calcul) QtCore.QObject.connect(self.quit_btn, QtCore.SIGNAL ("clicked()"), exit) def retranslate(self, MainWindow): self.title_0.setText(QtGui.QApplication.translate ("MainWindow", "Définition de l\'élément étudié :", None, QtGui.QApplication.UnicodeUTF8)) self.label_0.setText(QtGui.QApplication.translate ("MainWindow", "Classe mécanique :", None, QtGui.QApplication.UnicodeUTF8)) self.label_1.setText(QtGui.QApplication.translate ("MainWindow", "Base b :", None, QtGui.QApplication.UnicodeUTF8)) self.base_spb.setSuffix(QtGui.QApplication.translate ("MainWindow", " mm", None, QtGui.QApplication.UnicodeUTF8)) self.label_2.setText(QtGui.QApplication.translate ("MainWindow", "Hauteur h :", None, QtGui.QApplication.UnicodeUTF8)) self.height_spb.setSuffix(QtGui.QApplication.translate ("MainWindow", " mm", None, QtGui.QApplication.UnicodeUTF8)) self.label_3.setText(QtGui.QApplication.translate ("MainWindow", "ft, O, k :", None, QtGui.QApplication.UnicodeUTF8)) self.label_4.setText(QtGui.QApplication.translate ("MainWindow", "gM :", None, QtGui.QApplication.UnicodeUTF8)) self.label_5.setText(QtGui.QApplication.translate ("MainWindow", "Kh :", None, QtGui.QApplication.UnicodeUTF8)) self.title_2.setText(QtGui.QApplication.translate ("MainWindow", "Définition de l\'effort :", None, QtGui.QApplication.UnicodeUTF8)) self.label_8.setText(QtGui.QApplication.translate ("MainWindow", "N :", None, QtGui.QApplication.UnicodeUTF8)) self.N_dsb.setSuffix(QtGui.QApplication.translate ("MainWindow", " N", None, QtGui.QApplication.UnicodeUTF8)) self.label_9.setText(QtGui.QApplication.translate ("MainWindow", "kmod :", None, QtGui.QApplication.UnicodeUTF8)) self.kh_btn.setText(QtGui.QApplication.translate ("MainWindow", "kh ?", None, QtGui.QApplication.UnicodeUTF8)) self.proceed_btn.setText(QtGui.QApplication.translate ("MainWindow", "Calculer", None, QtGui.QApplication.UnicodeUTF8)) self.label_10.setText(QtGui.QApplication.translate ("MainWindow", "Taux de travail :", None, QtGui.QApplication.UnicodeUTF8)) self.quit_btn.setText(QtGui.QApplication.translate ("MainWindow", "Quitter", None, QtGui.QApplication.UnicodeUTF8)) def set_profil(self, idx): if idx: self.ft0k = eval(self.i_dat.get_value(idx, "ftok")) self.ft_lbl.setFont(self.font) self.ft_lbl.setText("".join([str(self.ft0k), " Mpa"])) self.gM = eval(self.i_dat.get_value(idx, "gM")) self.gm_lbl.setFont(self.font) self.gm_lbl.setText(str(self.gM)) self.kh_lbl.setText("") def change_val(self, v): self.base = self.base_spb.value() self.height = self.height_spb.value() self.kh_lbl.setText("") def kh_value(self): if self.gM and self.height: if self.gM == 1.3: if self.height >= 150: self.kh=1.0 elif self.height > 0 : kh = (150 / self.height) kh = pow(kh, 0.2) kh = min(1.3, kh) self.kh = (int(kh * 100)) / 100.0 else : self.statusbar.showMessage(u"Entrée incorrecte : Hauteur") return elif self.gM == 1.25: if self.height >= 600: self.kh = 1.0 elif self.height > 0 : kh = pow(600 / self.height, 0.1) kh = min(1.1, kh) self.kh = (int(kh * 100)) / 100.0 else : self.statusbar.showMessage(u"Entrée incorrecte : Hauteur") return self.kh_lbl.setFont(self.font) self.kh_lbl.setText(str(self.kh)) class DataImport(object): def __init__(self, datas): xmldoc = minidom.parse(datas) self.reflist = xmldoc.getElementsByTagName('profil') self.listmeca = [ligneX.attributes["id"].value for ligneX in self.reflist] def get_value(self, idx, d) : ligneX = self.reflist[idx] valeur_symbole = ligneX.attributes[d] valeur_symbole = valeur_symbole.value valeur_symbole = valeur_symbole.encode('utf-8') return valeur_symbole class Calcul(object): def __init__(self): pass def calcul(self): effort = float(gui.N_dsb.value()) Kmod = float(gui.kmod_dsb.value()) if effort and Kmod and gui.base and gui.height and gui.ft0k: aire = gui.base * gui.height st0d = effort / aire print type(gui.ft0k), type(gui.kh), type(Kmod), type(gui.gM) Ft0d = gui.ft0k * gui.kh * Kmod / gui.gM self.travail = (st0d / Ft0d) gui.lineEdit.setText(str(self.travail)) else: gui.statusbar.showMessage(u"Données insufisantes.") if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) MainWindow = QtGui.QMainWindow() gui = TopWindow() gui.setup_gui(MainWindow) MainWindow.show() sys.exit(app.exec_()) PyQt 4.4 ou sup. cordialement, vincent Hors ligne peterp@n Re : Idée projet Logiciel RDM Eurocodes Structures Bois, Acier, Béton... Au top cette version ! N'y aurait il pas moyen de séparer interface graphique et programme ? Je m'attaque à un petit script pour définir la charge de neige sur la toiture. Edit : Dans ton script VinsS il faut remplacer : ligne 159 self.label_3.setText(QtGui.QApplication.translate ("MainWindow", "ft, O, k :", None, par self.label_3.setText(QtGui.QApplication.translate ("MainWindow", "ft, 0, k :", None, C'est juste le "O" qu'il faut remplacer par un "0". De plus le "g" de "gM" est en réalité la lettre grec gamma. Si tu sais comment le faire apparaître ainsi, ce serait classe. Dernière modification par Peterpan12 (Le 05/05/2010, à 22:56) Ubuntu 12.04 64bits, Raspbian “wheezy”, Tango Studio sauce debian Un bureau d'études techniques pour le bâtiment avec des logiciels libre. Entreprise de construction bois. - Formations FreeCAD (logiciel 3d) Hors ligne
So the task i am trying to accomplish is for two string inputs such as '1' + '2' to return '3', I would like to be able to be able to do something like this, d = {'1': 1, '2': 2, '3': 3} So i have a dictionary like this ^, then I can do d.get('1') In hopes it would return 1, except it returns None, How would I work around this? Thank you for the help So thanks to your help i got it to work sort of, although for some reason it only accepts digits which add to 4 and lower, Here is the code so you may better understand def code_char(char, key): d = {'1': 1, '2':2 ,'3': 3 ,'4': 4,'5': 5 ,'6': 6 ,'7': 7 ,'8' :8 ,'9':9} f = {1: '1', 2: '2' ,3: '3' ,4: '4' ,'5':5 ,'6':6 ,'7':7 ,'8':8 ,'9':9} sum = d.get(char)+d.get(key) if sum < 9: print(f.get(sum)) else: sum = sum % 10 value = f.get(sum) print(value) code_char('1','5') For some reason code+char('1','3') will correctly return 3 but any higher and it will just print None. It's the start to my encrypter, thanks for the help so far!
Yes and no. The FK relationship is described at the class level, and mirrors the FK association in the database, so you can't add extra information directly in the FK parameter. Instead, I'd recommend having a string that holds the field name on the other table: class ValidationRule(models.Model): other = models.ForeignKey(OtherModel) other_field = models.CharField(max_length=256) This way, you can obtain the field with: v = ValidationRule.objects.get(id=1) field = getattr(v, v.other_field) Note that if you're using Many-to-Many fields (rather than a One-to-Many), there's built-in support for creating custom intermediary tables to hold meta data with the through option.
Mibixy Re : Live Voyager 12.10 bonjour, bonsoir, joyeux noël voilà je viens poster là parce qu'après de longues recherches, (peut-être pas au bon endroit.. pas avec les bons mots clefs je ne sais pas...) je ne trouve pas de script de connexion vpn pour les intégrer à wicd... NM fonctionne très mal chez moi. pourquoi ? je ne sais pas pas... même que ça serait une très bonne question. alors je souhaiterais garder wicd et y intégrer un script "d'après connexion" pour me connecter à un vpn pptp. j'ai regardé du coté d'autoconnect (fonctionnant avec NM) j'ai regardé sur le forum debian et en ai trouvé un, mais je n'arrive pas à le faire tourner... http://www.debian-fr.org/vpn-wicd-t29281.html je demande votre aide... merci. (à tous) Hors ligne ljere Re : Live Voyager 12.10 si tu ne nous en dis pas plus sur tes retours on va avoir du mal à t'aider Hors ligne Joalland Re : Live Voyager 12.10 Plop de nouveau. Félicitations, très très belle version d'Ubuntu ! J'ai cependant quelques questions, venant de KDE. Je trouve que certaines choses ne sont pas très visibles avec le thème choisit. Exemple : En effet dans la première image, il figure un bouton téléchargement peu visible en bas de la fênetre du logiciel Minitube et dans la seconde, il y a des flèches à coté des onglets, sur l'éditeur LateX Kile. Que puis-je faire pour remédier à cela, s'il vous plait ? Le lanceur qui affiche les dossiers dans la barre du haut et top, le fait qu'il propose d'ouvrir un terminal est pratique ! J'aimerais toutefois que lorsque je clique sur un dossier, celui s'ouvre directement, et que je n'ai pas à déplacer ma souris à droite pour cliquer sur "ouvrir le répertoire" au dessus de "ouvrir un terminal". En effet, j'ouvre plus souvent des dossiers qu'un terminal. Avez-vous une suggestion à ce propos ? Variety est vraiment top ! Je télécharge masse de fond d'écran magnifique, et avec un petit script bonjour madame c'est nettement plus fun ! Merci d'avance. Dernière modification par Joalland (Le 26/12/2012, à 18:19) Hors ligne ikaar Re : Live Voyager 12.10 Hello, qqun pourrait m'expliquer précisément comment installer un .bin et un .sh ? J'ai bien suivi les instructions dispo dans la doc ubuntu et je lance un terminal avec sudo ou sudo bash mais ça me dit tjs qu'il n'y a pas de fichier avec un tel nom, pourtant ce fichier est bien sur ma clef... Hors ligne kurapika29 Re : Live Voyager 12.10 ikaar, clic droit sur ton fichier, propriétés, permission, et cocher "autoriser ce fichier à être exécuté comme un programme" Plus qu'un double clic sur ton fichier et c'est bon Disponible sur IRC, sur le serveur irc.freenode.net salon ##ubuntu-voyager (et aussi sur plein d'autre serveur/salon) Venez si vous avec besoin d'aide ou pour causer ;) suffit d'avoir Xchat ou un autre client IRC Où sinon en cliquant sur se lien http://kiwiirc.com/client/irc.freenode. … tu-voyager Hors ligne metalux Re : Live Voyager 12.10 Bonjour à tous, Je profite de quelques jours de congés pour "scripter" un petit peu. j'ai simplement amélioré l'expérience de Voyager sans rien changé aux fonctionnalités choisies par rodofr. Enfin je n'en dis pas plus pour garder la surprise , j'attends ton retour rodofr pour te transmettre le 1er jet, il reste encore quelques bugs à corriger. J'espère que ça t'intéressera pour Voyager 13.04. A défaut de ton approbation pour intégrer ceux-ci dans la prochaine 13.04, je posterai sur le forum dès que j'ai fini pour ceux qui souhaitent en profiter si c'est possible, je ne connait rien aux licences et je ne sais pas si je peux redistribuer les scripts modifiés en dehors de Voyager. @ljere, Didier-T,etc.... Si rodofr retient ces scripts, j'aurai besoin d'un coup de main, je galère sur un bug depuis 2 jours que je n'arrive pas à résoudre et je fais appel à votre expérience. Xubuntu 14.04 LTS sur HP Pavilion t728.fr Precise Pangolin 64 bits sur Acer aspire 5738ZG Voyager 13.04 mise à niveau en 14.04 sur TOSHIBA Satellite C870-196. Faîtes la mise à jour de vos P.P.A. automatiquement Hors ligne ljere Re : Live Voyager 12.10 pas de soucis metalux tu peux poster ici ou sur le salon irc Hors ligne rodofr Re : Live Voyager 12.10 metalux@ toutes les bonnes idées sont les bienvenues sur voyager. Tu peux l'exposer ici. Après faut voir pour l'intégrer à voyager car je fais toujours attention à l’équilibre des éléments. Merci à toi bjorkenborg@super...je l'avais testé pour voyager 13.04 en remplacement de l'ancien doc. Mais problème il est trop lourd... A bientôt les ami(e)s Hors ligne jlfh0816 Re : Live Voyager 12.10 Bonjour, Bonsoir, Une petite question en passant mais dont votre réponse me ravirait: l'un d'entre-vous utilise-t-il une clé USB tuner TV pour recevoir la TNT (donc sans passer par une box) sur Voyager 12.10 ? Dans l'affirmative, laquelle fonctionne sans problème (si possible, nativement reconnue, ou sinon avec un peu de bidouillage mais pas trop ? C'est juste un conseil que je sollicite avant d'acheter une clé ... Merci d'avance de votre compréhension et bien entendu je vous souhaite à toutes et à tous une très bonne année ! modif du 08/02/2013: je me réponds à moi-même car finalement j'ai réussi à faire fonctionner correctement la cle TNT picostick 74e sous Voyager. Il suffisait en fait de passer à 12.10 (j'étais sous 11.10 à l'époque de mes premiers essais ...), de télécharger les 2 fichiers de firmware comme indiqué dans la doc Ubuntu page TNT) et la clé est automatiquement reconnue et bien fonctionnelle. Désolé d'avoir pollué ce topic pour un problème que je pouvais résoudre seul. Si ça peut servir à quelqu'un, j'en serai très heureux. Dernière modification par jlfh0816 (Le 08/02/2013, à 17:49) Hors ligne ljere Hors ligne jlfh0816 Re : Live Voyager 12.10 @ljere Merci beaucoup de ta réponse très rapide. En fait, la doc Ubuntu est la première source d'informations que j'ai consultée mais elle est m'a déjà induite en erreur (achat d'une carte PCTV picostick 74e donnée pour compatible alors qu'en fait elle ne l'est pas, le lien indiqué étant mort et donc le driver impossible à récupérer). C'est pour cela que je me permettais de solliciter l'avis des utilisateurs de la vraie vie, des utilisateurs de Voyager, afin de ne pas me planter une seconde fois. Mais merci quand même de ta réponse. Hors ligne rodofr Re : Live Voyager 12.10 InfinitelyGalactic vient de sortir le 29 une vidéo de voyager 12.10 Voyager OS 12.10 Review - Linux Distro Reviews lien ici jlfh0816@ ça va être difficile de t'aider sans clé tv. Il n'y a que les personnes en ayant une et l'ait testé sur voyager ou xubuntu pour t’aiguillonner. Moi j'en avait une mais je l'ai donné. Donc faut attendre sinon voir sur les forums us qui ont parfois plus d'infos que nous pour certaines choses. Faire une recherche ici dans google avec ceci ubuntuforums.org tuner usb ubuntu 12.10 ensuite tu auras des infos mais en anglais en fr google tuner usb ubuntu 12.10 Hors ligne metalux Re : Live Voyager 12.10 AVERTiSSEMENT Le script qui suit ne doit être utilisé qu'à titre de test et n'est pas entièrement fonctionnel seul(ne fonctionne pas avec les applis en terminal). Faites un copie du dossier ~/.scripts/Wall au préalable pour pouvoir rétablir la situation initiale si besoin. metalux@toutes les bonnes idées sont les bienvenues sur voyager. Tu peux l'exposer ici. Après faut voir pour l'intégrer à voyager car je fais toujours attention à l’équilibre des éléments. Merci à toi Le but est de rendre transparent pour l'utilisateur le changement de bureau, que ce soit par la souris, l'applet du panel haut, les raccourcis souris, etc..... la base part de ce script: #!/bin/bash # License GPL # Live Voyager rodofr@ # Script done by metalux for dynamic change wallpapers pidfile="/tmp/.pid" [[ -e $pidfile ]] || touch $pidfile ps -e | grep $(<"$pidfile") && kill $(<"$pidfile") echo $$ > "$pidfile" current_desktop=$(wmctrl -d | grep '*' | cut -d " " -f1) desktowall=$(( current_desktop+1 )) ~/.scripts/Wall/wall$desktowall while : do current_desktop=$(wmctrl -d | grep '*') a=${current_desktop%%\**} if [[ ! $a -eq $b ]];then if [[ $a -eq 0 ]];then ~/.scripts/Wall/wall1 elif [[ $a -eq 1 ]];then ~/.scripts/Wall/wall2 elif [[ $a -eq 2 ]];then ~/.scripts/Wall/wall4 elif [[ $a -eq 3 ]];then ~/.scripts/Wall/wall3 fi fi sleep 1 b=${current_desktop%%\**} done exit 0 ATTENTION, pour tester, il faut partir d'une configuration de base. Si vous avez mis de l'ordre entre les wall3 et wall4 qui sont inversés il faut corriger le script aux 2 derniers elif sinon les bureaux ne vont pas arrêter de sauter de un à l'autre lorsqu'on se rend sur le troisième ou quatrième.@rodofr Le gros du travail vient par la suite, en imbriquant la réaction de chaque script par rapport aux autres. Cela permet par exemple de garder le fond d'écran de "ranger" même si on change de bureau sans passer par le panel. Cela permet également de lancer Ranger directement même si Terminal(bouton 6) était ouvert alors qu'avant le 1er clic fermait le second ouvrait. Comme tu vois c'est plein de petits détails mais qui je crois améliore l'expérience utilisateur. Ça évite d'avoir également de multiples conky moc ouvert si on lance moc puis le ferme et le réouvre ultérieurement. L'inconvénient, ça consomme un peu de Cpu mais ça passe sans problème sur mon Voyager 12.04 et le Pc est de 2004 avec un Amd sempron 3000+. Néanmoins j'ai également fais un script pour l'ajouter ou retirer des applications au démarrage et qui permet également de désactiver cette fonction et se retrouver avec le comportement de base de Voyager 12.04 ou de régler une sensibilité d'affichage différente pour une consommation moindre. Donc, comme tu vois, ceux qui veulent l'active, ceux qui sont réfractaire aux changements le désactive.@tous Si ça vous intéresse, faites le moi savoir, je ferai une archive avec l'ensemble des scripts et une note explicative mais comme je travaille demain, maintenant ça sera pour l'année prochaine! Bonne fin d'année à tous. Dernière modification par metalux (Le 31/12/2012, à 07:56) Xubuntu 14.04 LTS sur HP Pavilion t728.fr Precise Pangolin 64 bits sur Acer aspire 5738ZG Voyager 13.04 mise à niveau en 14.04 sur TOSHIBA Satellite C870-196. Faîtes la mise à jour de vos P.P.A. automatiquement Hors ligne rodofr Re : Live Voyager 12.10 metalux@ merci, je vais tester tout ça et je te dirais. A bientôt et bonne année à tous... Dernière modification par rodofr (Le 31/12/2012, à 00:39) Hors ligne Sink-soul Re : Live Voyager 12.10 rodofr a écrit : metalux@toutes les bonnes idées sont les bienvenues sur voyager. Tu peux l'exposer ici. Après faut voir pour l'intégrer à voyager car je fais toujours attention à l’équilibre des éléments. Merci à toi Le but est de rendre transparent pour l'utilisateur le changement de bureau, que ce soit par la souris, l'applet du panel haut, les raccourcis souris, etc..... la base part de ce script:<metadata lang=Shell prob=1.00 /> #!/bin/bash # License GPL # Live Voyager rodofr@ # Script done by metalux for dynamic change wallpapers pidfile="/tmp/.pid" [[ -e $pidfile ]] || touch $pidfile ps -e | grep $(<"$pidfile") && kill $(<"$pidfile") echo $$ > "$pidfile" current_desktop=$(wmctrl -d | grep '*' | cut -d " " -f1) desktowall=$(( current_desktop+1 )) ~/.scripts/Wall/wall$desktowall while : do current_desktop=$(wmctrl -d | grep '*') a=${current_desktop%%\**} if [[ ! $a -eq $b ]];then if [[ $a -eq 0 ]];then ~/.scripts/Wall/wall1 elif [[ $a -eq 1 ]];then ~/.scripts/Wall/wall2 elif [[ $a -eq 2 ]];then ~/.scripts/Wall/wall4 elif [[ $a -eq 3 ]];then ~/.scripts/Wall/wall3 fi fi sleep 1 b=${current_desktop%%\**} done exit 0 ATTENTION, pour tester, il faut partir d'une configuration de base. Si vous avez mis de l'ordre entre les wall3 et wall4 qui sont inversés il faut corriger le script aux 2 dernierselifsinon les bureaux ne vont pas arrêter de sauter de un à l'autre lorsqu'on se rend sur le troisième ou quatrième.@rodofr Le gros du travail vient par la suite, en imbriquant la réaction de chaque script par rapport aux autres. Cela permet par exemple de garder le fond d'écran de "ranger" même si on change de bureau sans passer par le panel. Cela permet également de lancer Ranger directement même si Terminal(bouton 6) était ouvert alors qu'avant le 1er clic fermait le second ouvrait. Comme tu vois c'est plein de petits détails mais qui je crois améliore l'expérience utilisateur. Ça évite d'avoir également de multiples conky moc ouvert si on lance moc puis le ferme et le réouvre ultérieurement. L'inconvénient, ça consomme un peu de Cpu mais ça passe sans problème sur mon Voyager 12.04 et le Pc est de 2004 avec un Amd sempron 3000+. Néanmoins j'ai également fais un script pour l'ajouter ou retirer des applications au démarrage et qui permet également de désactiver cette fonction et se retrouver avec le comportement de base de Voyager 12.04 ou de régler une sensibilité d'affichage différente pour une consommation moindre. Donc, comme tu vois, ceux qui veulent l'active, ceux qui sont réfractaire aux changements le désactive.@tous Si ça vous intéresse, faites le moi savoir, je ferai une archive avec l'ensemble des scripts et une note explicative mais comme je travaille demain, maintenant ça sera pour l'année prochaine! Bonne fin d'année à tous. Bonsoir, Avoir une archive avec ton nouveau script et sans que les walls 3 et 4 soient inversés, ça m'intéresse beaucoup ! Bonnes fêtes. Dernière modification par Sink-soul (Le 31/12/2012, à 01:58) Voyager 13.10 sur HP-DV7 avec Windows 7 en Dual Boot Hors ligne metalux Re : Live Voyager 12.10 @ metalux, en fait tu as créer une sorte de demon en bash. quelle sont les bugs dont tu nous parlais ??? Si tu le dis c'est que ça doit être ça! Pour les bugs, les fonds d'écran de Ranger, terminal et Moc ne s'affiche pas correctement si un terminal est ouvert. Par contre aucun souci si on ouvre le terminal avant...comprend pas. Et au lieu de fermer le terminal (celui du panel bouton 6), ça réouvre un second au 2ème clic si un terminal classique est déjà ouvert. Mais pour pouvoir regarder tout ça, il faut d'abord que je finisse l'ensemble et que je te transmette l'intégralité des scripts. D'autant plus qu'ils sont pas forcément des plus lisibles, découvrant au fur et à mesure les commandes, j'ai utilisé plusieurs méthode différentes pour la même chose (enfin c'est comme ça que j'apprends). D'ailleurs je pense que tu pourras optimiser tout ça si tu es familier des pgrep,pidof,pid car je découvre.J'essaierai de commenter au moins un des 3 scripts(Ranger,terminal,Moc) qui posent soucis, le modèle de base étant basé sur le même principe.,mais pas sûr que j'ai le temps ces jours-ci. Cependant même si rodofr retient cette idée, on a encore le temps. En attendant, tu peux toujours donner ton avis en testant le script de base (sans utiliser les boutons ranger,moc et terminal). Xubuntu 14.04 LTS sur HP Pavilion t728.fr Precise Pangolin 64 bits sur Acer aspire 5738ZG Voyager 13.04 mise à niveau en 14.04 sur TOSHIBA Satellite C870-196. Faîtes la mise à jour de vos P.P.A. automatiquement Hors ligne jlfh0816 Re : Live Voyager 12.10 @rodofr Merci beaucoup de ta réponse, je vais faire mon enquête et en ferai profiter la communauté dès que j'aurai abouti à quelque chose de fiable. J'ai par ailleurs eu le plaisir de prendre contact avec skynet1994 qui va me donner un bon coup de main pour la PCTV picostick 74e (un grand merci à lui ! ). Je vais donc m'orienter en premier lieu sur cette carte et te tiens au courant dès que ça fonctionnera. Hors ligne Didier-T Re : Live Voyager 12.10 @ metalux, Je me suis pas mal creusé la tête, car pour un fonctionnement de type deamon il vaux mieux utiliser d'autre langage que le bash. Donc documentation sur python, et voici le résultat (consommation processeur nul) Si il y a des habitué de python dans le coin je demande un peut d'indulgence, c'est mon premier script dans ce langage bureau.py #! /usr/bin/python # -*- coding: utf-8 -*- import wnck import gtk import os ## Fonction qui modifie les fonds d'écrans au changement de Bureau def changement_de_bureau(screen, previous): ## on récupère le bureau actif actif = screen.get_active_workspace() ## on récupère son index index = screen.get_workspace_index(actif) ## on récupère son nom, pour plus tard nom = actif.get_name() if index == 0: os.system("~/.scripts/Wall/wall1") elif index == 1: os.system("~/.scripts/Wall/wall2") elif index == 2: os.system("~/.scripts/Wall/wall4") elif index ==3: os.system("~/.scripts/Wall/wall3") ## Lancement du daemon bureau = wnck.screen_get_default() bureau.connect('active-workspace-changed', changement_de_bureau) gtk.main() Dernière modification par Didier-T (Le 31/12/2012, à 19:32) Hors ligne metalux Re : Live Voyager 12.10 @Didier-T Je me suis pas mal creusé la tête Tu auras une excuses si tu as un mal de crâne demain car pour un fonctionnement de type deamon il vaux mieux utiliser d'autre langage que le bash Je ne le savais pas...mais j'aurais été de toute manière incapable d'écrire autrement qu'en bash. Donc documentation sur python, et voici le résultat Mais comment fais-tu pour apprendre si vite? Comment as-tu trouvé les modules nécessaires? Il s'agit d'une traduction du script que j'ai fais en bash à 1ère vue mais il t'as quand même fallu apprendre la syntaxe. Tu es déjà habitué avec d'autres langages? Après une approche basique au bash il y a un an, je me suis lancé en mars dernier mais tu n'imagines pas le temps passé pour progresser, le cerveau se ramoli avec l'âge . Si j'avais dû faire moi-même la traduction le script était près pour Voyager 20.10!!! Sinon joli travail cette traduction, félicitations. Par contre ce script est la base, je ne sais pas si du coup le reste fonctionnera correctement, bien que peut-être ça va faciliter les choses. Au pire le reste est plus un peaufinage de l'ensemble et ne devrais pas poser de problème particulier, d'autant plus que le script pour désactiver ou ajouter au démarrage n'est plus indispensable. J'essaierai quand même de finir au cas où il y a des idées à prendre. @rodofr Qu'en penses-tu, ça a quand même la classe! Et avec la réécriture par Didier-T, tu ne peux que l'intégrer n'est-ce pas? Même plus de consommation ram, le top quoi! Reste à finir l'intégration des boutons (freetux,moc,etc...) pour un affichage correct des Walls. Ou j'en suis, la fermeture de l'application restaure le fond d'écran du bureau par défaut. Par exemple, tu ouvres freetux, le wall qui l'accompagne s'affiche, tu le fermes (que ce soit par le panel ou la fermeture de la fenêtre), le fond d'écran du bureau 4 s'affiche...et le wall de freetux est toujours celui affiché même si on change de bureau entre temps tant que l'application n'est pas fermée. Du détail mais ça rend l'utilisation encore plus agréable. Bon Réveillon et rendez-vous en 2013. Xubuntu 14.04 LTS sur HP Pavilion t728.fr Precise Pangolin 64 bits sur Acer aspire 5738ZG Voyager 13.04 mise à niveau en 14.04 sur TOSHIBA Satellite C870-196. Faîtes la mise à jour de vos P.P.A. automatiquement Hors ligne Didier-T Re : Live Voyager 12.10 @ metalux, En fait je n'ai pas appris a programmer en python, j'ai simplement pris le temps de comprendre le fonctionnement de ce langage, après il suffit de chercher sur le net. Pour les modules, en cherchant je suis tombé la dessus. Le seul langage que je prend le temps d'apprendre est le Lua (pour les conkys). Ce script, est un lanceur, donc ont peut imaginer n'importe quoi comme fonction derrière. Reste a voir comment mettre en œuvres le reste de tes idées. Hors ligne metalux Re : Live Voyager 12.10 Pour les modules, en cherchant je suis tombé la dessus. Effectivement, ça aide beaucoup. Reste a voir comment mettre en œuvres le reste de tes idées. Testé rapidement sans modifs, les 1er scripts passent sans problème, finalment il n'y a que les liens vers l'ancien script qu'il faut associer au nouveau. Après ce n'est que ma façon de voir les choses, pas sûr que toutes les modifs que j'ai faites soit pertinentes et correspondent au comportement souhaité par la plupart. Mais je propose après à rodofr de retenir les meilleures idées de nous tous. En attendant c'est un excellent exercice d'apprentissage et je prends plaisir à le faire, c'est déjà ça. Bon,allez, je repars avant qu'on dise que je suis asocial. Xubuntu 14.04 LTS sur HP Pavilion t728.fr Precise Pangolin 64 bits sur Acer aspire 5738ZG Voyager 13.04 mise à niveau en 14.04 sur TOSHIBA Satellite C870-196. Faîtes la mise à jour de vos P.P.A. automatiquement Hors ligne
In almost all 2D platformers I've played, your avatar always starts off on the left side of the world, and continues on to the right. Is there something designers gain by doing this? I'm not sure if there's any technical reason, but a large portion of the world reads from left to right (and many early games were made in the portions that do, i.e. Pitfall!). So starting left and going right may seem more natural. Once enough of the early games did this, it became standard practice. Additionally, it could be from the roots of design. If these games were first designed with paper and pencils, it would feel more natural to go from left to right, as that's the way the designers were used to writing. So I suppose by continuing to do this, developers do gain something. Player familiarity. The likely history behind this would go as follows: In summation, we can give thanks to the Greeks for their contribution to modern gaming. It might be worth looking at film theory for an explanation. Cinematographers within Europe and the US often work under the assumption that left-to-right movement suggests power, whereas right-to-left movement suggests tension, something to fear, or awkwardness. Game designers most likely unconsciously adopted the conventions of movement suggested from other media, including print and film. The hero character generally moves left to right, facing enemies and conflict moving counter to that direction; if the hero has to retreat, moving back toward the left is awkward. It's certainly possible that some consideration was made as to which direction was the most natural, but I know that even preceding the Super Mario era, it was pretty common to see left to right movement as the starting point (Donkey Kong, Pac Man's starting direction, Defender, Parsec, etc). As for cultural context, one of my college professors in Japanology suggested that Japanese and Chinese film convention actually adopted the opposite convention from what I described above, though I'm not sure I see it in practice. It's possible that Hebrew, Farsi and Arabic films have the opposite convention from the US and Europe, though I'm not sure I've seen enough Arabic language films to notice. Lazy references (not quite formal reference material, but at least corroborating): Maybe it is just the natural way to implement it: Assume you start at position x = 0 and since x coordinates of pixels normally increase from left to right on a display the end of a platformers level is on the right side... The hardware on computers and console machines addressed the screen pixels from left to right. Just like writing or mathematics. Simply look at a graph: Typically the natural thing to do is to increase the position of a character from 0 to higher positive numbers. I don't really think so. It's probably just because Super Mario Bros. 1 did it, and everyone followed suit. I have heard somewhere, however, that things to the right look comfortable and thinks to the left and weirder looking. So maybe if everything scrolled to he left it would just be too weird. It all derives from the western writing system most people are used to, it reads and write from the left to the right and due to the amount of text nowadays this order is linked in our mind, from sketches, to coordinate systems, to visual flow. After a time it's perceived as natural flow, even though left and right actually plays almost no difference in nature. This of course finds itself into technology and everything else human made, such as games. The obvious answer is that Western people have a left to right and top to bottom reading system, which is learned very early on in our childhood. There is also no physical reason why the Cathode Ray Tube (CRT) writes from left to right and top to bottom, other than that this came most natural to the western scientists which developed it. The same holds true for how The interesting part is that, this There was an interesting publication on how different cultures scan and recognize images and their contents, measured through iris trackers. (I am not up to date, and this is not my field of expertise, so I am sure there is much more to be discovered on this topic) I think the convention goes back a lot further than people have alluded to. On a race track where contestants (people, horses, chariots, or whatever) orbit in a counter-clockwise direction, the contestants who are the side of the track nearer to any particular spectator will be moving left to right. Many artworks depicting races thus show contestants racing from left to right. I'm not sure that chariots were raced counter-clockwise 2,000 years ago, but that certainly is the common direction today and I have no particular reason to believe that it has changed over the years. While I don't know that right-to-left motion suggests "tension", narrative films generally use the left-to-right direction of motion to suggest the first forward progress, and use right-to-left motion to suggest movement in the other physical direction (films will generally avoid reversing the physical directions associated with screen directions unless a shot is included where the the camera crosses the "imaginary line" which establishes the directions). Im pretty sure this is a convention that was made implicitly by developers along the years. Its all about how we learn to think when we are little, while people in oriental countries usually learn to read and write right to left, and are able to think that way for other things, people from the western part of the world are usually tought to think of the concepts from left to right, and only that way everything makes sense. With games, probably it was just a matter of sticking with one of the ways, for the same reason, for example, that space is the jump key very often, its all about intuition at the end. It would be very hard to individuals to be good players in both settings, or at least as good as they would be if they learnt only one. That said, if the platformers happened to be in the opposite direction since the beggining , im sure everyone would find it natural. Early computers came from a number of places, all of which preferred left to right writing. Apple (from the Homebrew Computer Club in California, USA). Timex Sinclair (from Brittan) VIC 20 / PET / Commodore 64 (from Canada) IBM PC, 5150 (from the USA, designer from Boca Raton, Florida) TRS 80 (from Tandy Computers, Fort Worth, Texas) Other countries did have early computing efforts; however, they typically By the time that right to left processing became a real player, the chips, graphics cards, memory ordering, disk layout, and other items were firmly entrenched in left to right ordering. It doesn't At the software layer, bidi text processing and other technologies emulate non-left to right processing on top of systems that are oriented with left to right assumptions. If you feel very strongly about the screen position, you are free to write a software layer above the graphics positioning like so... void drawPixel(int x, int y) { realDrawPixel(screenWidth - x - 1, screenHeight - y - 1); // the "extra" minus one is to ensure that pixel 0 is on the screen } Not all platformers do go left to right, but one reason most do many do may have to do with control orientation. Nearly all controllers have the joystick/d-pad on the left, and other control buttons on the right, and it feels more natural to have one's thumb nearer the middle of the controller then to have it hanging off the edge. The only real counterexample I can think of is the N64 controller, which had the joystick in the middle. But even that had its d-pad on the left, and by the time more symmetrical controllers came into use, it was already an ingrained habit. Note that there are some games that go right-to-left, for example, Final Fantasy has had the players on the right in battles for a long time (although FF has never been a platformer). Nearly all right-to-left platform levels have been ones based on either sneaking/infiltration or returning from wherever you went in previous left-to-right levels. Metroid has its share of going right-to-left: Not only is the first and easiest-to-get item left of the start, but reaching the final boss is a right-to-left journey, all the way to the escape! The retold story Zero Mission "corrects" this by adding more to the story with one more final boss, the second escape being left-to-right. Metroid II (Return of Samus) and Super Metroid both have their final bosses to the left on the map, but that means their escapes are left-to-right. That is, from the boss close to the map's far-left, to the gunship closer to the middle. In Super Metroid, the gunship is still noticeably far left of center. The Mega Man Zero series is as much left-to-right as any other series, but it has its many exceptions. The first Zero game has its right-left-right missions in the desert (Find Shuttle) and Underground Lab (Retrieve Data). All other desert missions are purely right-to-left, starting in the desert by the Resistance Base and ending either in the desert far left or in the ocean cave with the hidden base. The mission to save the Resistance Base from being bulldozed is another right-to-left. In Zero 2's boss fight in the second-to-final stage, Zero is being chased right-to-left.
I’m a hacker, and I love to build stuff for the Web. Wednesday 3rd February, 2010 A year and two blogs ago, I wrote about the way I prefer to organize my Django projects and deployments. Since then, I’ve refined my approach, taking into account the experience I’ve accrued, and the changing nature of Django itself. In this blog post, I’ll share with you what I’ve found to be incredibly valuable techniques. The first step is to divide the Django deployment into two halves: a siteroot and a project root. The site root consists of a virtualenv at thetop level, and sub-directories for SQLite databases, temporary files, andseveral other files. A short description of each sub-directory is given below: SITE_ROOT/ |-- bin/ # Part of the virtualenv |-- cache/ # A filesystem-based cache |-- db/ # Store SQLite files in here (during development) |-- include/ # Part of the virtualenv |-- lib/ # Part of the virtualenv |-- log/ # Log files |-- pid/ # PID files |-- share/ # Part of the virtualenv |-- sock/ # UNIX socket files |-- tmp/ # Temporary files `-- uploads/ # Site uploads Obviously each of these components is optional, and will vary based on the requirements of the application. This listing represents the most you will usually ever need. The site root functions as a container for a specific deployment. All thefiles it contains are those that will vary from deployment to deployment, andmost importantly will change during the running of the site—log files, user uploads, filesystem-based caches and PID files all fall into this category. Forthis reason, the site root is not checked into version control. Conversely, the project root contains those files which remain immutablewhile the site is running. This means code, static media and configurationfiles. As a result, this directory is checked into version control. Youshould also make the project root a sub-directory of the site root; this willkeep everything in one place. The project directory will look like this: PROJECT_ROOT/ |-- apps/ # Site-specific Django apps |-- etc/ # A symlink to an `etcs/` sub-directory |-- etcs/ # Assorted plain-text configuration files |-- libs/ # Site-specific Python libs |-- media/ # Static site media (images, stylesheets, JavaScript) |-- settings/ # Settings directory |-- templates/ # Site-wide Django templates |-- .hgignore # VCS ignore file (can be .gitignore, .cvsignore, etc) |-- README # Instructions/assistance for other developers/admins |-- REQUIREMENTS # pip dependencies file |-- __init__.py # Makes the project root a Python package `-- urls.py # Root URLconf There are a few key things to notice: The apps/ directory contains individual Django apps that are specific tothis project. Anything generic should be specified in the REQUIREMENTSfile. The libs/ directory contains individual Python libraries—again, only thosespecific to this project. If a particular library deals with only one app,it should go into that app; libs/ is for the general-purpose tools thatdon’t fit inside a single app, but aren’t generic enough to be a separaterequirement or dependency. The etcs/ directory should contain a sub-directory for each deployment—soyou might have etcs/development/, etcs/staging/ and etcs/production/.etc/ is a symlink to one of these, and should be ignored by the VCS. Theseplain-text configuration directories might contain web server configs(lighttpd.conf, nginx.conf), Supervisor configs(supervisord.conf), and so on. REQUIREMENTS should be a pip requirements file.manage.py file. It turns out you can just use django-admin.py fromwithin the project root (although I use django-boss). As anadded bonus, these will both actually DJANGO_SETTINGS_MODULE environment variable, which is really important, asyou’ll see in a bit.settings/ in just a hot second. Managing settings across various deployments has been something of a standing problem in Django for a while now. It’s not one that can be fixed by a change to Django itself—in fact, Django is incredibly flexible when it comes to specifying settings, which has given developers room to experiment with a range of solutions. I think I’ve found one which works pretty well. Begin by breaking your settings into two groups: common settings, anddeployment-specific settings. Common settings are defined in asettings.common sub-module, and may include: Defaults for settings like DEBUG, ADMINS, MANAGERS andCACHE_MIDDLEWARE_SECONDS. If not subsequently overridden, the valuesspecified here will be used, so they should provide sensible defaults. App installation and setup: INSTALLED_APPS, MIDDLEWARE_CLASSES,ROOT_URLCONF and TEMPLATE_CONTEXT_PROCESSORS all fall under thiscategory. A basic logging setup. It’s good to define, but not install, several handlers—these can optionally be added to the root logger in the deployment-specific configuration. For example, in my common settings, I always do: import logging LOG_DATE_FORMAT = '%d %b %Y %H:%M:%S' LOG_FORMATTER = logging.Formatter( u'%(asctime)s | %(levelname)-7s | %(name)s | %(message)s', datefmt=LOG_DATE_FORMAT) CONSOLE_HANDLER = logging.StreamHandler() # defaults to stderr CONSOLE_HANDLER.setFormatter(LOG_FORMATTER) CONSOLE_HANDLER.setLevel(logging.DEBUG) In my development settings, I add CONSOLE_HANDLER to logging.root; inproduction, however, I use a file handler. Daniel Bruce kindly pointed out that if you’re using i18n in your project, you need to set LOCALE_PATHS = (PROJECT_ROOT / 'locale',) (a tuple). This is because Django’s translation machinery uses settings_mod.__file__ to find translations by default, and so is incompatible with package-based settings modules. Deployment-specific settings, in another settings.DEPLOYMENT_NAME sub-module, could consist of: DEBUG and TEMPLATE_DEBUGDATABASE_*CACHE_BACKENDEMAIL_HOST and EMAIL_PORTTIME_ZONESITE_IDlogging.root.setLevel()). After you’ve sorted out your settings, the settings/ directory should looksomething like this: settings/ |-- __init__.py # Empty; makes this a Python package |-- common.py # All the common settings are defined here |-- development.py # Settings for development |-- production.py # Settings for production `-- staging.py # Settings for staging To get these settings working, you just need to put the following at the top of each deployment-specific settings file: from common import * You’ll also need to set the DJANGO_SETTINGS_MODULE environment variable. From the PROJECT_ROOT: $ export DJANGO_SETTINGS_MODULE=settings.development # Mutatis mutandem $ echo "!!" >> ../bin/activate The last line will append the previous command to SITE_ROOT/bin/activate, so that every time you activate the virtualenv you’ll set the necessary variables. apps/ and libs/ In order to be able to import stuff from the apps/ and libs/ directories,you’ll need to add them to the module search path. Fortunately, this couldn’t besimpler: you’ll need to have the following in settings/common.py: import sys from path import path PROJECT_ROOT = path(__file__).abspath().dirname().dirname() SITE_ROOT = PROJECT_ROOT.dirname() sys.path.append(SITE_ROOT) sys.path.append(PROJECT_ROOT / 'apps') sys.path.append(PROJECT_ROOT / 'libs') This code must be executed before Django attempts to import any of the installedapps or libraries. This is the reason why I recommend putting it in settings/common.py. See below for more information on path.path. path Jason Orendorff’s path module is so helpful in building yoursettings files. Whether you’re dynamically computing settings like MEDIA_ROOT,or just dealing with files and directories in general, this module makes thingsinfinitely cleaner and easier; you can do away with massively nested calls likeos.path.join(os.path.dirname(os.path.abspath(...))). I’ve been using it forages; I even blogged about it several months ago. Now, a simple easy_install won’t work due to Jason’s website being down.However, you can manually fetch it by putting the following line inREQUIREMENTS: http://pypi.python.org/packages/source/p/path.py/path-2.2.zip It will, however, raise a deprecation warning when you import it. Just use thefollowing trick to avoid that when you import it the first time (usually at thetop of settings/common.py): import warnings; warnings.simplefilter("ignore") from path import path You can then reap the rewards like so: ## Directories # Call `dirname()` twice because we’re at `PROJECT_ROOT/settings/xxx.py` PROJECT_ROOT = path(__file__).abspath().dirname().dirname() SITE_ROOT = PROJECT_ROOT.dirname() MEDIA_ROOT = PROJECT_ROOT / 'media' ## Logging FILE_HANDLER = logging.FileHandler(SITE_ROOT / 'log' / 'django.log') ## Database Setup DATABASE_ENGINE = 'sqlite3' DATABASE_NAME = SITE_ROOT / 'db' / 'development.sqlite3' path.path is a subclass of the built-in unicode, so it can be used anywherea string or unicode object would work. This is the one big aspect I’m not settled on. I’ve run Django under Apache andmod_wsgi; I’ve also used lighttpd talking to Django via FastCGI over amultiplexed UNIX socket, managed by Supervisor. I’m eager to givenginx a whirl, too. I do know that Supervisor was a really awesome find. It’s essentially a toolwhich allows you to manage long-running processes and groups of processes, witha powerful Python extension mechanism, an XML-RPC API, and a command-line clientfor controlling processes. Configurations are written in a very basic INI-stylesyntax (actually, using Python’s ConfigParser), so they’re alot easier to set up than an init.d file. I’d definitely recommend itif you want to be able to stop, start and monitor your server with ease. Another fantasy of mine is to run Django completely under an asynchronous server. Recent technologies like Tornado, eventlet and gevent, as well as (not so recently) Twisted, have proven that using asynchronous networking can result in tremendous performance boosts. Unfortunately, since most database client libraries remain incompatible with async-I/O, that makes running Django asynchronously very difficult indeed. It would require either: Writing a new database engine backend around a pure-Python DB client library, such as MySQL Connector/Python; or Uninstalling all apps which use the Django ORM, and building a completely new suite of replacement apps which leverage async-I/O. Neither of these seems very palatable right now, so for now I’ll stick with some simple reverse-proxy-based load-balancing across a few processes, and hope the box stays up. Besides, if I ever need to use async-I/O for performance reasons, I probably won’t be using Django to handle requests. When you’ve finally finished setting up your project, it should look something like the following: SITE_ROOT |-- PROJECT_ROOT/ | |-- apps/ | |-- etc/ | |-- etcs/ | |-- libs/ | |-- media/ | |-- settings/ | | |-- __init__.py | | |-- common.py | | |-- development.py | | |-- production.py | | `-- staging.py | |-- templates/ | |-- README | |-- REQUIREMENTS | |-- __init__.py | `-- urls.py |-- bin/ |-- db/ |-- include/ |-- lib/ |-- pid/ |-- share/ |-- sock/ |-- tmp/ `-- uploads/ That’s pretty much it for now. As always, comments, suggestions and criticisms are appreciated.
Despite the many stories that I've heard about people having problems installing numpy, scipy, and matplotlib on Mac OS X Lion, I've never had any problem until today. I recently updated my system and attempted to install the latest versions of NumPy and SciPy. The NumPy installation went fine and the test ran as expected, however the scipy installation seems to be incomplete. Every time I try to import scipy.stats I get the following error: In [1]: import scipy.stats --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-b66176eb2d0a> in <module>() ----> 1 import scipy.stats /Library/Python/2.7/site-packages/scipy/stats/__init__.py in <module>() 326 """ 327 --> 328 from stats import * 329 from distributions import * 330 from rv import * /Library/Python/2.7/site-packages/scipy/stats/stats.py in <module>() 191 # Scipy imports. 192 from numpy import array, asarray, dot, ma, zeros, sum --> 193 import scipy.special as special 194 import scipy.linalg as linalg 195 import numpy as np /Library/Python/2.7/site-packages/scipy/special/__init__.py in <module>() 525 """ 526 --> 527 from _ufuncs import * 528 from _ufuncs_cxx import * 529 ImportError: dlopen(/Library/Python/2.7/site-packages/scipy/special/_ufuncs.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/scipy/special/_ufuncs.so: mach-o, but wrong architecture I am using the latest version of numpy and scipy off of github. For some reason it looks like the x86_64 version of the _ufuncs.so isn't being built. I've tried every compiler flag I can think of ARCHFLAGS="-arch i386 -arch x86_64" LDFLAGS="-arch i386 -arch x86_64" FFLAGS="-m64 -ff2c" and no mater what I do I get the same error. Any advice? UPDATESo I think that I've found the problem, I will follow-up on the scipy distribution list: Most of the libraries created when scipy builds are universal files meaning that they support both i386 and x86_64. The problem is that the files compiled with gfortran are compiled as i386 only. > find . -name *.so | xargs -I {} lipo -info {} Architectures in the fat file: ./build/lib.macosx-10.7-intel-2.7/scipy/cluster/_hierarchy_wrap.so are: i386 x86_64 Architectures in the fat file: ./build/lib.macosx-10.7-intel-2.7/scipy/cluster/_vq.so are: i386 x86_64 Non-fat file: ./build/lib.macosx-10.7-intel-2.7/scipy/fftpack/_fftpack.so is architecture: i386 I've checked my environment and I don't see anything suspicious. As specified on the SciPy Mac OS X page. I only export: CC=gcc-4.2 CXX=g++-4.2 FFLAGS=-ff2c I just did install on another system and everything worked just fine.
I am stuck trying to get a python based webserver to work. I want to do Basic Authentication (sending a 401 header) and authenticating against a list of users. I have no trouble sending the 401 response with "WWW-Authorize" header, I can validate the users response (base64 encoded username & password), however, the login box keeps popping up after successful validation. import SimpleHTTPServer import SocketServer from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer class Handler(BaseHTTPRequestHandler): ''' Main class to present webpages and authentication. ''' def do_HEAD(self): print "send header" self.send_response(401) self.send_header('WWW-Authenticate', 'Basic realm=\"Test\"') self.send_header('Content-type', 'text/html') self.end_headers() def do_GET(self): ''' Present frontpage with user authentication. ''' self.do_HEAD() if self.headers.getheader('Authorization') == None: self.wfile.write('no auth header received') pass elif self.headers.getheader('Authorization') == 'Basic dGVzdDp0ZXN0': self.wfile.write(self.headers.getheader('Authorization')) self.wfile.write('authenticated!') pass else: self.wfile.write(self.headers.getheader('Authorization')) self.wfile.write('not authenticated') pass httpd = SocketServer.TCPServer(("", 10001), Handler) httpd.serve_forever() if __name__ == '__main__': main() On first load (http://localhost:10001) the loginbox pops up, I enter test, test (the correct user) user is validated ok, but box pops back up, if I click cancel, I get to the validated page... Can anyone lend a hand here? I suspect it has something to do with the fact that authorization happens under do_GET, which is triggered everytime a page loads.
Package portage source code _get_stdin() Buggy code in python's multiprocessing/process.py closes sys.stdin and reassigns it to open(os.devnull), but fails to update the corresponding __stdin__ reference. source code _movefile(src, dest, **kwargs) Calls movefile and raises a PortageException if an error occurs. source code _native_string(s, encoding=u'utf_8', errors=u'backslashreplace') source code _shell_quote(s) Quote a string in double-quotes and use backslashes to escape any backslashes, double-quotes, dollar signs, or backquotes in the string. source code _unicode_decode(s, encoding=u'utf_8', errors=u'replace') source code _unicode_encode(s, encoding=u'utf_8', errors=u'backslashreplace') source code str abssymlink(symlink, target=None) This reads symlinks, resolving the relative symlinks, and returning the absolute. source code create_trees(config_root=None, target_root=None, trees=None, env=None, eprefix=None) source code getcwd() this fixes situations where the current directory doesn't exist source code VERSION = u'2.2.7' __package__ = 'portage' _bin_path = u'/var/tmp/portage/sys-apps/portage-2.2.7/work/por... _comment_or_blank_line = re.compile(r'^\s*(#.*)?$') _deprecated_eapis = frozenset([u'3_pre1', u'3_pre2', u'4_pre1'... _doebuild_manifest_exempt_depend = 0 _encodings = {u'content': u'utf_8', u'fs': u'utf_8', u'merge':... _internal_caller = False _legacy_global_var_names = (u'archlist', u'db', u'features', u... _legacy_globals_constructed = set(['archlist', 'db', 'features... _os_overrides = {4147133228: <built-in function system>, 41471... _pms_eapi_re = re.compile(r'^[ \t]*EAPI=([\'"]?)([A-Za-z0-9\+_... _pym_path = u'/var/tmp/portage/sys-apps/portage-2.2.7/work/por... _python_interpreter = u'/usr/bin/python2.7' _selinux = None hash(x) _selinux_merge = None hash(x) _shell_quote_re = re.compile(r'[\s><=\*\\"\'\$`]') _supported_eapis = frozenset(['0', '1', '2', '3', u'3_pre1', u... _sync_disabled_warnings = False _testing_eapis = frozenset([u'4-python', u'4-slot-abi', u'5-hd... _working_copy = False archlist = [u'ppc', u'~ppc', u'sparc64-freebsd', u'~sparc64-fr... auxdbkeylen = 22 auxdbkeys = (u'DEPEND', u'RDEPEND', u'SLOT', u'SRC_URI', u'RES... bold = <portage.output.create_color_func object at 0x8b0483c> bsd_chflags = None hash(x) db = {u'/': {u'virtuals': None, u'bintree': None, u'vartree': ... endversion = {u'alpha': -4, u'beta': -3, u'p': 0, u'pre': -2, ... endversion_keys = [u'pre', u'p', u'alpha', u'beta', u'rc'] features = <portage.package.ebuild._config.features_set.featur... groups = [u'x86'] lchown = portage._unicode_func_wrapper(lchown) mtimedb = {u'info': {u'/usr/share/gcc-data/i686-pc-linux-gnu/4... mtimedbfile = u'/var/cache/edb/mtimedb' ostype = 'Linux' pickle_write = None hash(x) pkglines = (u'*virtual/dev-manager', u'*virtual/libc', u'*sys-... portage_gid = 250 portage_uid = 250 portdb = <portage.dbapi.porttree.portdbapi object at 0x8f6888c> prelink_capable = 1 profiledir = u'/etc/portage/make.profile' root = u'/' secpass = 1 selinux = None hash(x) selinux_enabled = 0 settings = <portage.package.ebuild.config.config object at 0x8... thirdpartymirrors = {u'3dgamers': [u'ftp://ftp.planetmirror.co... uid = 250 userland = 'GNU' userpriv_groups = [250] wheelgid = 10 x = 4 Imports: elog, BASH_BINARY, CACHE_PATH, CONFIG_MEMORY_FILE, CUSTOM_MIRRORS_FILE, CUSTOM_PROFILE_PATH, CacheError, DEPCACHE_PATH, DEPRECATED_PROFILE_FILE, EAPI, EBUILD_SH_BINARY, EBUILD_SH_ENV_FILE, ExtractKernelVersion, FetchlistDict, INCREMENTALS, INVALID_ENV_FILE, LOCALE_DATA_PATH, MAKE_CONF_FILE, MAKE_DEFAULTS_FILE, MISC_SH_BINARY, MODULES_FILE_PATH, MOVE_BINARY, Manifest, MtimeDB, OrderedDict, PORTAGE_BASE_PATH, PORTAGE_BIN_PATH, PORTAGE_PYM_PATH, PRELINK_BINARY, PRIVATE_PATH, PROFILE_PATH, REPO_NAME_FILE, REPO_NAME_LOC, SANDBOX_BINARY, USER_CONFIG_PATH, USER_VIRTUALS_FILE, VDB_PATH, WORLD_FILE, _global_updates, _legacy_globals, _os, _os_merge, _sets, _shutil, apply_recursive_permissions, apply_secpass_permissions, atexit_register, atomic_ofstream, autouse, best, best_from_dict, best_match_to_list, binarytree, bindbapi, cache, cacheddir, catpkgsplit, catsplit, check_config_instance, checksum, close_portdbapi_caches, colorize, config, const, cpv_expand, cpv_getkey, cvstree, data, dbapi, dblink, debug, dep, dep_check, dep_eval, dep_expand, dep_getcpv, dep_getkey, dep_transform, dep_wordreduce, dep_zapdeps, deprecated_profile_check, digestcheck, digestgen, digraph, doebuild, doebuild_environment, dump_traceback, eapi, eclass_cache, emaint, env, env_update, errno, exception, fakedbapi, fetch, fixdbentries, flatten, getCPFromCPV, get_operator, getbinpkg, getconfig, getmaskingreason, getmaskingstatus, grab_updates, grabdict, grabdict_package, grabfile, grabfile_package, isjustname, isspecific, isvalidatom, listdir, localization, lockdir, lockfile, locks, mail, manifest, map_dictlist_vals, match_from_list, match_to_list, merge, movefile, new_protect_filename, news, normalize_path, os, output, package, parse_updates, perform_checksum, perform_md5, pickle_read, pkgcmp, pkgsplit, platform, portagetree, portdbapi, prepare_build_dirs, process, proxy, re, repository, run_exitfuncs, shutil, spawn, spawnebuild, stack_dictlist, stack_dicts, stack_lists, subprocess, sys, time, types, unique_array, unlockdir, unlockfile, unmerge, update, update_config_files, update_dbentries, update_dbentry, util, vardbapi, varexpand, vartree, vercmp, versions, ververify, write_atomic, writedict, writemsg, writemsg_stdout, xml, xpak This deletes the ObjectProxy instances that are used for lazy initialization of legacy global variables. The purpose of deleting them is to prevent new code from referencing these deprecated variables. Buggy code in python's multiprocessing/process.py closes sys.stdin and reassigns it to open(os.devnull), but fails to update the corresponding __stdin__ reference. So, detect that case and handle it appropriately. This reads symlinks, resolving the relative symlinks, and returning the absolute. Parameters: symlink - path of symlink (must be absolute) target - the target of the symlink (as returned by readlink) Returns: str the absolute path of the symlink target _bin_path Value: u'/var/tmp/portage/sys-apps/portage-2.2.7/work/portage-2.2.7/bin' _deprecated_eapis Value: frozenset([u'3_pre1', u'3_pre2', u'4_pre1', u'5_pre1', u'5_pre2']) _encodings Value: {u'content': u'utf_8', u'fs': u'utf_8', u'merge': u'utf_8', u'repo.content': u'utf_8', u'stdio': u'utf_8'} _legacy_global_var_names Value: (u'archlist', u'db', u'features', u'groups', u'mtimedb', u'mtimedbfile', u'pkglines', u'portdb', ... _legacy_globals_constructed Value: set(['archlist', 'db', 'features', 'groups', 'mtimedb', 'mtimedbfile', 'pkglines', u'portdb', ... _os_overrides Value: {4147133228: <built-in function system>, 4147138732: <built-in function popen>, 4147140268: <built-in function read>, 4147140396: <built-in function fdopen>, 4147140556: <built-in function mkfifo>, 4147141548: <built-in function statvfs>} _pms_eapi_re Value: re.compile(r'^[ \t]*EAPI=([\'"]?)([A-Za-z0-9\+_\.-]*)\1[ \t]*([ \t]#.* )?$') _pym_path Value: u'/var/tmp/portage/sys-apps/portage-2.2.7/work/portage-2.2.7/pym' _supported_eapis Value: frozenset(['0', '1', '2', '3', u'3_pre1', u'3_pre2', '4', u'4-python', ... _testing_eapis Value: frozenset([u'4-python', u'4-slot-abi', u'5-hdepend', u'5-progress']) archlist Value: [u'ppc', u'~ppc', u'sparc64-freebsd', u'~sparc64-freebsd', u'ppc-openbsd', u'~ppc-openbsd', u'x86-openbsd', u'~x86-openbsd', ... auxdbkeys Value: (u'DEPEND', u'RDEPEND', u'SLOT', u'SRC_URI', u'RESTRICT', u'HOMEPAGE', u'LICENSE', u'DESCRIPTION', ... db Value: {u'/': {u'virtuals': None, u'bintree': None, u'vartree': <portage.dbap i.vartree.vartree object at 0x8750dcc>, u'porttree': <portage.dbapi.po rttree.portagetree object at 0x8f7660c>}} endversion Value: {u'alpha': -4, u'beta': -3, u'p': 0, u'pre': -2, u'rc': -1} features Value: <portage.package.ebuild._config.features_set.features_set object at 0x 8bcdfac> mtimedb Value: {u'info': {u'/usr/share/gcc-data/i686-pc-linux-gnu/4.6.3/info': 137833 0030, u'/usr/share/info': 1377482245, u'/usr/share/binutils-data/i686- pc-linux-gnu/2.22/info': 1369741372, u'/usr/share/binutils-data/i686-p c-linux-gnu/2.23.1/info': 1375588757}, u'resume_backup': {u'myopts': { u'--complete-graph-if-new-use': u'n', u'--jobs': 4, u'--verbose': True , u'--autounmask-unrestricted-atoms': True, u'--load-average': 4.0, u' --oneshot': True, u'--package-moves': u'n', u'--ask-enter-invalid': Tr ue, u'--complete-graph-if-new-ver': u'n', u'--with-bdeps': u'y', u'--n ... pkglines Value: (u'*virtual/dev-manager', u'*virtual/libc', u'*sys-apps/less', u'*app-shells/bash', u'*sys-apps/texinfo', u'*sys-devel/make', u'*virtual/service-manager', u'*sys-apps/sed', ... settings Value: <portage.package.ebuild.config.config object at 0x89568ac> thirdpartymirrors Value: {u'3dgamers': [u'ftp://ftp.planetmirror.com/pub/3dgamers/games/'], u'alsaproject': [u'ftp://ftp.alsa-project.org/pub', u'ftp://ftp.task.gda.pl/pub/linux/misc/alsa/', u'ftp://gd.tuwien.ac.at/opsys/linux/alsa/', u'ftp://ftp.iasi.roedu.net/pub/mirrors/ftp.alsa-proj ect.org/', u'http://dl.ambiweb.de/mirrors/ftp.alsa-project.org/ ', ...
FelixP [Résolu] ! Script pour noms de musique Salut à tous ! J'ai une petite question à vous poser… (Eh oui !) Je cherche un script pour me créer un fichier avec la liste des noms des musiques qui sont dans un dossier donné, avec la syntaxe de Wikipédia (histoire de remplir ses serveurs de données !) en sachant que les noms de mes fichiers sont du type {piste} {Titre à Ressortir}.{mp3, flac} Je mets la syntaxe que j'aimerais récupérer : {{Album |Titre = [[Captain Morgan's Revenge]] |Année = 2008 |Label = [[Napalm Records]] |Contenu = #Over the Seas #Captain Morgan's Revenge #The Huntmaster #Nancy the Tavern Wench #Death Before the Mast #Terror on the High Seas #Set Sail and Conquer #Of Treasure #Wenches and Mead #Flower of Scotland }} ou plus simplement : {{Album|Titre=[[When Dream and Day Unite]]|Année=1989|Contenu= # A Fortune In Lies (5:12) # Status Seeker (4:18) # The Ytse Jam (5:43) # The Killing Hand (8:42) ## The Observance ## Ancient Renewal ## The Stray Seed ## Thorus ## Exodus # Light Fuse And Get Away (7:23) # Afterlife (5:27) # The Ones Who Help To Set The Sun (3:07) # Only A Matter Of Time (6:38) }} Et même (mais je n'en demande pas tant) si vous savez comment récupérer direct la durée d'une musique… Je suis preneur !!! Dernière modification par FelixP (Le 09/10/2012, à 22:55) Hors ligne nesthib Re : [Résolu] ! Script pour noms de musique Je n'ai pas de solution toute faite à te proposer, mais je te suggère d'utiliser python. Tu as de nombreux modules pour manipuler les métadonnées audio. Par exemple mutagen. Regarde les exemples donnés, tu as entre autres comment récupérer la durée de ta musique. L'avantage d'utiliser cette approche est que tu as aussi des modules pour manipuler wikipédia. Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Okay, merci des conseils je vois ça de suite ! Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique hum, je suis un peu novice… -_- Porrais-tu me guider ? J'ai déjà mis tous les p'tis fichiers *.py dans un dossier mutagen… Hors ligne nesthib Re : [Résolu] ! Script pour noms de musique Alors, je ne sais pas de quel fichiers tu parles. Pour installer mutagen : sudo apt-get install python-mutagen ensuite il te faut créer un script python : #!/usr/bin/env python #coding: utf-8 from mutagen.mp3 import MP3 audio = MP3("ton_morceau.mp3") print("le morceau fait %s secondes " % audio.info.length) lis le tutoriel mutagen. Ensuite, pour parcourir tes dossiers, tu peux utiliser le module os. Ensuite tu pourras lire ton arborescence, extraire les tags de chaque morceau et à la fin, faire un affichage selon ta syntaxe (ça ne sera pas le plus dur). Essaie d'écrire quelque chose et poste-le ici. Dernière modification par nesthib (Le 08/10/2012, à 20:41) Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Au temps pour moi ! j'allais l'installer des sources disponibles sur le lien que tu m'avair fournis… Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Alors là je lis le tuto… et j'essaye quelques exemples ! Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique pour le module os… j'ai essayé from os import environ os.path("/home/felix/Musique/ACDC/Live\ 1992") mais cela ne fonctionne pas… peux-tu m'aider ? Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique avec chdir non plus… Hors ligne pingouinux Re : [Résolu] ! Script pour noms de musique Bonsoir FelixP, Ce serait plus facile si tu donnais les messages d'erreur. C'est plutôt : import os os.chdir("/.......") Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique désolé, j'y penserai C'est bon, ça fonctionne comme ça ! Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique J'essaye de mettre plusieurs arguments dans mon affichage… print("le morceau fait %s minutes %s" % int(audio.info.length/60) % int(int(audio.info.length)-60*int(audio.info.length/60)) ) il me ressort print("le morceau fait %s minutes %s" % int(audio.info.length/60) % int(int(audio.info.length)-60*int(audio.info.length/60)) ) TypeError: not enough arguments for format string Comment puis-je ressortir plusieurs nombres ? Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Hum, je pense que c'est un peu trop compliqué… Je pensais faire juste un script bash utilisant les noms de fichiers ^^ Hors ligne pingouinux Re : [Résolu] ! Script pour noms de musique print("le morceau fait %s minutes %s"%(divmod(audio.info.length,60)) Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Ah, ça fonctionne en effet Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Savez-vous comment faire une boucle dans le dossier histoire de scanner tous les fichiers ? Hors ligne pingouinux Re : [Résolu] ! Script pour noms de musique Peux-tu montrer le script que tu as déjà écrit ? Ce sera plus facile à adapter. Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique #!/usr/bin/env python #coding: utf-8 from mutagen.mp3 import MP3 import os os.chdir("/home/felix/Musique/ACDC/Live 1992") audio = MP3("01 Thunderstruck.mp3") print("(%s:%s)"%(divmod(int(audio.info.length),60))) j'essaye d'extraire le nom de la musique avec print("%c"%audio.info.title) mais je reçois Traceback (most recent call last): File "sans titre.py", line 8, in <module> print("%c"%audio.info.title) AttributeError: 'MPEGInfo' object has no attribute 'title' Hors ligne pingouinux Re : [Résolu] ! Script pour noms de musique Je n'ai pas testé. Pour le titre, je ne sais pas pour le moment. #!/usr/bin/env python #coding: utf-8 from mutagen.mp3 import MP3 import os, glob os.chdir("/home/felix/Musique/ACDC/Live 1992") for fic in glob.glob("*.mp3") : audio=MP3(fic) print("(%s:%s)"%(divmod(int(audio.info.length),60))) Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique On peut récupérer le titre avec le nom de fichier simplement ? Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique En tout cas ceci fonctionne très bien ! Hors ligne pingouinux Re : [Résolu] ! Script pour noms de musique Essayer : print("%s"%audio['title']) Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique ça ne fonctionne pas print("%s"%audio['title']) File "/usr/lib/python2.7/site-packages/mutagen/__init__.py", line 84, in __getitem__ else: return self.tags[key] File "/usr/lib/python2.7/site-packages/mutagen/_util.py", line 108, in __getitem__ return self.__dict[key] KeyError: 'title' Hors ligne nesthib Re : [Résolu] ! Script pour noms de musique Savez-vous comment faire une boucle dans le dossier histoire de scanner tous les fichiers ? tu peux utiliser os.walk import os for path, dirs, files in os.walk('/ton/chemin'): print("Nous sommes dans %s" % path) for file in files: print("Voici le fichier %s/%s" % (path, file)) pour un fichier donné tu peux faire : audio=MP3(os.path.join(path,file)) print('Album : %s\nAnnée : %s\nTitre : %s' % (audio['TALB'], audio['TDRC'], audio['TIT2'])) Hors ligne FelixP Re : [Résolu] ! Script pour noms de musique Génial Mais alors j'essaye d'afficher le titre et la durée en même temps, print(" %s (%s:%s)" % (audio['TIT2'], divmod(int(audio.info.length),60))) Mais cela ne fonctionne pas ! (Il faut que je me mette au python, moi…) print(" %s (%s:%s)" % (audio['TIT2'], divmod(int(audio.info.length),60))) TypeError: not enough arguments for format string De plus, je veux n'afficher nom de l'album et date qu'une seule fois, alors comment puis-je dire à python d'usiliser, par exemple, le premier fichier du dossier ? Et par contre, les fichiers ne sont pas scannés dans l'ordralphabétix… XD Savez-vous comment faire ? Désolé pour tout ce que mon inculture de python vous inflige… Hors ligne
You can perform local assignments as a side effect of list comprehensions in Python 2. import sys say_hello = lambda: [ [None for message in ["Hello world"]], sys.stdout.write(message + "\n") ][-1] say_hello() However, it's not possible to use this in your example because your variable flag is in an outer scope, not the lambda's scope. This doesn't have to do with lambda, it's the general behaviour in Python 2. Python 3 lets you get around this with the nonlocal keyword inside of defs, but nonlocal can't be used inside lambdas and Python 3 removes this side effect of list comprehensions, so this isn't possible in Python 3. There's a workaround (see below), but while we're on the topic... In some cases you can use this to do everything inside of a lambda: (lambda: [ ['def' for sys in [__import__('sys')] for math in [__import__('math')] for sub in [lambda *vals: None] for fun in [lambda *vals: vals[-1]] for echo in [lambda *vals: sub( sys.stdout.write(u" ".join(map(unicode, vals)) + u"\n"))] for Cylinder in [type('Cylinder', (object,), dict( __init__ = lambda self, radius, height: sub( setattr(self, 'radius', radius), setattr(self, 'height', height)), volume = property(lambda self: fun( ['def' for top_area in [math.pi * self.radius ** 2]], self.height * top_area))))] for main in [lambda: sub( ['loop' for factor in [1, 2, 3] if sub( ['def' for my_radius, my_height in [[10 * factor, 20 * factor]] for my_cylinder in [Cylinder(my_radius, my_height)]], echo(u"A cylinder with a radius of %.1fcm and a height " u"of %.1fcm has a volume of %.1fcm³." % (my_radius, my_height, my_cylinder.volume)))])]], main()])() A cylinder with a radius of 10.0cm and a height of 20.0cm has a volume of 6283.2cm³. A cylinder with a radius of 20.0cm and a height of 40.0cm has a volume of 50265.5cm³. A cylinder with a radius of 30.0cm and a height of 60.0cm has a volume of 169646.0cm³. Please don't. ...back to your original example: though you can't perform assignments to the flag variable in the outer scope, you can use functions to modify the previously-assigned value. For example, flag could be an object whose .value we set using setattr: flag = Object(value=True) input = [Object(name=''), Object(name='fake_name'), Object(name='')] output = filter(lambda o: [ flag.value or bool(o.name), setattr(flag, 'value', flag.value and bool(o.name)) ][0], input) [Object(name=''), Object(name='fake_name')] If we wanted to fit the above theme, we could use a list comprehension instead of setattr: [None for flag.value in [bool(o.name)]] But really, in serious code you should always use a regular function definition instead of a lambda if you're going to be doing assignment. flag = Object(value=True) def not_empty_except_first(o): result = flag.value or bool(o.name) flag.value = flag.value and bool(o.name) return result input = [Object(name=""), Object(name="fake_name"), Object(name="")] output = filter(not_empty_except_first, input)
I have a plot in a tkinter.Toplevel window and I would like to have it use a custom spaced x-axis. Currently it is set as default so the numbers are evenly spaced (0,5,10,15... etc.). I would like to have a tick mark for each data point I have (example 2,6,15,30). I am sure it is just some key word under some header, but I cannot figure out which one it is. Below is an example of the code that it would be going into. import matplotlib matplotlib.use('TkAgg') from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg from matplotlib.backends.backend_tkagg import NavigationToolbar2TkAgg from matplotlib.figure import Figure import Tkinter as tk root = tk.Toplevel(width=2000) f = Figure() ax = f.add_subplot(111) zeroy = [0,25] zerox = [0, 35] p3 = ax.plot(zerox, zeroy, 'k-') canvas = FigureCanvasTkAgg(f, master=root) canvas.show() canvas.get_tk_widget().grid(row=0) toolbar = NavigationToolbar2TkAgg(canvas, root) toolbar.grid(row=1, sticky=tk.W) toolbar.update() button = tk.Button(root, text='Quit', command = root.destroy) button.grid(row=2) root.mainloop() I have tried to use ax.xticks(...) and f.add_axes(xticks=[1,2,5,7] but these didn't work. I have also tried a variety of other weak attempts without success. Any help would be appreciated. Thanks.
Consider the following code example (python 2.7): class Parent: def __init__(self, child): self.child = child def __getattr__(self, attr): print("Calling __getattr__: "+attr) if hasattr(self.child, attr): return getattr(self.child, attr) else: raise AttributeError(attr) class Child: def make_statement(self, age=10): print("I am an instance of Child with age "+str(age)) kid = Child() person = Parent(kid) kid.make_statement(5) person.make_statement(20) it can be shown, that the function call person.make_statement(20) calls the Child.make_statement function through the Parent's __getattr__ function. In the __getattr__ function I can print out the attribute, before the corresponding function in the child instance is called. So far so clear. But how is the argument of the call person.make_statement(20) passed through __getattr__? How am I able to print out the number '20' in my __getattr__ function?
tiramiseb Re : configuration des DNS S'il-te-plait, pour des citations utilise la balise "[ quote ]" et non la balise "[ code ]", c'est pénible de devoir défiler horizontalement pour lire les phrases que tu cites... Hors ligne kr2sis Re : configuration des DNS ça y est je sui perdu...:o est ce qu'on peut faire doucement stp ? - doubitchou est non révélé au publique... ok - dans mon fichier /etc/hosts, j'ai ajouté une ligne : [IP publique serveur] doubitchou.fr....ok - pas besoin de créer un fichier de zone chez Gandi...ok je suis vraiment navré seb et je ne remets pas en quetsion tes explications que j'ai réussi à suivre jusqu'à symposion.fr, mais là pour moi tout change dans le cadre d'un nouveau virtual host et je ne sais pas comment on peut enregistrer un autre site .. le dernier encart est whois oki mais n'y a-t-il pas un formulaire ou un fichier qui fasse que doubitchou réponde présent ? apres modif du fichier /etc/hosts, j'a eu ce retour de commande : PING doubitchou.fr (84.101.105.63) 56(84) bytes of data. 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=1 ttl=64 time=0.607 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=2 ttl=64 time=0.508 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=3 ttl=64 time=0.516 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=4 ttl=64 time=0.512 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=5 ttl=64 time=0.531 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=6 ttl=64 time=0.742 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=7 ttl=64 time=0.559 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=8 ttl=64 time=0.591 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=9 ttl=64 time=0.529 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=10 ttl=64 time=0.682 ms 64 bytes from doubitchou.fr (84.101.105.63): icmp_req=11 ttl=64 time=0.598 ms ^C --- doubitchou.fr ping statistics --- 11 packets transmitted, 11 received, 0% packet loss, time 9998ms rtt min/avg/max/mdev = 0.508/0.579/0.742/0.076 ms Hors ligne tiramiseb Re : configuration des DNS Tu as "acheté" symposion.fr auprès d'un registrar, Gandi en l'occurrence. Si tu veux que doubitchou.fr soit disponible pour tout le monde sur Internet, tu dois "acheter" doubitchou.fr de la même manière. Si tu ne veux desservir doubitchou.fr que pour toi, pour faire des tests par exemple, tu peux : - soit te contenter de modifier le fichier /etc/hosts de ton poste client afin de lui dire de pointer vers l'adresse de ton serveur ; - soit mettre en place un serveur DNS local, que ton poste client interrogerait pour obtenir les résolutions DNS qu'il demande. Hors ligne tiramiseb Re : configuration des DNS Tu as tout fait avec succès pour symposion.fr, je ne vois pas ce que tu ne comprends pas quand on remplace symposion.fr par doubitchou.fr... Le fonctionnement est le même. Hors ligne kr2sis Re : configuration des DNS no bah du coup, j'avai compris alors... mais pourquoi je lis partotu qu'il est possible d'avoir plusieurs sites sur un meme serveur et qu'on ne parle pas d'achat de nom de domaine ? je me suis pris la tete comme un barge pour rien puisque comme tu dis j'avais tout fait et le succes je te le laisse car sans toi, j'aurais sombré dans une mélancolie douce et stupide, voire une dépression et tentative de suicide avec horizo et chocolat donc je ne m'obstine plus a chercher comment aoir un autre nom de domaine, je l'achete ou je passe en sous-domaine...c'est ça ? en totu cas un grand mercipour toute l'aide qur tu as pu m'apporter Hors ligne Brunod Re : configuration des DNS ..., je l'achete ou je passe en sous-domaine...c'est ça ? ... Ou tu le fais passer gratuitement par noip, dyndns etc en prenant un suffixe de chez eux. Wanted : >>> un emploi dans la sécurité informatique et réseau <<< Windows est un système d'exploitation de l'homme par l'ordinateur. Linux, c'est le contraire ... --> état de la conversion : 36 pc linux Hors ligne kr2sis Re : configuration des DNS beh ca m'interesse pas vraiment étant donné que le but pour moi d'avoir créé un serveur web, c'etait d'etre en total autonomie et liberté.... donc les nom à rallonge, perso je trouve ca laid, donc j'aviserai quand je pourrait accéder a ce site déjà Hors ligne tiramiseb Re : configuration des DNS mais pourquoi je lis partotu qu'il est possible d'avoir plusieurs sites sur un meme serveur et qu'on ne parle pas d'achat de nom de domaine ? L'achat et la configuration de noms de domaine, c'est une autre question, ce n'est pas nécessairement lié, c'est pourquoi les documentations n'en traitent pas. je l'achete ou je passe en sous-domaine...c'est ça ? Exact. Hors ligne kr2sis Re : configuration des DNS bon, nom de domaine acheté le 26 février et toujours pas visible en public... il se passe quelque chose qu eje ne coprends pas... Hors ligne tiramiseb Re : configuration des DNS Tu parles du domaine symposion.fr ? Chez moi, ce domaine se résoud correctement de la manière suivante : $ host symposion.frsymposion.fr has address 84.101.105.63symposion.fr mail is handled by 50 fb.mail.gandi.net.symposion.fr mail is handled by 10 spool.mail.gandi.net. Par contre, il ne semble pas y avoir de sous-domaine "www" dans ce domaine : $ host www.symposion.fr Host www.symposion.fr not found: 3(NXDOMAIN) Par ailleurs, l'adresse pointé par "symposion.fr" (84.101.105.63, qui est une machine chez SFR (reverse DNS "63.105.101.84.rev.sfr.net")) ne semble desservir aucun service (j'ai testé les connexions suivantes sur cette adresse : HTTP, HTTPS, SSH, SMTP, IMAP, POP). Hors ligne kr2sis Re : configuration des DNS le truc c'est que le logiciel (DUC) de no-ip m'indique cet ip :86.71.56.58 et c'est l'autre ip que j'ai noté dans la feuille de zone chez moi rien ne passe (via le local et l'externe (tel portable orange) et je ne comprends pas ce que tu veux dire dans la dernière partie de ton dernier post ... cela signifie quoi stp ? et u'est ce que je dois faire pour résoudre le probleme si il y a un probleme... merci losque je fais ifconfig ou ipconfig, j'obtiens l'ip 192.168.1.54 ; adresse que j'ai statifié dans la box. est ce que je peux mettre cet ip à la place de celle que me révèle no-ip ?? je ne comprends plus rien avec ces ip et le role de no-ip... des dynDNS ok ! mais si ca change tout le temps, il faut que je change mes config tout le temps....:mad: pas cool Dernière modification par kr2sis (Le 01/03/2013, à 13:39) Hors ligne tiramiseb Re : configuration des DNS Reprenons du début : 1/ où se trouve, physiquement, ton serveur ? 2/ comment as-tu configuré les entrées DNS dans l'interface de Gandi ? Hors ligne kr2sis Re : configuration des DNS le serveur est chez moi la config Gandi, je vais voir ca de suite j'ai touché à 2 ligne @ A 84.101.105.63 www CNAME share-wood.zapto.org share..... et l'ip sont issue de mon inscription no-ip Dernière modification par kr2sis (Le 01/03/2013, à 13:44) Hors ligne tiramiseb Re : configuration des DNS le serveur est chez moi Tu es chez quel opérateur ? Avec les adresses que tu donnes, il semble que tu es chez SFR, probablement avec une Neufbox, c'est ça ? Hors ligne kr2sis Re : configuration des DNS oui je suis chez SFR chez moi et neufbox NB4 Hors ligne tiramiseb Re : configuration des DNS Donc tu as un problème. Une adresse IP chez SFR n'est pas fixe : SFR ne fournit que des adresses IP dynamiques. En pratique il y a de fortes chances que ton adresse IP ne change pas souvent, mais ça peut arriver ; c'est peut-être pour ça qu'actuellemnt tu as des problèmes. Dans ces cas, on préconise un système de type DynDNS. Heureusement, des gars voulaient quelque chose avec Gandi et Gandi propose des fonctionnalités avancées intéressantes, en l'occurrence une API qui permet de faire reconfigurer ton entrée DNS automatiquement. Tu pourras donc trouver à l'adresse suivante un outil permettant de gérer ton DNS de manière dynamique avec Gandi : https://github.com/Chralu/gandyn Par contre je ne pourrai pas t'aider dans la mise en œuvre de cet outil, je ne le connais pas. Hors ligne kr2sis Re : configuration des DNS et no-ip dans tout ca ? Hors ligne tiramiseb Re : configuration des DNS Il semble que ton truc chez No-IP est aussi une solution (plus tarabiscotée) pour ce problème. Je ne connais pas ce service. Dans ce cas, ton problème est peut-être plutôt là : as-tu bien activé ta dernière configuration dans l'interface de Gandi ? Dernière modification par tiramiseb (Le 01/03/2013, à 13:51) Hors ligne tiramiseb Re : configuration des DNS Par ailleurs, entre ces deux solutions je préconiserais fortement d'utiliser Gandyn, qui est plus direct que No-IP car il travaille directement avec les serveurs de Gandi. Hors ligne kr2sis Re : configuration des DNS oki donc faut que je vois avec le service client qui ne me repond pas... je vais essayé de leur telephoner dans ce cas.. donc ce que tu dis pour sfr est valable pour tous ceux qui sont abos chez eux ?? bon à savoir si on me demande.. merci à toi le reste de ma config, je n'y touche pas alors ? (mis à part le fichier de zone une fois le tout installé correctement) Hors ligne tiramiseb Re : configuration des DNS oki donc faut que je vois avec le service client qui ne me repond pas... je vais essayé de leur telephoner dans ce cas.. Tu parles de Gandi ? Si tu as des difficultés à utiliser leurs outils, oui il faut leur demander... Cependant : Tu n'as pas répondu à ma question, tu ne m'as pas dit si tu as bien activé la dernière version de la configuration de tes DNS... donc ce que tu dis pour sfr est valable pour tous ceux qui sont abos chez eux ?? bon à savoir si on me demande.. C'est valable pour tous ceux qui sont chez SFR et chez n'importe quel autre FAI qui n'assure pas des adresses IP fixes. Ceux qui fournissent des IP fixes à ma connaissance en France sont (la liste n'est certainement pas exhaustive) : - Free Telecom - Bouygues - Orange Pro, sur demande - Nerim - OVH Ceux qui ne fournissent PAS d'IP fixe (la liste n'est pas exhaustive) : - Orange (sauf pro) - SFR - Numericable - certainement plein d'autres encore le reste de ma config, je n'y touche pas alors ? (mis à part le fichier de zone une fois le tout installé correctement) Dans la mesure où http://86.71.56.58/ semble bien marcher, en effet ta config je ne pense pas qu'il faille y toucher. Dernière modification par tiramiseb (Le 01/03/2013, à 14:01) Hors ligne kr2sis Re : configuration des DNS Tu n'as pas répondu à ma question, tu ne m'as pas dit si tu as bien activé la dernière version de la configuration de tes DNS... tu veux dire entrer la nouvelle IP ? je vais le faire de ce pas dis-moi question surement bete : ne serait-il pas judicieux de ma part de ne pas signaler d'ip (vu qu'elle change) mais plutot seulement le nom de domaine fourni par no-ip (qui corrrespond à chaque ip dynamique dans mon fichier de zone ?? Dernière modification par kr2sis (Le 01/03/2013, à 14:14) Hors ligne tiramiseb Re : configuration des DNS Tu n'as pas répondu à ma question, tu ne m'as pas dit si tu as bien activé la dernière version de la configuration de tes DNS... tu veux dire entrer la nouvelle IP ? je vais le faire de ce pas Je veux dire : as-tu bien cliqué sur le bouton "Activer" dans l'écran de configuration ? Et as-tu bien attendu 48h après avoir cliqué sur ce bouton ? ne serait-il pas judicieux de ma part de ne pas signaler d'ip (vu qu'elle change) mais plutot seulement le nom de domaine fourni par no-ip (qui corrrespond à chaque ip dynamique dans mon fichier de zone ?? Oui, dans le cas du service fourni par no-IP c'est ce qu'il faut faire. Si tu utilises Gandyn, c'est l'outil qui va lui-même automatiquement configurer Gandi à chaque changement d'adresse IP. Hors ligne kr2sis Re : configuration des DNS Je veux dire : as-tu bien cliqué sur le bouton "Activer" dans l'écran de configuration ? dans l'interface, c'est la ligne qui dit "Numéro de version active" ? si oui c'est actif oui (dsl je suis perdu dans cet interface ) Hors ligne kr2sis Re : configuration des DNS je ne vois plus le site de l'exterieur .. tu le vois toi ? stp Hors ligne
import os import sys import time import base64 import hmac import mimetypes import urllib2 from hashlib import sha1 from poster.streaminghttp import register_openers def read_data(file_object): while True: r = file_object.read(1 * 1024) print 'rrr',r if not r: print 'r' file_object.close() break yield r def upload_file(filename, bucket): print 'start' length = os.stat(filename).st_size content_type = mimetypes.guess_type(filename)[0] date = time.strftime("%a, %d %b %Y %X GMT", time.gmtime()) print 'before' register_openers() print 'after' input_file = open(filename, 'r') print 'read mode' data = read_data(input_file) request = urllib2.Request(bucket, data=data) request.add_header('Date', date) request.add_header('Content-Type', content_type) request.add_header('Content-Length', length) request.get_method = lambda: 'PUT' print 'before lamda' urllib2.urlopen(request).read() upload_file('C:\\test.pdf', "http://10.105.158.132:26938/DocLib1/ste.pdf") the above code is for streaming and uploading the data. streaming is performing fine. While uploading, code hangs in following code urllib2.urlopen(request).read()
luc765 Re : Synthèse vocale SVOX Pico Merci à frafra et tuxmuraille, pour votre travail. J'utilisais espeak avec mbrola, la qualité de SVOX est nettement meilleure J'ai installé gSpeech sur Lucid via la compilation pour SVOX. Tout est parfait. Dans l'utilisation est-il possible de modifier la vitesse de lecture par un paramètre ?? Hors ligne frafa Re : Synthèse vocale SVOX Pico Bonjour, pas prévu dans SVOX,.. pico2wave -? Usage: pico2wave <words> -w, --wave=filename Write output to this WAV file -l, --lang=lang Language (default: "en-US") Help options: -?, --help Show this help message --usage Display brief usage message Hors ligne frafa Re : Synthèse vocale SVOX Pico si tu est courrageux il traine un patch ... http://www.rockbox.org/tracker/11803 pas testé de mon coté Hors ligne luc765 Re : Synthèse vocale SVOX Pico Bonsoir tuxmuraille et Frafa, Je rentre dans le détail mais le dictionnaire phonétique inclus dans gSpeech n'est pas pris en compte chez moi. Y-a-t'il une astuce qui m'échappe ?? luc765 Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonjour luc765, Tu peux télécharger ici. Le dictionnaire phonétique doit être mis dans le dossier ~/.config/gSpeech avec le fichier de configuration gspeech.conf. Ce fichier a deux options. Par défaut gSpeech utilise la langue de l'environnement graphique pour la synthèse, et l'AppIndicator (la nouvelle zone de notification de Ubuntu) si elle est installée dans l'environnement graphique. - useappindicator = 1 ne change rien au comportement par défaut, 0 force l'usage de la zone de notification - defaultlanguage = force l'usage d'une langue de synthèse: de-de, fr-FR, etc... Merci pour tes retours. Par d Hors ligne luc765 Re : Synthèse vocale SVOX Pico Merci tuxmuraille et Frafa pour le suivi. Concernant le dictionnaire phonétique complémentaire inclus dans gSpeech je ne suis pas parvenu à activer sa prise en compte. Ceci est probablement lié au fait que je suis un petit belge (sans gouvernement) et donc la langue choisie pour le système Ubunru est fr-BE (echo $LANG donne fr_BE.utf8). fr_BE. n'étant pas pris en compte dans le script gSpeech.py j'ai changé, peut-être un peu brutalement, la ligne 401 du script.py de DefaultLang = "en-US" en DefaultLang = "fr-FR" A part cela tout est nickel. Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonsoir, gSpeech ne prend pas en compte fr-BE, car le synthétiseur SVOX Pico ne le supporte pas. Je pense ajouter le support pour d'autre synthétiseur vocaux mais pas pour le moment. Normalement gSpeech doit se caler sur en-US en l’absence de fr-BE, ce que tu doit faire c'est configurer l'option "defaultlanguage = fr-FR" dans le fichier ~/.config/gSpeech/gSpeech.conf et mettre ton dictionnaire dans le même dossier. Hors ligne ssm2017 Re : Synthèse vocale SVOX Pico salut merci pour toutes ces infos. le script fonctionne mais je n'arrive pas à trouver comment changer de la voix feminine vers la masculine. une idee ? Hors ligne cataclop Re : Synthèse vocale SVOX Pico Bonjour, Je commence par remercier pour les explications données dans ce fuseau (qui m'ont été fort utiles) et par m'excuser pour cet abominable "remontage". Je me suis cependant heurté à une limitation de pico2wave (apparemment) : il semble ne pas admettre de texte de plus de 2¹⁵ caractères (soit 32768). C'est beaucoup pour des synthèses ponctuelles, mais outrepasser cette limitation me permettrait d'enregistrer des textes longs pour les écouter sans ordinateur. Au-delà de cette limite, j'obtiens le message suivant : Cannot put Text (-102): invalid argument supplied Merci d'avance à ceux qui pourraient m'indiquer une solution. HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne KER747 Re : Synthèse vocale SVOX Pico Bonjour, Je cherche un synthétiseur vocal qui marche bien avec une voix comparable à celle d'accapella: http://www.acapela-group.fr/text-to-spe … -demo.html J'ai essayé gspeaker mais la voix n'est pas terrible. Je pense à essayer pico svox mais l'intallation semble compliqué et ce topic date apparemment. Comment l'installer et est-ce le bon à installer? Ma première page: www.aerozone.tk Hors ligne cataclop Re : Synthèse vocale SVOX Pico Si quelqu'un se heurtait au même problème que moi : j'ai trouvé un contournement en modifiant gSpeech.py. Voici le patch (il faut, pour que cela fonctionne, installer sox) : --- gSpeech2.py 2013-02-26 16:17:57.633682124 +0100 +++ gSpeech.py 2013-02-26 17:13:35.469609167 +0100 @@ -248,14 +248,14 @@ pynotify.Notification(APPNAME, _(u"I'm reading the text. One moment please."), self.icon).show() if widget.get_label() == _(u"From X.org") : - str = gtk.clipboard_get(selection="PRIMARY").wait_for_text() + texte = gtk.clipboard_get(selection="PRIMARY").wait_for_text() else : - str = gtk.clipboard_get(selection="CLIPBOARD").wait_for_text() + texte = gtk.clipboard_get(selection="CLIPBOARD").wait_for_text() #~ str = str.lower() - str = str.replace('\"', '') - str = str.replace('`', '') - str = str.replace('´', '') + texte = texte.replace('\"', '') + texte = texte.replace('`', '') + texte = texte.replace('´', '') dic = CONFIGDIR + '/' + self.lang + '.dic' if os.path.exists(dic) : @@ -266,10 +266,16 @@ good = line.split('=')[1] except : good = ' ' - str = str.replace(bad, good) - - os.system('pico2wave -l %s -w %s \"%s\" ' % ( self.lang, SPEECH, str )) - + texte = texte.replace(bad, good) + discours = texte.split('\n') + i = 0 + fichiers = '' + for texte in discours: + i += 1 + fichier = CACHEFOLDER + 'speech' + str(i) + '.wav' + fichiers = fichiers + ' ' + fichier + os.system('pico2wave -l %s -w %s \"%s\" ' % ( self.lang, fichier, texte )) + os.system('sox %s %s' % ( fichiers, SPEECH )) player = self.onPlayer(SPEECH) self.player.set_state(gst.STATE_PLAYING) Modification : j'avais fait le diff à l'envers… C'est corrigé ! Autre modification : le patch du post #65 est plus efficace, annule et remplace celui-ci. Dernière modification par cataclop (Le 28/02/2013, à 23:34) HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonjour, Je cherche un synthétiseur vocal qui marche bien avec une voix comparable à celle d'accapella: http://www.acapela-group.fr/text-to-spe … -demo.html J'ai essayé gspeaker mais la voix n'est pas terrible. Je pense à essayer pico svox mais l'intallation semble compliqué et ce topic date apparemment. Comment l'installer et est-ce le bon à installer? A ma connaissance il n'existe pas de synthétiseur vocale libre équivalent aux synthétiseur commerciaux, en particulier acapela. Pico Svox est le synthétiseur de Android, bien qu'il y est le logiciel pico2wave pour réaliser les conversions, c'est avant un bibliothèque partagée à intégrer dans un programme. A part Pico il existe mbrola, encore plus compliqué à installé qui permet d'avoir des voix proches de celles de acapela. Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonjour cataclop, 1) Tu as exécuté la commande patch à l'envers. 2) gSpeech n'est pas prévue pour des textes longs, tu devrais développer ton propre programme. 3) La méthode que tu utilise n'est à mon humble avis pas bonne. Il est peu efficace de découpé ton texte ligne par ligne. 4) En revanche merci, parce que c'est complètement idiot d'avoir appelé ma variable str. Hors ligne cataclop Re : Synthèse vocale SVOX Pico Bonjour Tuxmouraille, 1) Corrigé dans ma dernière édition (c'est tout moi, ce genre d'erreur…) 2) Sans doute, mais ceci dépasse largement mes compétences en python ! Disons qu'au moins, comme ça, j'arrive à un résultat. 3) Tout-à-fait d'accord ; l'avantage de découper par ligne est que je suis (à peu près) sûr de ne pas découper une phrase ou un mot bêtement. J'aurais peut-être dû découper phrase par phrase (en utilisant '.' au lieu de '\n'), puis "rattacher" mes phrases jusqu'à obtenir une chaîne proche des 32768 caractères. 4) Je t'en prie : je l'ai d'abord fait pour essayer d'y comprendre quelque chose… HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne cataclop Re : Synthèse vocale SVOX Pico Je n'ai pas pu m'empêcher d'expérimenter l'idée 3). De la sorte, je passe d'environ 20 secondes de traitement à environ 15 secondes pour un texte de 36128 caractères. Voici donc le nouveau patch : --- gSpeech.old 2013-02-26 16:17:57.633682124 +0100 +++ gSpeech.py 2013-02-28 11:09:14.737670234 +0100 @@ -248,14 +248,14 @@ pynotify.Notification(APPNAME, _(u"I'm reading the text. One moment please."), self.icon).show() if widget.get_label() == _(u"From X.org") : - str = gtk.clipboard_get(selection="PRIMARY").wait_for_text() + texte = gtk.clipboard_get(selection="PRIMARY").wait_for_text() else : - str = gtk.clipboard_get(selection="CLIPBOARD").wait_for_text() + texte = gtk.clipboard_get(selection="CLIPBOARD").wait_for_text() #~ str = str.lower() - str = str.replace('\"', '') - str = str.replace('`', '') - str = str.replace('´', '') + texte = texte.replace('\"', '') + texte = texte.replace('`', '') + texte = texte.replace('´', '') dic = CONFIGDIR + '/' + self.lang + '.dic' if os.path.exists(dic) : @@ -266,12 +266,25 @@ good = line.split('=')[1] except : good = ' ' - str = str.replace(bad, good) - - os.system('pico2wave -l %s -w %s \"%s\" ' % ( self.lang, SPEECH, str )) - + texte = texte.replace(bad, good) + discours = texte.split('.') + i = 0 + fichiers = [] + texte = '' + while i < len(discours): + if len(texte) < 30000: + texte += discours[i] + '. ' + if len(texte) > 30000 or i == len(discours) - 1: + fichier = CACHEFOLDER + 'speech' + str(i) + '.wav' + fichiers.append(fichier) + os.system('pico2wave -l %s -w %s \"%s\" ' % ( self.lang, fichier, texte )) + texte = '' + i += 1 + os.system('sox %s %s' % ( ' '.join(fichiers), SPEECH )) player = self.onPlayer(SPEECH) self.player.set_state(gst.STATE_PLAYING) + for fichier in fichiers: + os.remove(fichier) # player fonction def onPlayer(self,file): Modification : Petit changement pour nettoyer tous les fichiers temporaires que ma méthode avait rajoutés, et pour restaurer la possibilité de lire des textes courts… Dernière modification par cataclop (Le 01/03/2013, à 12:13) HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne cataclop Re : Synthèse vocale SVOX Pico Juste pour donner un ordre d'idée : j'ai fait le test avec un texte de 178573 mots (791222 caractères sans compter les espaces) : − il a fallu (sur ma machine, en n'utilisant qu'un seul processeur puisque je ne sais pas utiliser le multitâche en python) environ 10 minutes de traitement avant que la lecture ne commence ; − 35 wav intermédiaires d'environ 50 Mo chacun ont été produits ; − le wav final pèse environ 1,75 Go pour 15h18 de lecture. Bref, la performance de pico2wave couplé avec gSpeech me semble tout de même honnête ! Un grand merci à frafra et à Tuxmouraille pour tout ce travail ! HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne cataclop Re : Synthèse vocale SVOX Pico Une petite question, au cas où : existe-t-il une voix française masculine pour pico2wave ? HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonjour cataclop, Je ne suis pas non plus développeur. gSpeech est un de mes projets d'apprentissage Python. Il existe beaucoup de documentation sur Python et PyGTK. Si tu as déjà programmé tu te rendras compte que Python est assez simple. Si tu veux apprendre sérieusement il y a le siteduzero et l'excellente "Apprendre à programmer avec Python". A ma connaissance il n'existe pas de voix masculine pour Pico. Regardes du côté de mbrola, je croit que ce projet est à l'origine du synthétiseur vocale de acapela. Je pense que tu peux créer un script bash: 1) tu découpe le texte en textes plus courts 2) dans une boucle 3) tu stokes le contenue de chaque nouveaux fichiers dans une variable 4) tu utilise cette variable pour généré le fichier wave 5) hors de la boucle 6) tu concatènes les wave 7) optionnel: tu convertis en flac, ogg ou mp3 pour gagner de la place. Si tu arrives à faire une découpage par chapitre tu peux aussi conserver un wave par chapitre. A condition que chaque chapitre fasse moins de 2¹⁵ caractères. Quel est le format source de ton texte? Dernière modification par Tuxmouraille (Le 01/03/2013, à 18:37) Hors ligne cataclop Re : Synthèse vocale SVOX Pico Bonsoir Tuxmouraille, Bien sûr que je pourrais faire tout ça, mais… pourquoi réinventer la roue ? Je ne suis pas informaticien, si ce n'est à mes heures perdues. Puisqu'une modification de 20 lignes m'amène au même résultat, je ne vais pas me lancer dans des dizaines de lignes de bash ou de python pour faire en définitive moins bien ! GSpeech s'intègre très bien dans l'interface, et la dernière étape que tu me suggères est facile à intégrer dans le clic droit de Nautilus grâce aux scripts (paquet nautilus-script-audio-convert). Comme ton logiciel est sous Gpl, j'en ai profité pour l'adapter à mon besoin. Conformément à l'esprit du libre et à la lettre de la licence, je mets à disposition mon travail pour les autres ; s'ils en profitent, tant mieux, j'en serai content. Sinon, eh bien ! je n'aurai rien perdu. Merci encore pour ton travail et ta disponibilité. Dernière modification par cataclop (Le 01/03/2013, à 22:11) HP Pavilion dv6-6090ef - Core i7 I7-2630QM 2 GHz - 6 Go RAM Ubuntu Quantal Hors ligne KER747 Re : Synthèse vocale SVOX Pico Ok, Merci Ma première page: www.aerozone.tk Hors ligne ian57 Re : Synthèse vocale SVOX Pico @tuxmouraille : merci pour ton travail, la version 0.4.5 est vraiment super bien. Juste une remarque : il me manque une raccourcis clavier pour la lecture. Je vais jeter un oeil à tes sources pour voir si je peux le faire ;-). encore super merci pour ce programme voila avec SHIFT-l on peut lire le texte selectionné dans l'interface graphique (et SHIFT-m pour le presse papier) ( ne marche que si la fenêtre multimédia est visible et possède le focus : le code modifié est le suivant (http://www.pygtk.org/pygtk2tutorial/sec … ators.html) : à partir de la ligne 325 # Create an accelerator group self.accelgroup = gtk.AccelGroup() # Add the accelerator group to the toplevel window self.window.add_accel_group(self.accelgroup) #~ button = gtk.Button(stock = gtk.STOCK_EXECUTE) #~ Label=button.get_children()[0] #~ Label=Label.get_children()[0].get_children()[1] #~ Label=Label.set_label(_(u"From ClipBoard")) #~ button.connect("clicked", gSpeech.onExecute) button = gtk.Button(_(u"From ClipBoard")) button.connect("clicked", gSpeech.onExecute) button.add_accelerator("clicked",self.accelgroup , ord('m'), gtk.gdk.SHIFT_MASK, gtk.ACCEL_VISIBLE) self.window.vbox.pack_start(button, expand=False, fill=False) #~ button = gtk.Button(stock = gtk.STOCK_EXECUTE) #~ Label=button.get_children()[0] #~ Label=Label.get_children()[0].get_children()[1] #~ Label=Label.set_label(_(u"From X.org")) button = gtk.Button(_(u"From X.org")) button.connect("clicked", gSpeech.onExecute) button.add_accelerator("clicked",self.accelgroup , ord('l'), gtk.gdk.SHIFT_MASK, gtk.ACCEL_VISIBLE) self.window.vbox.pack_start(button, expand=False, fill=False) Faudrait trouver un truc pour que le raccourcis soit directement dirigé vers ton appli, pour éviter de faire le focus : Peut être un début de solution avec https://code.google.com/p/autokey/ Dernière modification par ian57 (Le 14/03/2013, à 18:14) Ouvrir c'est pourrir un pneu... Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonsoir, @cataclop, et ceux qui veulent si coller, cf les modifs de cataclop: Si vous arrivez à modifier le code afin qu'il qui ne génère des fichiers wav que pour des portions de textes comprenant des paragraphes complets avec un nombres de caractères le plus proche de 2¹⁵, j'ajouterai le patch. @ian57: je te remercie pour le code. Je voulais le faire mais je n'ai jamais pris le temps de m'y mettre. Pour le problème de focus tu devrais regarder du côté du code de Terra Terminal. N.B.: je n'ai pas beaucoup de temps en ce moment, j’ai une connexion Internet toute pourrie, en HotSpot. Je serai quand même ravis de m'y remettre, dans les limites de mes possibilités. Dernière modification par Tuxmouraille (Le 14/03/2013, à 22:17) Hors ligne temps Re : Synthèse vocale SVOX Pico Bonjour, J'ai créé une base de données des différents sons phonétiques API, chacun des son ne pèse que quelques kilo, pensez-vous qu'il est possible de remplacer votre base de son par la mienne ? En d'autres mots, de remplacer vos fichiers audio de base par les miens dans votre projet ? Cordialement Parce que l'USB bootable est le support des systèmes experts, Parce que l'USB bootable contient sa propre image au démarrage. L'USB bootable permet de créer un monde à la dimension de son imagination Hors ligne ian57 Re : Synthèse vocale SVOX Pico Bonsoir, Pour le problème de focus tu devrais regarder du côté du code de Terra Terminal. Yop effectivement ya un truc pas mal intéressant http://bazaar.launchpad.net/~ozcanesen/ … ad:/terra/ le rep globalhotkeys... je vais regarder comment cela marche pour essayer de l'intégrer. Ouvrir c'est pourrir un pneu... Hors ligne Tuxmouraille Re : Synthèse vocale SVOX Pico Bonjour, J'ai créé une base de données des différents sons phonétiques API, chacun des son ne pèse que quelques kilo, pensez-vous qu'il est possible de remplacer votre base de son par la mienne ? En d'autres mots, de remplacer vos fichiers audio de base par les miens dans votre projet ? Cordialement Bonjour, gSpeech ne génère pas les fichiers de son, il utilise l'application pico2wave pour ça. Je ne pense donc pas qu'il soit possible d'utiliser ta base. Hors ligne
I need to create models of existing previously unknown tables. Others have had said to You can look at inspectdb code, and instead of outputting code return classes. but for a python newbie I am having difficulties. Could anyone offer a more concrete example on how to do this? Many thanks. UPDATE2: I can now create a model of a previously unknown mysql table at runtime using the below functions. When I try to query the model though, for a table called 'table1' I get "Table 'djangodb.subscription_table1' doesn't exist"). When I rename 'table1' to 'subscription_table1' I get this error: "Table 'djangodb.subscription_subscription_tb2f1e1e47d73417ab2b187bc4a08bf57' doesn't exist"). HELP! FINAL UPDATE: I went with another solution for my particular problem: Django - List of Dictionaries to Tables2 def getModel(table_name): myColumns=getColumns(table_name) attrs = {} attrs['__module__'] = 'subscription.models' for x,y in myColumns.items(): fieldType = y["type"] if x == 'id': '' elif fieldType == "char": attrs[x]=models.CharField(max_length='length') elif fieldType == "float": attrs[x]=models.FloatField(max_length='length') elif fieldType == "int": attrs[x]=models.IntegerField() elif fieldType == "text": attrs[x]=models.TextField() else: print "AW PROB in exptDB---------------",x,y["type"],y["length"] myModel = type(str(table_name), (models.Model,), attrs) return myModel def getColumns(expt_id): cursor = connection.cursor() cursor.execute("desc %s;" % (expt_id)) exptInfo = str(cursor.fetchall())[1:-1] myList= exptInfo.split("""(u'""") myColumns = {} for item in myList: mySplit = item.replace("u'","").replace("'","").replace(" ","").split(",") if len(mySplit)>=2: subSplit=mySplit[1].split("(") if len(subSplit)>=1: subSplit.append('999')#does not latter. myColumns[mySplit[0]]={"type":subSplit[0],"length":subSplit[1].replace(")","")} return myColumns
Experimenting with Google App Engine Google App Engine was actually the last project I worked on before I left Google. I was the PM of the project when it started, but has grown quite a bit since I left Google last June, and now it is has many more engineers and a handful of extremely talented PMs. I was fortunate enough to be able to see Kevin Gibbs and crew at Campfire One yesterday, and I could barely sit through the whole talk I was so excited to play around with the system. I have been "meaning to" start a blog for months. Blog software is extremely simple to implement, so I figured it would be a great app to test out on the new App Engine infrastructure. This blog runs on the code I wrote this evening. The lack of SQL is actually refreshing. Like Django and many other frameworks, you declare your data types in Python: class Entry(db.Model): author = db.UserProperty() title = db.StringProperty(required=True) slug = db.StringProperty(required=True) body = db.TextProperty(required=True) published = db.DateTimeProperty(auto_now_add=True) updated = db.DateTimeProperty(auto_now=True) I used a web framework we use at FriendFeed. It looks a lot like the webapp framework that ships with App Engine and web.py (which inspired both of them). It took virtually no effort to get it to work in App Engine thanks to App Engine's support for WSGI. Running the application looks a lot like the App Engine examples: application = web.WSGIApplication([ (r"/", MainPageHandler), (r"/index", IndexHandler), (r"/feed", FeedHandler), (r"/entry/([^/]+)", EntryHandler), ]) wsgiref.handlers.CGIHandler().run(application) Generating the front page is totally easy: class MainPageHandler(web.RequestHandler): def get(self): entries = db.Query(Entry).order('-published').fetch(limit=5) self.render("main.html", entries=entries) Generating the Atom feed is equally easy: class FeedHandler(web.RequestHandler): def get(self): entries = db.Query(Entry).order('-published').fetch(limit=10) self.set_header("Content-Type", "application/atom+xml") self.render("atom.xml", entries=entries) I wanted to use slugs in my URLs to entries to make them friendlier, so I had to do a query to lookup entries for entry URLs: class EntryHandler(web.RequestHandler): def get(self, slug): entry = db.Query(Entry).filter("slug =", slug).get() if not entry: raise web.HTTPError(404) self.render("entry.html", entry=entry) I also needed security for adding/editing blog entries. App Engine lets you use Google's account system, which is nice for small apps like this. Likewise, it knows which users are "admins" for the app, so I decided to use this built-in role to handle security for the blog: only admins can add/edit entries. First, I wrote a decorator that will automatically add admin security to any RequestHandler method (redirecting to the login page if the user is not logged in): def administrator(method): @functools.wraps(method) def wrapper(self, *args, **kwargs): user = users.get_current_user() if not user: if self.request.method == "GET": self.redirect(users.create_login_url(self.request.uri)) return raise web.HTTPError(403) elif not users.is_current_user_admin(): raise web.HTTPError(403) else: return method(self, *args, **kwargs) return wrapper My edit handler looks like this: class NewEntryHandler(web.RequestHandler): @administrator def get(self): self.render("new.html") @administrator def post(self): entry = Entry( author=users.get_current_user(), title=self.get_argument("title"), slug=self.get_argument("slug"), body=self.get_argument("body"), ) entry.put(entry) self.redirect("/entries/" + entry.slug) I don't think this blog will ever get millions of page views, but it is pretty cool that it could in theory :) I didn't have to configure anything. I didn't need to make an account system to make an administrative section of the site. And the entire blog is less than 100 lines of code. I deployed by running a script, and I was done. No machines, no "apt-get install", no "sudo /etc/init.d/whatever restart", nothing. I am impressed. The App Engine team has done a fantastic job, and I think they have already changed the way I do hobby projects. The next logical question is: would I run a real business on infrastructure that is so different than everyone else's? If I change my mind about App Engine, what are my options? I am hoping a number of open source projects spring up as alternatives to lower the switching costs over the next year. I will be very interested to see how many startups take the leap and run on App Engine entirely in the meantime.
'{:%m}'.format(datetime.datetime.now()) seems to work. Of course, you could take the less direct approach: '{:02d}'.format(datetime.datetime.now().month) Or you could use old style string interpolation: '%02d' % (datetime.datetime.now().month) but I like the first one because I think it's cool ... Finally, I don't see what is wrong with .strftime ... even though you said you tried it, it works just fine for me: datetime.datetime.now().strftime('%m') What it seems like you're looking for is a special subclass of int which knows how to "represent" itself when it printed: class MyInt(int): def __str__(self): return '%02d' % self a = MyInt(3) print (a) However, I would definitely not recommend using this approach, instead I would recommend using string formatting or string interpolation as I've done above on the integer objects when you need to represent the integers as strings.
TERMS & CONDITION Alfamart official partner merchandise FIFA piala dunia Brazil 2014 Peraturan Lomba : Lomba dimulai tanggal 16 Januari - 16 April 2014 Penutupan Pendaftaran tanggal 16 April 2014 Pukul 17:00 WIB Pemenang akan diumumkan pada tanggal 5 Mei 2014 Penyerahan Hadiah 12 Mei 2014 Peserta Kompetisi diwajibkan Like Facebook Alfamart Sahabat Indonesia (www.facebook.com/alfamartku) dan Twitter @alfamartku (www. twitter.com/alfamartku) Lomba ini tidak dipungut biaya sama sekali Para peserta warga negara Indonesia dan menetap di Indonesia Satu orang hanya dapat memperoleh satu hadiah. Peserta boleh mendaftar beberapa domain, dengan syarat Nama, Email, dan nomer HP harus sama. Nama berbeda tapi orangnya sama akan di diskualifikasi Judul kata kunci yang digunakan "<a href="http://acnefreeforever8888.blogspot.com/2014/03/alfamart.html">Alfamart official partner merchandise FIFA piala dunia Brazil 2014</a>" Peserta diwajibkan mencantumkan domain http://www.alfamartku.com di blogroll atau pada halaman webpage yang dilombakan Domain/Blog minimum telah berumur 3 bulan terhitung 16 Januari 2014, namun halaman/URL entry yang disubmit harus benar-benar baru tanpa back link dan versi cache sebelumnya Setiap domain yang terdaftar hanya boleh memuat satu artikel saja dan dilarang menggunakan domain/blog berakhiran .cc dan .tk Tidak menggunakan domain dan subdomain dengan target keyword yang di lombakan (tidak menjadikan 6 - 7 frase keyword yang dilombakan sebagai domain atau subdomain) Dilarang mengisi konten dengan kata kunci yang bersifat SARA, Pornografi atau tindakan yang melanggar hukum dan tidak sesuai dengan tema lomba Setiap peserta wajib memasang banner/logo kontes Alfamart SEO Blog Contest 2014 di blog/website yang didaftarkan Setiap domain terdaftar diperbolehkan menggunakan 1 artikel Alfamart SEO Blog Contest 2014 Peserta diwajibkan mencantumkan URL konten lomba pada form Pendaftaran. Jika anda mendaftarkan URL blog/web yang berisikan info kontes ini, tidak akan kami anggap itu sebagai peserta Tidak diperkenankan menggunakan teknik Blackhat Para peserta diwajibkan mengikuti peraturan dan materi lomba Mengisi informasi Nama, Alamat, Nomer HP, No Member, Email yang valid, Link Website / blog dan no struk belanja Alfamart minimal atau sama dengan Rp 50.000 . Bagi yang mendaftarkan diri tanpa mengisi alamat dengan jelas dan lengkap atau menggunakan Nomer HP /email yang tidak bisa dihubungi, kami meminta untuk mendaftarkan diri kembali karena pendaftaran yang tidak lengkap akan kami hapus Peraturan dapat ditambah atau diubah dari waktu ke waktu sesuai dengan feedback yang diterima dari peserta Peserta harus menaati dan menjalankan semua syarat serta ketentuan lomba tanpa terkecuali Materi Lomba : Peserta diwajibkan membuat artikel dengan menggunakan bahasa Indonesia yang baik dan benar, menggunakan tema berita dan informasi mengenai perusahaan PT. Sumber Alfaria Trijaya, Tbk, dan minimal 500 kata/artikel. Informasi bisa didapat di website www.alfamartku.com Harus mencantumkan minimal 4 linkback dengan ancor text : Alfamart official partner merchandise piala dunia 2014 (http://www.alfamartku.com) Merchandise piala dunia brazil di Toko Alfamart (http://www.alfamartku.com) Belanja baju sepak bola di Alfamart (http://www.alfamartku.com) Nuansa world cup Brazil 2014 di Alfamart (http://www.alfamartku.com) Sistem Penjurian : Penjurian dilakukan tanggal 17 April 2014 Pukul 10:00Penjurian dan pemilihan pemenang dinilai melalui posisi teratas pada Search Engine Google.co.id.Kualitas tulisan pada artikel juga termasuk pada sistem penjurianUntuk pemenang The Best Blog akan dinilai dari isi konten serta tampilan blog yang friendly secara User Experience dan User InterfaceHasil keputusan dari juri mutlak dan tidak bisa di ganggu gugatJika hasil pemenang sudah didapatkan dan ternyata pemenang tersebut tidak sepenuhnya mengikuti peraturan dan materi pada lomba akan diganti dan status pemenang akan diserahkan pada peserta lain yang memiliki materi konten dan mengikuti peraturan sesuai dengan lomba
I suspect this is a regular expression problem - and a very basic one, so apologies. In Python, if I have a string like xdtwkeltjwlkejt7wthwk89lk how can I get the index of the first digit in the string? Thanks! Use >>> import re >>> s1 = "thishasadigit4here" >>> m = re.search("\d", s1) >>> if m: ... print "Digit found at position %d" % m.start() ... else: ... print "No digit in that string" ... Digit found at position 13 >>> Seems like a good job for a parser: >>> from simpleparse.parser import Parser >>> s = 'xdtwkeltjwlkejt7wthwk89lk' >>> grammar = """ ... integer := [0-9]+ ... <alpha> := -integer+ ... all := (integer/alpha)+ ... """ >>> parser = Parser(grammar, 'all') >>> parser.parse(s) (1, [('integer', 15, 16, None), ('integer', 21, 23, None)], 25) >>> [ int(s[x[1]:x[2]]) for x in parser.parse(s)[1] ] [7, 89] import re mob = re.search('\d', 'xdtwkeltjwlkejt7wthwk89lk') if mob: print mob.start() Here is another regex-less way, more in a functional style. This one finds the position of the first occurrence of each digit that exists in the string, then chooses the lowest. A regex is probably going to be more efficient, especially for longer strings (this makes at least 10 full passes through the string and up to 20). haystack = "xdtwkeltjwlkejt7wthwk89lk" digits = "012345689" found = [haystack.index(dig) for dig in digits if dig in haystack] firstdig = min(found) if found else None As everybody has answered using regex, here is another way without regex and which may suffice in many cases s='xdtwkeltjwlkejt7wthwk89lk' for i, c in enumerate(s): if c.isdigit(): print i break output: and to get all digits and their positions, a simple expression will do, Regex is overkill here >>> [(i,c) for i,c in enumerate('xdtwkeltjwlkejt7wthwk89lk') if c.isdigit()] [(15, '7'), (21, '8'), (22, '9')] in Python 2.7+ you can create a dict of digit and its position >>> {c:i for i,c in enumerate('xdtwkeltjwlkejt7wthwk89lk') if c.isdigit()} {'9': 22, '8': 21, '7': 15} I'm sure there are multiple solutions, but using regular expressions you can do this: >>> import re >>> match = re.search("\d", "xdtwkeltjwlkejt7wthwk89lk") >>> match.start(0) 15 As the other solutions say, to find the index of the first digit in the string we can use regular expressions: >>> s = 'xdtwkeltjwlkejt7wthwk89lk' >>> match = re.search(r'\d', s) >>> print match.start() if match else 'No digits found' 15 >>> s[15] # To show correctness '7' While simple, a regular expression match is going to be overkill for super-long strings. A more efficient way is to iterate through the string like this: >>> for i, c in enumerate(s): ... if c.isdigit(): ... print i ... break ... 15 In case we wanted to extend the question to finding the first >>> s = 'xdtwkeltjwlkejt711wthwk89lk' >>> for i, c in enumerate(s): ... if c.isdigit(): ... start = i ... while i < len(s) and s[i].isdigit(): ... i += 1 ... print 'Integer %d found at position %d' % (int(s[start:i]), start) ... break ... Integer 711 found at position 15 you can use regular expression import re y = "xdtwkeltjwlkejt7wthwk89lk" s = re.search("\d",y).start() Thought I'd toss my method on the pile. I'll do just about anything to avoid regex. sequence = 'xdtwkeltjwlkejt7wthwk89lk' i = [x.isdigit() for x in sequence].index(True) To explain what's going on here: One of my colleagues had a really awesome answer to this: import re result = " Total files:................... 90" match = re.match(r".*[^\d](\d+)$", result) if match: print match.group(1) def first_digit_index(iterable): try: return next(i for i, d in enumerate(iterable) if d.isdigit()) except StopIteration: return -1 This does not use regex and will stop iterating as soon as the first digit is found.
In Python 2.x I'm able to do this: >>> '4f6c6567'.decode('hex_codec') 'Oleg' But in Python 3.2 I encounter this error: >>> b'4f6c6567'.decode('hex_codec') Traceback (most recent call last): File "<pyshell#25>", line 1, in <module> b'4f6c6567'.decode('hex_codec') TypeError: decoder did not return a str object (type=bytes) According to the docs hex_codec should provide "bytes-to-bytes mappings". So the object of byte-string is correctly used here. How can I get rid of this error to be able to avoid unwieldy workarounds to convert from hex-encoded text?
I would like to invoke the following code in-situ wherever I refer to MY_MACRO in my code below. # MY_MACRO frameinfo = getframeinfo(currentframe()) msg = 'We are on file ' + frameinfo.filename + ' and line ' + str(frameinfo.lineno) # Assumes access to namespace and the variables in which `MY_MACRO` is called. current_state = locals().items() Here is some code that would use MY_MACRO: def some_function: MY_MACRO def some_other_function: some_function() MY_MACRO class some_class: def some_method: MY_MACRO In case it helps: One of the reasons why I would like to have this ability is because I would like to avoid repeating the code of MY_MACROwherever I need it. Having something short and easy would be very helpful. Another reason is because I want to embed an IPython shell wihthin the macro and I would like to have access to all variables in locals().items()(see this other question) Is this at all possible in Python? What would be the easiest way to get this to work? Please NOTE that the macro assumes access to the entire namespace of the scope in which it's called (i.e. merely placing the code MY_MACRO in a function would not work). Note also that if I place MY_MACRO in a function, lineno would output the wrong line number.
Hada de la Luna [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Bonjour, pour une personne qui a des problèmes avec la luminosité excessive de l'écran, j'aimerais savoir comment régler cela de façon "fixe" qui ne soit pas remise en question à chaque démarrage. En effet, en utilisant : Paramètres système > Luminosité et verrouillage La personne baisse au minimum possible puis au démarrage suivant s'explose ce qui lui reste de vision car l'écran est de nouveau au maximum de luminosité... Et rebelote le réglage... Comment faire pour que cela ne bouge plus "tout seul" ? Dernière modification par Hada de la Luna (Le 22/12/2012, à 00:26) Hada de la Luna :o) Hors ligne DAnGk41 Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" En ce qui me concerne, j'ai cette option sur mon écran, donc je l'ai réglé directement sur celui-ci. promis, je vais penser à me moderniser ==> http://i43.servimg.com/u/f43/11/83/24/66/pb090210.jpg Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Certes mais ce n'est pas possible sur un portable... Hada de la Luna :o) Hors ligne Zakhar Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Si ça n'est pas mémorisé avec les outils graphiques, économie d'énergie et autres, le principe est de trouver la ligne de commande qui fait ça et la rajouter au démarrage de session. Personnellement j'avais un problème de fréquence d'écran (oui j'ai encore un vieil écran cathodique et ces machins là ça a plusieurs fréquences contrairement aux LCD), et je l'ai réglé ainsi par un : /usr/bin/xrandr --output default --mode 1280x1024 à chaque ouverture de session (dans les programmes qui se lancent automatiquement au démarrage de session) Dernière modification par Zakhar (Le 14/10/2012, à 16:27) "A computer is like air conditioning: it becomes useless when you open windows." (Linus Torvald) Hors ligne Teromene Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Une technique un peu bourrin mais qui marche est de lancer au démarrage ce script python avec le pourcentage de luminosité en paramètre : #!/usr/bin/python import dbus import getopt , sys bus = dbus.SessionBus() proxy = bus.get_object('org.gnome.SettingsDaemon', '/org/gnome/SettingsDaemon/Power') iface = dbus.\ Interface(proxy, dbus_interface='org.gnome.SettingsDaemon.Power.Screen') opts, args = getopt.getopt(sys.argv[1:], "ho:v", ["help", "output="]) iface.SetPercentage(args[0]) Dernière modification par Teromene (Le 15/10/2012, à 17:22) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Merci ! Par contre, ne sachant pas programmer, je ne comprend même pas ce qu'il faut faire là... Pouvez vous détailler, pas à pas ? Hada de la Luna :o) Hors ligne Teromene Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Hors ligne mikeshlinux Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Bonjour, J'utilise F-lux, mais en ce moment je ne suis pas sur qu'il fonctionne correctement sur ma version 12.04. J'ai le même souci et j'ai reglé le contraste de mon écran. Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Juste pour dire que cela fonctionne parfaitement ! Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Bonsoir, en fait, il semble que cela ne fonctionne pas systématiquement... Des fois oui et des fois non... Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" En fait, j'ai l'impression que cela ne fonctionne que lorsque ça redémarre, si on éteint tout et qu'on allume, la luminosité est au max... Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Des idées alternatives ? De plus le logiciel F-lux n'existe pas dans ma logithèque... Hada de la Luna :o) Hors ligne richardgilbert Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" J'ai résolu mon problème avec: sudo apt-get install xbacklight Aplications au demarrage => ajouter xbacklight =25 Debian, Ubuntu, ElementaryOS, Voyager Linux, Lubuntu & Crunchbang. Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Merci, j'ai mis ça en place, je te tiendrais au courant si cela fonctionne... Hada de la Luna :o) Hors ligne Major Grubert Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" J'ai ça qui date un peu mais c'est peut-être tjs valable : http://major.grubert.free.fr/index.php? … permanente Pour en savoir plus sur Gnome Shell. . Lenovo Yoga 13 : Windows 8.1 / Ubuntu Gnome 14.04 . Asus x7010 : Ubuntu 14.04 (Gnome Shell) . Asus Eeepc : Xubuntu 14.04 (32b) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" xbacklight fonctionne parfaitement et sur plusieurs machines différentes... Hada de la Luna :o) Hors ligne
I tested some different algorithms, measuring speed and number of collisions. I used three different key sets: For each corpus, the number of collisions and the average time spent hashing was recorded. I tested: Results Each result contains the average hash time, and the number of collisions Hash Lowercase Random UUID Numbers ============= ============= =========== ============== Murmur 145 ns 259 ns 92 ns 6 collis 5 collis 0 collis FNV-1a 152 ns 504 ns 86 ns 4 collis 4 collis 0 collis FNV-1 184 ns 730 ns 92 ns 1 collis 5 collis 0 collis* DBJ2a 158 ns 443 ns 91 ns 5 collis 6 collis 0 collis*** DJB2 156 ns 437 ns 93 ns 7 collis 6 collis 0 collis*** SDBM 148 ns 484 ns 90 ns 4 collis 6 collis 0 collis** SuperFastHash 164 ns 344 ns 118 ns 85 collis 4 collis 18742 collis CRC32 250 ns 946 ns 130 ns 2 collis 0 collis 0 collis LoseLose 338 ns - - 215178 collis Notes: Do collisions actually happen? Yes. I started writing my test program to see if hash collisions actually happen - and are not just a theoretical construct. They do indeed happen: FNV-1 collisions creamwove collides with quists FNV-1a collisions costarring collides with liquid declinate collides with macallums altarage collides with zinke altarages collides with zinkes Murmur2 collisions cataract collides with periti roquette collides with skivie shawl collides with stormbound dowlases collides with tramontane cricketings collides with twanger longans collides with whigs DJB2 collisions hetairas collides with mentioner heliotropes collides with neurospora depravement collides with serafins stylist collides with subgenera joyful collides with synaphea redescribed collides with urites dram collides with vivency DJB2a collisions haggadot collides with loathsomenesses adorablenesses collides with rentability playwright collides with snush playwrighting collides with snushing treponematoses collides with waterbeds CRC32 collisions codding collides with gnu exhibiters collides with schlager SuperFastHash collisions dahabiah collides with drapability encharm collides with enclave grahams collides with gramary ...snip 79 collisions... night collides with vigil nights collides with vigils finks collides with vinic Randomnessification The other subjective measure is how randomly distributed the hashes are. Mapping the resulting HashTables shows how evenly the data is distributed. All the hash functions show good distribution when mapping the table linearly: Or as a Hilbert Map (XKCD is always relevant): Except when hashing number strings ("1", "2", ..., "216553") (for example, zip codes), where patterns begin to emerge in most of the hashing algorithms: SDBM: DJB2a: FNV-1: All except FNV-1a, which still look plenty random to me: In fact, Murmur2 seems to have even better randomness with Numbers than FNV-1a: When I look at the FNV-1a "number" map, I think I see subtle vertical patterns. With Murmur I see no patterns at all. What do you think? The extra * in the above table denotes how bad the randomness is. With FNV-1a being the best, and DJB2x being the worst: Murmur2: FNV-1a: FNV-1: * DJB2: ** DJB2a: ** SDBM: *** SuperFastHash: CRC: ******************* Loselose: ************************** * ************* ****************************** ****************************************************************** I originally wrote this program to decide if I even had to worry about collisions: I do. And then it turned into making sure that the hash functions were sufficiently random. FNV-1a algorithm The FNV1 hash comes in variants that return 32, 64, 128, 256, 512 and 1024 bit hashes. The FNV-1a algorithm is: hash = FNV_offset_basis for each octetOfData to be hashed hash = hash xor octetOfData hash = hash * FNV_prime return hash Where the constants FNV_offset_basis and FNV_prime depend on the return hash size you want: Hash Size Prime Offset =========== =========================== ================================= 32-bit 16777619 2166136261 64-bit 1099511628211 14695981039346656037 128-bit 309485009821345068724781371 144066263297769815596495629667062367629 256-bit prime: 2^88 + 2^8 + 0x3b = 309485009821345068724781371 offset: 100029257958052580907070968620625704837092796014241193945225284501741471925557 512-bit prime: 2^344 + 2^8 + 0x57 = 35835915874844867368919076489095108449946327955754392558399825615420669938882575126094039892345713852759 offset: 9659303129496669498009435400716310466090418745672637896108374329434462657994582932197716438449813051892206539805784495328239340083876191928701583869517785 1024-bit prime: 2^680 + 2^8 + 0x8d = 5016456510113118655434598811035278955030765345404790744303017523831112055108147451509157692220295382716162651878526895249385292291816524375083746691371804094271873160484737966720260389217684476157468082573 offset: 1419779506494762106872207064140321832088062279544193396087847491461758272325229673230371772250864096521202355549365628174669108571814760471015076148029755969804077320157692458563003215304957150157403644460363550505412711285966361610267868082893823963790439336411086884584107735010676915 See the main FNV page for details. As a practical matter: 32-bit UInt32, 64-bit UInt64, and 128-bit Guid can be useful All my results are with the 32-bit variant. FNV-1 better than FNV-1a? No. FNV-1a is all around better. There was more collisions with FNV-1a when using the English word corpus: Hash Word Collisions ====== =============== FNV-1 1 FNV-1a 4 Now compare lowercase and uppercase: Hash lowercase word Collisions UPPERCASE word collisions ====== ========================= ========================= FNV-1 1 9 FNV-1a 4 11 In this case FNV-1a isn't "400%" worse than FN-1, only 20% worse. I think the more important takeaway is that there are two classes of algorithms when it comes to collisions: collisions rare: FNV-1, FNV-1a, DJB2, DJB2a, SDBM collisions common: SuperFastHash, Loselose And then there's the how evenly distributed the hashes are: outstanding distribution: Murmur2, FNV-1a, SuperFastHas excellent distribution: FNV-1 good distribution: SDBM, DJB2, DJB2a horrible distribution: Loselose Update Murmur? Sure, why not Update @whatshisname wondered how a CRC32 would perform, added numbers to the table. CRC32 is pretty good. Few collisions, but slower, and the overhead of a 1k lookup table. Snip all erroneous stuff about CRC distribution - my bad Up until today I was going to use FNV-1a as my de facto hash-table hashing algorithm. But now I'm switching to Murmur2: Faster Better randomnessification of all classes of input And I really, really hope there's something wrong with the SuperFastHash algorithm I found; it's too bad to be as popular as it is. Update: From the MurmurHash3 homepage on Google: (1) - SuperFastHash has very poor collision properties, which have been documented elsewhere. So I guess it's not just me. Update: I realized why Murmur is faster than the others. MurmurHash2 operates on four bytes at a time. Most algorithms are byte by byte: for each octet in Key AddTheOctetToTheHash This means that as keys get longer Murmur gets its chance to shine. Update A timely post by Raymond Chen reiterates the fact that "random" GUIDs are not meant to be used for their randomness. They, or a subset of them, are unsuitable as a hash key: Even the Version 4 GUID algorithm is not guaranteed to be unpredictable, because the algorithm does not specify the quality of the random number generator. The Wikipedia article for GUID contains primary research which suggests that future and previous GUIDs can be predicted based on knowledge of the random number generator state, since the generator is not cryptographically strong. Randomess is not the same as collision avoidance; which is why it would be a mistake to try to invent your own "hashing" algorithm by taking some subset of a "random" guid: int HashKeyFromGuid(Guid type4uuid) { //A "4" is put somewhere in the GUID. //I can't remember exactly where, but it doesn't matter for //the illustrative purposes of this pseudocode int guidVersion = ((type4uuid.D3 & 0x0f00) >> 8); Assert(guidVersion == 4); return (int)GetFirstFourBytesOfGuid(type4uuid); } Note: Again, I put "random GUID" in quotes, because it's the "random" variant of GUIDs. A more accurate description would be Type 4 UUID. But nobody knows what type 4, or types 1, 3 and 5 are. So it's just easier to call them "random" GUIDs. All English Words mirrors
I am using the class based generic views. class MyView(UpdateView): model = MyModel success_url = "/test/list" Now this is working fine. But i want to make the parent class so that my all views inherit from it and define success_url there like this class MyMixin(object): def __init__(self, *args, **kwargs): self.success_url ="/test/list?myvar=true" then class MyView(UpdateView, MyMixin): model = MyModel success_url = "/test/list" But my success_url is not overridden
I'm using SQLAlchemy import hashlib import sqlalchemy as sa from sqlalchemy import orm from allsun.model import meta t_user = sa.Table("users",meta.metadata,autoload=True) class Duplicat(Exception): pass class LoginExistsException(Exception): pass class EmailExistsException(Exception): pass class User(object): """ def __setattr__(self, key, value): if key=='password' : value=unicode(hashlib.sha512(value).hexdigset()) object.__setattr__(self,key,value) """ def loginExists(self): try: meta.Session.query(User).filter(User.login==self.login).one() except orm.exc.NoResultFound: pass else: raise LoginExistsException() def emailExists(self): try: meta.Session.query(User).filter(User.email==self.email).one() except orm.exc.NoResultFound: pass else: raise EmailExistsException() def save(self): meta.Session.begin() meta.Session.save(self) try: meta.Session.commit() except sa.exc.IntegrityError: raise Duplicat() How can I get the last inserted id when I call: user = User() user.login = request.params['login'] user.password = hashlib.sha512(request.params['password']).hexdigest() user.email = request.params['email'] user.save()
Alright, now that I've got some time to post my experience after figuring this all out, and thanks to Craig for getting me on the right track. Here's what you need to do (at least with SVN v1.4 and Trac v0.10.3): Locate your SVN repository that you want to enable the Post Commit Hook for. inside the SVN repository there's a directory called hooks, this is where you'll be placing the post commit hook. create a file post-commit.bat (this is the batch file that's automatically called by SVN post commit). Place the following code inside the post-commit.bat file ( this will call your post commit cmd file passing in the parameters that SVN automatically passes %1 is the repository, %2 is the revision that was committed. %~dp0\trac-post-commit-hook.cmd %1 %2 Now create the trac-post-commit-hook.cmd file as follows: @ECHO OFF :: :: Trac post-commit-hook script for Windows :: :: Contributed by markus, modified by cboos. :: Usage: :: :: 1) Insert the following line in your post-commit.bat script :: :: call %~dp0\trac-post-commit-hook.cmd %1 %2 :: :: 2) Check the 'Modify paths' section below, be sure to set at least TRAC_ENV :: ---------------------------------------------------------- :: Modify paths here: :: -- this one must be set SET TRAC_ENV=C:\trac\MySpecialProject :: -- set if Python is not in the system path :: SET PYTHON_PATH= :: -- set to the folder containing trac/ if installed in a non-standard location :: SET TRAC_PATH= :: ---------------------------------------------------------- :: Do not execute hook if trac environment does not exist IF NOT EXIST %TRAC_ENV% GOTO :EOF set PATH=%PYTHON_PATH%;%PATH% set PYTHONPATH=%TRAC_PATH%;%PYTHONPATH% SET REV=%2 :: GET THE AUTHOR AND THE LOG MESSAGE for /F %%A in ('svnlook author -r %REV% %1') do set AUTHOR=%%A for /F "delims==" %%B in ('svnlook log -r %REV% %1') do set LOG=%%B :: CALL THE PYTHON SCRIPT Python "%~dp0\trac-post-commit-hook" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%" The most important parts here are to set your TRAC_ENV which is the path to the repository root (SET TRAC_ENV=C:\trac\MySpecialProject) The next MAJORLY IMPORTANT THING in this script is to do the following: :: GET THE AUTHOR AND THE LOG MESSAGE for /F %%A in ('svnlook author -r %REV% %1') do set AUTHOR=%%A for /F "delims==" %%B in ('svnlook log -r %REV% %1') do set LOG=%%B if you see in the script file above I'm using svnlook (which is a command line utility with SVN) to get the LOG message and the author that made the commit to the repository. Then, the next line of the script is actually calling the Python code to perform the closing of the tickets and parse the log message. I had to modify this to pass in the Log message and the author (which the usernames I use in Trac match the usernames in SVN so that was easy). CALL THE PYTHON SCRIPT Python "%~dp0\trac-post-commit-hook" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%" The above line in the script will pass into the python script the Trac Environment, the revision, the person that made the commit, and their comment. Here's the Python script that I used. One thing that I did additional to the regular script is we use a custom field (fixed_in_ver) which is used by our QA team to tell if the fix they're validating is in the version of code that they're testing in QA. So, I modified the code in the python script to update that field on the ticket. You can remove that code as you won't need it, but it's a good example of what you can do to update custom fields in Trac if you also want to do that. I did that by having the users optionally include in their comment something like: (version 2.1.2223.0) I then use the same technique that the python script uses with regular expressions to get the information out. It wasn't too bad. Anyway, here's the python script I used, Hopefully this is a good tutorial on exactly what I did to get it to work in the windows world so you all can leverage this in your own shop... If you don't want to deal with my additional code for updating the custom field, get the base script from this location as mentioned by Craig above (Script From Edgewall) #!/usr/bin/env python # trac-post-commit-hook # ---------------------------------------------------------------------------- # Copyright (c) 2004 Stephen Hansen # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # ---------------------------------------------------------------------------- # This Subversion post-commit hook script is meant to interface to the # Trac (http://www.edgewall.com/products/trac/) issue tracking/wiki/etc # system. # # It should be called from the 'post-commit' script in Subversion, such as # via: # # REPOS="$1" # REV="$2" # LOG=`/usr/bin/svnlook log -r $REV $REPOS` # AUTHOR=`/usr/bin/svnlook author -r $REV $REPOS` # TRAC_ENV='/somewhere/trac/project/' # TRAC_URL='http://trac.mysite.com/project/' # # /usr/bin/python /usr/local/src/trac/contrib/trac-post-commit-hook \ # -p "$TRAC_ENV" \ # -r "$REV" \ # -u "$AUTHOR" \ # -m "$LOG" \ # -s "$TRAC_URL" # # It searches commit messages for text in the form of: # command #1 # command #1, #2 # command #1 & #2 # command #1 and #2 # # You can have more then one command in a message. The following commands # are supported. There is more then one spelling for each command, to make # this as user-friendly as possible. # # closes, fixes # The specified issue numbers are closed with the contents of this # commit message being added to it. # references, refs, addresses, re # The specified issue numbers are left in their current status, but # the contents of this commit message are added to their notes. # # A fairly complicated example of what you can do is with a commit message # of: # # Changed blah and foo to do this or that. Fixes #10 and #12, and refs #12. # # This will close #10 and #12, and add a note to #12. import re import os import sys import time from trac.env import open_environment from trac.ticket.notification import TicketNotifyEmail from trac.ticket import Ticket from trac.ticket.web_ui import TicketModule # TODO: move grouped_changelog_entries to model.py from trac.util.text import to_unicode from trac.web.href import Href try: from optparse import OptionParser except ImportError: try: from optik import OptionParser except ImportError: raise ImportError, 'Requires Python 2.3 or the Optik option parsing library.' parser = OptionParser() parser.add_option('-e', '--require-envelope', dest='env', default='', help='Require commands to be enclosed in an envelope. If -e[], ' 'then commands must be in the form of [closes #4]. Must ' 'be two characters.') parser.add_option('-p', '--project', dest='project', help='Path to the Trac project.') parser.add_option('-r', '--revision', dest='rev', help='Repository revision number.') parser.add_option('-u', '--user', dest='user', help='The user who is responsible for this action') parser.add_option('-m', '--msg', dest='msg', help='The log message to search.') parser.add_option('-c', '--encoding', dest='encoding', help='The encoding used by the log message.') parser.add_option('-s', '--siteurl', dest='url', help='The base URL to the project\'s trac website (to which ' '/ticket/## is appended). If this is not specified, ' 'the project URL from trac.ini will be used.') (options, args) = parser.parse_args(sys.argv[1:]) if options.env: leftEnv = '\\' + options.env[0] rghtEnv = '\\' + options.env[1] else: leftEnv = '' rghtEnv = '' commandPattern = re.compile(leftEnv + r'(?P<action>[A-Za-z]*).?(?P<ticket>#[0-9]+(?:(?:[, &]*|[ ]?and[ ]?)#[0-9]+)*)' + rghtEnv) ticketPattern = re.compile(r'#([0-9]*)') versionPattern = re.compile(r"\(version[ ]+(?P<version>([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+))\)") class CommitHook: _supported_cmds = {'close': '_cmdClose', 'closed': '_cmdClose', 'closes': '_cmdClose', 'fix': '_cmdClose', 'fixed': '_cmdClose', 'fixes': '_cmdClose', 'addresses': '_cmdRefs', 're': '_cmdRefs', 'references': '_cmdRefs', 'refs': '_cmdRefs', 'see': '_cmdRefs'} def __init__(self, project=options.project, author=options.user, rev=options.rev, msg=options.msg, url=options.url, encoding=options.encoding): msg = to_unicode(msg, encoding) self.author = author self.rev = rev self.msg = "(In [%s]) %s" % (rev, msg) self.now = int(time.time()) self.env = open_environment(project) if url is None: url = self.env.config.get('project', 'url') self.env.href = Href(url) self.env.abs_href = Href(url) cmdGroups = commandPattern.findall(msg) tickets = {} for cmd, tkts in cmdGroups: funcname = CommitHook._supported_cmds.get(cmd.lower(), '') if funcname: for tkt_id in ticketPattern.findall(tkts): func = getattr(self, funcname) tickets.setdefault(tkt_id, []).append(func) for tkt_id, cmds in tickets.iteritems(): try: db = self.env.get_db_cnx() ticket = Ticket(self.env, int(tkt_id), db) for cmd in cmds: cmd(ticket) # determine sequence number... cnum = 0 tm = TicketModule(self.env) for change in tm.grouped_changelog_entries(ticket, db): if change['permanent']: cnum += 1 # get the version number from the checkin... and update the ticket with it. version = versionPattern.search(msg) if version != None and version.group("version") != None: ticket['fixed_in_ver'] = version.group("version") ticket.save_changes(self.author, self.msg, self.now, db, cnum+1) db.commit() tn = TicketNotifyEmail(self.env) tn.notify(ticket, newticket=0, modtime=self.now) except Exception, e: # import traceback # traceback.print_exc(file=sys.stderr) print>>sys.stderr, 'Unexpected error while processing ticket ' \ 'ID %s: %s' % (tkt_id, e) def _cmdClose(self, ticket): ticket['status'] = 'closed' ticket['resolution'] = 'fixed' def _cmdRefs(self, ticket): pass if __name__ == "__main__": if len(sys.argv) < 5: print "For usage: %s --help" % (sys.argv[0]) else: CommitHook()
I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output. I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output) from subprocess import Popen, PIPE, STDOUT p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE, stderr = STDOUT, shell = True) for line in p.stdout: print line.replace('\n', '') After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered. At this point I was starting to grasp for straws, so I wrote the following output loop: while True: try: print p.stdout.next().replace('\n', '') except StopIteration: break but got the same result. Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?
Learning From Mistakes 12/13/2000 Shortly after I wrote my news article on the Python wiki program MoinMoin, Jürgen Hermann announced a new version with this notice: This is a security update, which explains the short release cycle. It replaces someexec()calls by__import__(), which is much safer (or actually, safe in contrast to totally unsafe). IF YOU HAVE INSTALLED RELEASE 0.5 OR 0.6, UPGRADE NOW! Because of the short release cycle, the code couldn't be very different between 0.6 and 0.7. That would make changes easy to spot. I love learning opportunities. I got a copy of 0.6 to compare with 0.7. There are several places in MoinMoin where Hermann wanted to use the same code but choose between different libraries of functions to be used by that code. This is called a plug-in. Hermann uses plug-ins to select formatters based on mime-type, or to select a particular parser or extension macro set based on the type of information submitted from a form. The name of the plug-in module to use is taken from information passed by CGI. For example, here is the code used to import a formatter: exec("from formatter.%s import Formatter" % (string.replace(mimetype, '/', '_'),)) However, mimetype is taken from information passed to the program from an untrustworthy outside source. This may not seem like a big security leak, but any leak is a dangerous thing. Someone could monkey with the string passed to mimetype. The exec function will execute whatever code is passed to it. Using exec, you might unwittingly execute a dangerous block of code. Here is how Hermann fixed it. He replaced these exec strings with this utility function def importName(modulename, name): """ Import a named object from a module in the context of this function, i.e. use fully qualified module paths. Return None on failure. """ try: module = __import__(modulename, globals(), locals(), [name]) except ImportError: return None return vars(module)[name] The __import__ function only imports the specified code. If the mimetype (passed to this function as modulename) were manipulated now, you would only get an import error, not a surprise intruder. It's errors like these that the taint mode in Perl was designed to catch. In this paranoid mode, information received from outside the program, like the string assigned to mimetype in this case, is rejected as tainted unless it is first checked by a regular expression. It throws in an extra step designed to make you stop and think before leaving the barn door open. While Python doesn't have an equivalent to taint mode, it does have a module to help you restrict what might be dangerous code, the restricted execution or RExec module. The RExec module is not needed in this instance because Hermann only needs to import installed code, not evaluate user supplied code. For this purpose, the __import__ function does nicely. If, however, you work with a lot of CGI, you should study up on RExec. Stephen Figgins administrates Linux servers for Sunflower Broadband, a cable company. Read more Python News columns. Discuss this article in the O'Reilly Network Python Forum. Return to the Python DevCenter.
According to the documentation, they're pretty much interchangeable. Is there a stylistic reason to use one over the other? I like to use double quotes around strings that are used for interpolation or that are natural language messages, and single quotes for small symbol-like strings, but will break the rules if the strings contain quotes, or if I forget. I use triple double quotes for docstrings and raw string literals for regular expressions even if they aren't needed. For example: LIGHT_MESSAGES = { 'English': "There are %(number_of_lights)s lights.", 'Pirate': "Arr! Thar be %(number_of_lights)s lights." } def lights_message(language, number_of_lights): """Return a language-appropriate string reporting the light count.""" return LIGHT_MESSAGES[language] % locals() def is_pirate(message): """Return True if the given message sounds piratical.""" return re.search(r"(?i)(arr|avast|yohoho)!", message) is not None Quoting the official docs at https://docs.python.org/2.0/ref/strings.html: So there is no difference. Instead, people will tell you to choose whichever style that matches the context, I prefer I'm with Will: I'll stick with that even if it means a lot of escaping. I get the most value out of single quoted identifiers standing out because of the quotes. The rest of the practices are there just to give those single quoted identifiers some standing room. If the string you have contains one, then you should use the other. For example, If your code is going to be read by people who work with C/C++ (or if you switch between those languages and Python), then using The Python code I've seen in the wild tends to favour Triple quoted comments are an interesting subtopic of this question. PEP 257 specifies triple quotes for doc strings. I did a quick check using Google Code Search and found that triple double quotes in Python are about 10x as popular as triple single quotes -- 1.3M vs 131K occurrences in the code Google indexes. So in the multi line case your code is probably going to be more familiar to people if it uses triple double quotes. For that simple reason, I always use double quotes on the outside. Speaking of fluff, what good is streamlining your string literals with ' if you're going to have to use escape characters to represent apostrophes? Does it offend coders to read novels? I can't imagine how painful high school English class was for you! Python uses quotes something like this: mystringliteral1="this is a string with 'quotes'" mystringliteral2='this is a string with "quotes"' mystringliteral3="""this is a string with "quotes" and more 'quotes'""" mystringliteral4='''this is a string with 'quotes' and more "quotes"''' mystringliteral5='this is a string with \"quotes\"' mystringliteral6='this is a string with \042quotes\042' mystringliteral6='this is a string with \047quotes\047' print mystringliteral1 print mystringliteral2 print mystringliteral3 print mystringliteral4 print mystringliteral5 print mystringliteral6 Which gives the following output: I use double quotes in general, but not for any specific reason - Probably just out of habit from Java. I guess you're also more likely to want apostrophes in an inline literal string than you are to want double quotes. Personally I stick with one or the other. It doesn't matter. And providing your own meaning to either quote is just to confuse other people when you collaborate. It's probably a stylistic preference more than anything. I just checked PEP 8 and didn't see any mention of single versus double quotes. I prefer single quotes because its only one keystroke instead of two. That is, I don't have to mash the shift key to make single quote. PHP makes the same distinction as Perl: content in single quotes will not be interpreted (not even \n will be converted), as opposed to double quotes which can contain variables to have their value printed out. Python does not, I'm afraid. Technically seen, there is no $ token (or the like) to separate a name/text from a variable in Python. Both features make Python more readable, less confusing, after all. Single and double quotes can be used interchangeably in Python. I just use whatever strikes my fancy at the time; it's convenient to be able to switch between the two at a whim! Of course, when quoting quote characetrs, switching between the two might not be so whimsical after all... Your team's taste or your project's coding guidelines. If you are in a multilanguage environment, you might wish to encourage the use of the same type of quotes for strings that the other language uses, for instance. Else, I personally like best the look of ' None as far as I know. Although if you look at some code, " " is commonly used for strings of text (I guess ' is more common inside text than "), and ' ' appears in hashkeys and things like that. I aim to minimize both pixels and surprise. I typically prefer Perhaps it helps to think of the pixel minimization philosophy in the following way. Would you rather that English characters looked like I use double quotes because I have been doing so for years in most languages (C++, Java, VB…) except Bash, because I also use double quotes in normal text and because I'm using a (modified) non-English keyboard where both characters require the shift key. example : f = open('c:\word.txt', 'r') f = open("c:\word.txt", "r") f = open("c:/word.txt", "r") f = open("c:\\\word.txt", "r") Results are the same =>> no, they're not the same.A single backslash will escape characters. You just happen to luck out in that example because If you want to use single backslashes (and have them interpreted as such), then you need to use a "raw" string. You can do this by putting an ' im_raw = r'c:\temp.txt' non_raw = 'c:\\temp.txt' another_way = 'c:/temp.txt' As far as paths in Windows are concerned, forward slashes are interpreted the same way. Clearly the string itself is different though. I wouldn't guarantee that they're handled this way on an external device though.
For a project that I’m considering I’ve spent a few hours looking into using Python to access GMail. There’s a nice Python library called libgmail, but it’s a bit overkill since all I want is to see how many unread emails I have. After a bit of searching I found ot that there’s an atom feed that can be used to do exactly that, https://mail.google.com/mail/feed/atom. It uses basic authentication, your GMail username and password. So, I started looking at Python and HTTP. urllib2 seemed to fit the bill. It has a class called HTTPBasicAuthHandler to do the authentication and everything. I used the following code (entered in ipython of course): import urllib2 req = urllib2.Request('https://mail.google.com/mail/feed/atom') try: h = urllib2.urlopen(req) except IOError, e: pass e.headers['www-authenticate'] It should produce something like 'BASIC realm="New mail feed"'. Now we can do the proper connection, pull down the atom entry. I opted to use elementtree since it’s so easy to use. Here’s the full code I put together for this little experiment: import urllib2 from elementtree.ElementTree import fromstring ah = urllib2.HTTPBasicAuthHandler() ah.add_password('New mail feed', 'https://mail.google.com', \ 'user@gmail.com', 'password') op = urllib2.build_opener(ah) urllib2.install_opener(op) res = urllib2.urlopen('https://mail.google.com/mail/feed/atom') lines = ''.join(res.readlines()) e = fromstring(lines) fc = e.find('{http://purl.org/atom/ns#}fullcount') print 'You have %i unread mail(s) in your GMail account' % int(fc.text)
Lets say I have path to a module in a string module_to_be_imported = 'a.b.module' How can I import it ? >>> m = __import__('xml.sax') >>> m.__name__ 'xml' >>> m = __import__('xml.sax', fromlist=['']) >>> m.__name__ 'xml.sax' You can use the build-in import sys myconfigfile = sys.argv[1] try: config = __import__(myconfigfile) for i in config.__dict__: print i except ImportError: print "Unable to import configuration file %s" % (myconfigfile,) For more information, see: x = __import__('a.b.module', fromlist=[''])
Lesson Goals In this two-part lesson, we will build on what you’ve learned aboutWorking with Webpages, learning how to remove the HTML markup fromthe webpage of Benjamin Bowsey’s 1780 criminal trial transcript. Wewill achieve this by using a variety of string operators, string methodsand close reading skills. We introduce looping and branching so thatprograms can repeat tasks and test for certain conditions, making itpossible to separate the content from the HTML tags. Finally, we convertcontent from a long string to a list of words that can later be sorted,indexed, and counted. The Challenge To get a clearer picture of the task ahead, open theobo-t17800628-33.html file that you created in Working with WebPages (or download and save the trial if you do not already have acopy), then look at the HTML source by clicking onTools -> Web Developer -> Page Source. As you scroll through thesource code you’ll notice that there are a few HTML tags mixed in withthe text. Because this is a printable version there is far less HTMLthan you will find on the other versions of the transcript (see theHTML and XML versions to compare). While not mandatory, werecommend that at this point you take the W3 Schools HTML tutorialto familiarize yourself with HTML markup. If your work often requiresthat you remove HTML markup, it will certainly help to be able tounderstand it when you see it. Files Needed For This Lesson obo-t17800628-33.html If you do not have these files, you can download programming-historian-2, the (zip) file from the previous lesson. Devising an Algorithm Since the goal is to get rid of the HTML, the first step is to create analgorithm that returns only the text (minus the HTML tags) of thearticle. An algorithm is a procedure that has been specified in enoughdetail that it can be implemented on a computer. It helps to write youralgorithms first in plain English; it’s a great way to outline exactlywhat you want to do before diving into code. To construct this algorithmyou are going to use your close reading skills to figure out a way tocapture only the textual content of the biography. Looking at the source code of obo-t17800628-33.html you will noticethe actual transcript does not start right away. Instead there are acouple of HTML tags and some citation information. In this case: <div style="font-family:serif;"><i>Old Bailey Proceedings Online</i> (www.oldbaileyonline.org, version 6.0, 01 July 2011), June 1780, trial of BENJAMIN BOWSEY (t17800628-33).<hr/><h2>BENJAMIN BOWSEY... We are only interested in the transcript itself, not the extra metadata contained in the tags. However, you will notice that the end of the metadata corresponds with the start of the transcript. This makes the location of the metadata a potentially useful marker for isolating the transcript text. At a glance, we can see that the metadata ends with two HTML tags:<hr/><h2>. We might be able to use those to find the starting pointof our transcript text. We are lucky in this case because it turns outthat these tags are a reliable way to find the start of transcript textin the printable versions (if you want, take a look at a few otherprintable trials to check). We are also lucky because other than a fewHTML tags at the end of the transcript, there is no further informationon the page. Had there been other unrelated content, we would take asimilar approach and look for some way of isolating the end of thedesired text. Well-formatted websites will almost always have someunique way of signalling the end of the content. You often just need tolook closely. The next thing that you want to do is strip out all of the HTML markup that remains mixed in with the content. Since you know HTML tags are always found between matching pairs of angle brackets, it’s probably a safe bet that if you remove everything between angle brackets, you will remove the HTML and be left only with the transcript. Note that we are making the assumption that the transcript will not contain the mathematical symbols for “less than” or “greater than.” If Bowsey was a mathematician, this assumption would not be as safe. The following describes our algorithm in words. To isolate the content: Download the transcript text Search the HTML for and store the location of <hr/><h2> Save everything after the <hr/><h2>tags to a string:pageContents At this point we have the trial transcript text, plus HTML markup. Next: Look at every character in the pageContentsstring, one character at a time If the character is a left angle bracket (<) we are now inside a tag so ignore each following character If the character is a right angle bracket (>) we are now leaving the tag; ignore the current character, but look at each following character If we’re not inside a tag, append the current character to a newvariable: text Finally: Split the text string into a list of individual words that can later be manipulated further. Isolating Desired Content The following step uses Python commands introduced in the ManipulatingStrings in Python lesson to implement the first half of thealgorithm: removing all content before the <hr/><h2> tags. To recap,the algorithm was as follows: Download the transcript text Search the HTML for and store the location of <hr/><h2> Save everything after the <hr/><h2>tags to a string:pageContents To achieve this, you will use the find string method and create a new substring containing only the desired content using the index as start point for the substring. As you work, you will be developing separate files to contain your code.One of these will be called obo.py (for “Old Bailey Online”). Thisfile is going to contain all of the code that you will want to re-use;in other words, obo.py is a module. We discussed the idea of modulesin Code Reuse and Modularity when we saved our functions togreet.py. Create a new file named obo.py and save it to yourprogramming-historian directory. We are going to use this file to keepcopies of the functions needed to process The Old Bailey Online. Type orcopy the following code into your file. # obo.py def stripTags(pageContents): startLoc = pageContents.find("<hr/><h2>") pageContents = pageContents[startLoc:] return pageContents Create a second file, trial-content.py, and save the program shownbelow. # trial-content.py import urllib2, obo url = 'http://www.oldbaileyonline.org/print.jsp?div=t17800628-33' response = urllib2.urlopen(url) HTML = response.read() print obo.stripTags(HTML) When you run trial-content.py it will get the web page for Bowsey’strial transcript, then look in the obo.py module for the stripTagsfunction. It will use that function to extract the stuff after the<hr/><h2> tags. With any luck, this should be the textual content ofthe Bowsey transcript, along with some of HTML markup. Don’t worry ifyour Command Output screen ends in a thick black line. Komodo Edit’soutput screen has a maximum number of characters it will display, afterwhich characters start literally writing over one another on the screen,giving the appearance of a black blob. Don’t worry, the text is in thereeven though you cannot read it; you can cut and paste it to a text fileto double check. Let’s take a moment to make sure we understand how trial-contents.pyis able to use the functions stored in obo.py. The stripTags functionthat we saved to obo.py requires one argument. In other words, to runproperly it needs one piece of information to be supplied. Recall thetrained dog example from a previous lesson. In order to bark, the dogneeds two things: air and a delicious treat. The stripTags function inobo.py needs one thing: a string called pageContents. But you’llnotice that when we call stripTags in the final program(trialcontents.py) there’s no mention of “pageContents“. Instead thefunction is given HTML as an argument. This can be confusing to manypeople when they first start programming. Once a function has beendeclared, we no longer need to use the same variable name when we callthe function. As long as we provide the right type of argument,everything should work fine, no matter what we call it. In this case wewanted pageContents to use the contents of our HTML variable. You couldhave passed it any string, including one you input directly between theparentheses. Try rerunning trial-content.py, changing the stripTagsargument to “I am quite fond of dogs” and see what happens. Note thatdepending on how you define your function (and what it does) yourargument may need to be something other than a string: an integer forexample. Suggested Reading Lutz, Learning Python Ch. 7: Strings Ch. 8: Lists and Dictionaries Ch. 10: Introducing Python Statements Ch. 15: Function Basics Code Syncing To follow along with future lessons it is important that you have the right files and programs in your programming-historian directory. At the end of each chapter you can download the programming-historian zip file to make sure you have the correct code. Note we have removed unneeded files from earlier lessons. Your directory may contain more files and that’s ok! programming-historian-2 (zip)
degolarson RESOLU connexion ok mais ouverture pages web lente ou imopssible bonjour j'ai déjà posté ici : http://forum.ubuntu-fr.org/viewtopic.php?id=21092 mais le pb était peut être un peu différent quand je tape une recherche dans Google ou bien dans ma barre personnelle d'onglets l'ouverture de la page est souvent lente, ou bien je tombe sur ce message : firefox n' pas pu ouvrir .. alors je stoppe la recherche, ou bien dans le 2ème cas je ferme la fenêtre et je retente et là ça marche quasi à tous les coups je ne parle pas bien sur de l'erreur 404 not found bizarre non ? merci Dernière modification par degolarson (Le 06/12/2012, à 21:09) xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne sechanbask Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible en ligne de commande lorsque tu lances un ping google.fr il faut attendre quelques secondes puis faire CTL+C. Qu'est-ce que ça donne ? Dernière modification par sechanbask (Le 07/10/2012, à 21:36) Hors ligne degolarson Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible voilà je n'ai copiè que quelques lignes il y en a des dizaines et ça change sans arrêt ! s 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=82 ttl=48 time=49.0 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=83 ttl=48 time=39.8 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=84 ttl=48 time=1073 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=85 ttl=48 time=86.0 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=86 ttl=48 time=455 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=87 ttl=48 time=140 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=88 ttl=48 time=584 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=89 ttl=48 time=63.2 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=90 ttl=48 time=67.2 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=91 ttl=48 time=52.8 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=92 ttl=48 time=492 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=93 ttl=48 time=58.7 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=94 ttl=48 time=37.6 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=96 ttl=48 time=1466 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=97 ttl=48 time=458 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=98 ttl=48 time=234 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=99 ttl=48 time=44.4 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=100 ttl=48 time=37.6 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=101 ttl=48 time=37.0 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=102 ttl=48 time=45.4 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=103 ttl=48 time=37.9 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=104 ttl=48 time=37.4 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=105 ttl=48 time=37.3 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=106 ttl=48 time=39.1 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=107 ttl=48 time=36.5 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=108 ttl=48 time=37.7 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=109 ttl=48 time=36.6 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=110 ttl=48 time=36.7 ms 64 bytes from wb-in-f94.1e100.net (74.125.132.94): icmp_req=111 ttl=48 time=240 ms 64 byte xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Bonsoir, réseau très instable non ? wifi ? Ça fait quand même de vrai écarts. 1466107358449245845524023414086.067.263.258.752.849.045.444.439.839.137.937.737.637.637.437.337.036.736.636.5 Hors ligne degolarson Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible oui je suis en wifi , faudrait-il faire un test en liaison ethernet ? depuis hier et ausssi la semaine dernière il y a des déconnexions de quelques secondes voire 1 minute, mais le pb date d'avant ceci dit j'ai l'impression qu'il est plus fréquent ce soir xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible regarde autour de toi si il y a de nombreux réseaux. sudo iwlist scan s'ils sont tous sur le même canal, c'est possible que ça interfère, surtout le soir. fait un essai en filaire, si c'est beaucoup mieux change le canal wifi de ta box|routeur ------------------------ EDIT sudo, pas sud, faute de frappe Dernière modification par xavier4811 (Le 07/10/2012, à 22:22) Hors ligne Ubuntu1988 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Salut Un début d'indice ? (vu ta signature) J'ai perdu ! :( Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Si c'est le cas : Solution Hors ligne degolarson Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible pour xavier : wlan0 Scan completed : Cell 01 - Address: AE:8E:87:99:B4:00 Channel:11 Frequency:2.462 GHz (Channel 11) Quality=61/70 Signal level=-49 dBm Encryption key:on ESSID:"degolarson" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s 18 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s Mode:Master Extra:tsf=0000000aad24616f Extra: Last beacon: 564ms ago IE: Unknown: 000A6465676F6C6172736F6E IE: Unknown: 010882848B961224486C IE: Unknown: 03010B IE: Unknown: 2A0104 IE: Unknown: 32040C183060 IE: Unknown: 2D1A6E1017FFFF000001000000000000000000000000000000000000 IE: Unknown: 3D160B070500000000000000000000000000000000000000 IE: Unknown: 3E0100 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : TKIP CCMP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 IE: Unknown: 7F0101 IE: Unknown: DD07000C4300000000 IE: Unknown: DD1E00904C336E1017FFFF000001000000000000000000000000000000000000 IE: Unknown: DD1A00904C340B070500000000000000000000000000000000000000 Cell 02 - Address: 00:17:33:2D:3D:58 Channel:1 Frequency:2.412 GHz (Channel 1) Quality=50/70 Signal level=-60 dBm Encryption key:on ESSID:"NEUF_3D54" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=000006de53c351f4 Extra: Last beacon: 1396ms ago IE: Unknown: 00094E4555465F33443534 IE: Unknown: 010882848B962430486C IE: Unknown: 030101 IE: Unknown: 2A0104 IE: Unknown: 2F0104 IE: Unknown: 32040C121860 IE: Unknown: DD090010180201F0040000 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 Cell 03 - Address: 00:1F:9F:C0:78:CD Channel:6 Frequency:2.437 GHz (Channel 6) Quality=49/70 Signal level=-61 dBm Encryption key:on ESSID:"Bbox-2D3A38" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=000000d4196416ff Extra: Last beacon: 852ms ago IE: Unknown: 000B42626F782D324433413338 IE: Unknown: 010882848B962430486C IE: Unknown: 030106 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK IE: Unknown: 32040C121860 IE: Unknown: DD09001018020100040000 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (1) : TKIP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101080003A4000027A4000042435E0062322F00 Cell 04 - Address: 02:1F:9F:C0:78:CE Channel:6 Frequency:2.437 GHz (Channel 6) Quality=49/70 Signal level=-61 dBm Encryption key:off ESSID:"Bouygues Telecom Wi-Fi" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=000000d4196410f1 Extra: Last beacon: 856ms ago IE: Unknown: 0016426F7579677565732054656C65636F6D2057692D4669 IE: Unknown: 010882848B962430486C IE: Unknown: 030106 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: Unknown: 32040C121860 IE: Unknown: DD09001018020100040000 IE: Unknown: DD180050F2020101080003A4000027A4000042435E0062322F00 Cell 05 - Address: 06:19:70:42:CC:81 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=36/70 Signal level=-74 dBm Encryption key:off ESSID:"orange" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000001f6a3400e Extra: Last beacon: 848ms ago IE: Unknown: 00066F72616E6765 IE: Unknown: 010882848B960C121824 IE: Unknown: 030106 IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101840003A4000027A4000042435E0062322F00 IE: Unknown: 2D1A4C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D1606001B00000000000000000000000000000000000000 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010020000000 IE: Unknown: 0706465220010D14 Cell 06 - Address: 00:19:70:42:CC:81 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=41/70 Signal level=-69 dBm Encryption key:on ESSID:"Livebox-6330" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000001f6a3f9e7 Extra: Last beacon: 848ms ago IE: Unknown: 000C4C697665626F782D36333330 IE: Unknown: 010882848B960C121824 IE: Unknown: 030106 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101840003A4000027A4000042435E0062322F00 IE: Unknown: 2D1A4C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D1606001B00000000000000000000000000000000000000 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010020000000 IE: Unknown: 0706465220010D14 IE: Unknown: DD880050F204104A000110104400010210570001001041000100103B00010310470010C8CD72866330C8CD728663307286633010210005536167656D102300084C697665626F7832102400084C697665626F78321042000A333231303332314450341054000800060050F20400011011000D4C697665626F78322D36333330100800020086103C000101 Cell 07 - Address: 4C:AC:0A:1F:D5:3C Channel:11 Frequency:2.462 GHz (Channel 11) Quality=31/70 Signal level=-79 dBm Encryption key:on ESSID:"Livebox-Sparda" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=00000006f9c44180 Extra: Last beacon: 532ms ago IE: Unknown: 000E4C697665626F782D537061726461 IE: Unknown: 010882848B960C121824 IE: Unknown: 03010B IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F20201018B0003A4000027A4000042435E0062322F00 IE: Unknown: 2D1A4C101BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D160B000900000000000000000000000000000000000000 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010000000000 IE: Unknown: 0706465220010D14 IE: Unknown: DD920050F204104A0001101044000102103B00010310470010000000000000100000004CAC0A1FD53C102100035A54451023000F4C697665626F7820465454482076321024001A6164363833365F4144534C5F5048595F30783030303030303030104200074C4D5A313130381054000800060050F20400011011000D465446525341332E363332313110080002008A103C000101 Cell 08 - Address: 18:F4:6A:95:8F:12 Channel:11 Frequency:2.462 GHz (Channel 11) Quality=27/70 Signal level=-83 dBm Encryption key:on ESSID:"NUMERICABLE-A061" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=0000019ab25c5007 Extra: Last beacon: 484ms ago IE: Unknown: 00104E554D4552494341424C452D41303631 IE: Unknown: 010882848B962430486C IE: Unknown: 03010B IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: Unknown: 32040C121860 IE: Unknown: DD090010180202F0000000 IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00 Cell 09 - Address: AE:8E:87:99:B4:01 Channel:11 Frequency:2.462 GHz (Channel 11) Quality=59/70 Signal level=-51 dBm Encryption key:off ESSID:"FreeWifi" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s 18 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s Mode:Master Extra:tsf=0000000aad246b28 Extra: Last beacon: 560ms ago IE: Unknown: 00084672656557696669 IE: Unknown: 010882848B961224486C IE: Unknown: 03010B IE: Unknown: 2A0104 IE: Unknown: 32040C183060 IE: Unknown: 2D1A6E1017FFFF000001000000000000000000000000000000000000 IE: Unknown: 3D160B070500000000000000000000000000000000000000 IE: Unknown: 3E0100 IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 IE: Unknown: 7F0101 IE: Unknown: DD07000C4300000000 IE: Unknown: DD1E00904C336E1017FFFF000001000000000000000000000000000000000000 IE: Unknown: DD1A00904C340B070500000000000000000000000000000000000000 Cell 10 - Address: AE:8E:87:99:B4:02 Channel:11 Frequency:2.462 GHz (Channel 11) Quality=59/70 Signal level=-51 dBm Encryption key:on ESSID:"FreeWifi_secure" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s 18 Mb/s; 36 Mb/s je suis allé voir l'article sur le pb Free-Google, c'est en effet embêtant pour Duckduck , Firefox dit qu'il faut des plugins , dois je les télécharger ? xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne Ubuntu1988 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Ou changer de FAI ... J'ai perdu ! :( Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Comme d'hab, presque tout le monde est sur le canal 11, quelques échappés sur le 6 mais c'est pas trop envahi encore. 11 2.462 GHz (Channel 11) 1 2.412 GHz (Channel 1) 6 2.437 GHz (Channel 6) 6 2.437 GHz (Channel 6) 6 2.437 GHz (Channel 6) 6 2.437 GHz (Channel 6) 11 2.462 GHz (Channel 11) 11 2.462 GHz (Channel 11) 11 2.462 GHz (Channel 11) 11 2.462 GHz (Channel 11) pour DDG, les plugins sont pas nécessaires pour afficher les pages. à moins que tu veuille ajouter une extension a FF @Ubuntu1988 : Je sent comme un je ne sais quoi contre Free non ? Hors ligne Ubuntu1988 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible @Ubuntu1988 : Je sent comme un je ne sais quoi contre Free non ? Disons que la gueguerre Google / Free sur qui paye les tuyaux, ça commence à gonfler et comme toujours, c'est nous qui sommes lésés au final J'ai perdu ! :( Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible Disons que la gueguerre Google / Free sur qui paye les tuyaux, ça commence à gonfler et comme toujours, c'est nous qui sommes lésés au final Entièrement d'accord. C'est malheureusement toujours les mêmes qui trinquent. Hors ligne degolarson Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible bonjour faudrait il passer sur canal 6 ou un autre pour éviter l'encombrement du 11 ? xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne xavier4811 Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible N'importe lequel pas trop utilisé (voire pas du tout) et si possible en dessous de 11 avec pas mal de chips Broadcom. Je dis pas que tu verras un miracle mais au moins ça peut pas dégrader. Hors ligne degolarson Re : RESOLU connexion ok mais ouverture pages web lente ou imopssible bonjour comme ça recommençait depuis hier à très mal connecter, j'ai testé sous XP et là aucun problème (il rend service, des fois, old Bill..) revenu sous xubuntu le filaire marche aussi , j'ai vu avec la commande sudo iwlist scan que j'utilisais le canal 1 comme plusieurs autres réseaux de mes voisins d'immeuble pourtant j'avais choisi le canal 5 sur l'interface de gestion de la freebox explication : j'avais laissé "choix automatique du canal" alors je viens de rechoisir canal 5 et désactiver choix auto ça a l'air de marcher beaucoup mieux, à suivre Dernière modification par degolarson (Le 09/11/2012, à 14:01) xp / Xubuntu 12.04.5 32b 3.2.0-67-generic xfce 4.8.6 Laptop AsusPRO79 I 250Go / 2.9Go 2 x Pentium (R) Dual-Core T4200 2GHz année 2009 Cartes : IntelMobile4SeriesGMA500, wifi Atheros 9285 Freebox V5 Doucement, nous sommes pressés ! Hors ligne
I try to explain here what I am trying to do: I have one shapefile and one independent dbf table with the same fields. In the dbf table all the fields are populated but in the shapefile attribute table just one, lets name it "OneField". What I want to do is to check that the values from "OneField"(Shapefile) are the same as the values in "OneField"(dbf table) and if so, to populate the remaining empty fields in the shapefile attribute table with the ones in the independent dbf table. At the moment I am trying just to copy the values from the independent dbf table to the shapefile attribute table but I am stuck.(when I run this code I get a message that pythonwin stopped working and nothing happens to the tables). Can you give me a hand please? Here is the code: import arcpy table = "link/to/table.dbf" fc = "link/to/shapefile.shp" # Create a search cursor rowsTable = arcpy.SearchCursor(table) # Create an update cursor rowsFc = arcpy.UpdateCursor(fc) for row in rowsTable: row = row.getValue("OneField") valueTable = row for row in rowsFc: row = row.setValue("OneField", valueTable) rowsFc.updateRow(row) row = rowsFc.next() row = rowsTable.next() del row, rowsFc, rowsTable Thank you very much
After several months thinking about it, and just two requests, I finally decidedto publish satyr's code. I decided to use github because I already switched tosatyr from hg to git, mainly for testing and understanding it. I think Ican live with hg, althought branch management in git seems to be given morethought and a good implementation. So, without further ado: satyr in github Remember, it's still a test software, by no means polished or ready for humanconsumption, and with very low development force. Still, I think it has somenice features, like interchangeable skins and a well defined backend, D-Bussupport, quick tag edition, reasonable collection managment, and, thanks toPhonon, almost-gapless playback and things like «stop after playing thecurrent this file» (but not «after any given file» yet). In Debian Sid it mostly works only with the GStreamer backend; I haven't triedthe xine one and I know VLC does not emit a signal needed for queueing thenext song, so you have to press «next» after each song. AFAIK this is fixedupstream. satyr-0.3.2 "I should install my own food" is out. The Changelog is not very impressive: We can save the list of queued Songs on exit and load it at startup. The queue position is not presented when editing the title of a Song that is queued. Setting the track number works now (was a bug). Fixed a bug in setup.py. but the last one is somewhat important (Thanks Chipaca). Also, 2 months ago, I made the satyr-0.3.1 "freudian slip" release, when user 'dglent' from http://kde-apps.org/ found a packaging bug. It was not only a bugfixing revision, it also included new features: nowPlaying(), exported via dbus. 'now playing' plugin for irssi. Changes in tags is copied to selected cells (in the same column). This allows 'Massive tag writing'. Fixed "NoneType has no attribute 'row'" bug, but this is just a workaround. Forgot to install complex.ui. Now go get it! I figured several things after the last/first release. One of those is that one can't try to pull a beta of your first releases. Betas are for well stablished pieces of code which are supposed to be rock solid; initial releases not. Another thing I figured out (or actually remembered) is that old saying: release early, release often. So instead of a 0.1 'official' release, where all the bugs are nailed down intheir coffins and everything is as peachy and rock solid as a peachy huge rock(like the Mount Fitz Roy[1],for instance), and only 13 days later than the initial release, we get anothermessy release: satyr-0.2, codenamed "I love when two branches cometogether", is out. This time we got that pharaonic refactoring I mentioned in the last release, which means that skins are very independient from the rest of the app, which is good for skins developers and the core developers, even if those sets are equal and only contain me. From the user point of view, the complex skin is nicer to see (column widths and headers, OMG!) and it also allows tag editing. Yes, because we have tag editing! Right now the only way to fire the edition is to start typing, which will erase previous data, but don't worry, I plan to nail that soon. At least it's usefull for filling up new tags. I also fixed the bug which prevented this skin to highlight which is being played. Lastly but not leastly, the complex skin has a seek bar, and the code got tons of cleanups. So, that's it. It's out, go grab it! [1] Right now I would consider satyr just a small peeble in a highway, onlynoticeable if some huge truck picks it up with its wheels and throws it to yourwindshield. But I plan to reach at least to be a sizable rock such as that onefound near one of the Vikings in Mars. Since a long ago I'm looking for a media player for listening my music. In that aspect I'm really exigent. Is not that I need lots of plugins and eye candy, no, I just need a media player that fits my way to listen music. How do I listen to music? All day long, for starters. I have a collection of.ogg files, which I normally listen in random mode. From time to time Iwant to listen certain song, so I either queue it or simply stop the currentone and start the one I want. Sometimes I enqueue several songs, which mightnot be related between them (maybe only in my head they are). I've been using Amarok, I really like its random albums feature; that is,I listen to a whole album, and when it finishes, another album is picked atrandom and I listen to all its songs. The last feature, a really importantone: My collection is my playlist and viceversa. I don't build playlists; ifI want to listen to certain songs I just queue them. One feature I like alsois a tag editor and the posibility to rearrange the songs based on its tags(with support for albums with songs from various authors, like OST's). Lastbut no least, reacting to new files in the collection is also well regarded. I used to use xmms. I still think it's a very good player for me, but itlacks utf support and doesn't react when I add songs to the collection. ThenI used Amarok, Juk, QuodLibet, Audacious (I was using it up totoday) and probably a couple more. None of them support all the features, sotoday, completely tired of this situation, I started writing my own. Icalled it Satyr. Another reason to do it is to play a little more withPyKDE. Talking about Python and KDE, I know the existence ofminirok, but it uses GStreamer, and I wanted to play with Phonon. So, what's different in this media player? If you think about it, if you have a CD (vinyl, cassettes maybe?) collection in your home, what you have is exactly that: a collection of Albums. Most media players manage Songs, grouping them in Albums and by Author too. Notably Amarok used to manage Albums with several artists (what's called a 'Various artists' Album), but since Amarok 2 it doesn't do it anymore, nor the queuing works. So the basic idea is exactly that: you have a Collection of Albums, most of them with songs from the same Author (and sometimes Featuring some other Authors[1]), but sometimes with Songs from different Authors. Of course I will try to implement all the features I mentioned above. Ok, enough introduction. This post was aimed to show some code, and that's what I'm going to do now. This afternoon I was testing the Phonon Python bindings, trying to make a script to play a simple file. This snippet works: # qt/kde related from <span class="createlink">PyKDE4</span>.kdecore import KCmdLineArgs, KAboutData, i18n, ki18n from <span class="createlink">PyKDE4</span>.kdeui import KApplication # from <span class="createlink">PyKDE4</span>.phonon import Phonon from <span class="createlink">PyQt4</span>.phonon import Phonon from <span class="createlink">PyQt4</span>.QtCore import SIGNAL media= Phonon.MediaObject () ao= Phonon.AudioOutput (Phonon.MusicCategory, app) print ao.outputDevice ().name () Phonon.createPath (media, ao) media.setCurrentSource (Phonon.MediaSource ("/home/mdione/test.ogg") media.play () app.connect (media, SIGNAL("finished ()"), app.quit) app.exec_ () Of course, this must be preceded by the bureaucratic creation of aKApplication, but it basically plays an ogg file and quits. You just haveto define a MediaObject as the source, an AudioOutput as the sink, andthen you createPath between them. As you can see, with Phonon you don'teven have to worry about where the output will be going: that is defined bythe system/user configuration. You only have to declare that yourAudioOutput is going to play Music (the second actual line of code). There are a couple of peculiarities with the Python bindings. First ofall, Phonon comes both with Qt and separately. The separate one has abinding in the PyKDE4 package, but it seems that it doesn't work verywell, so I used the PyQt binding. For that, I had to install thepython-pyqt4-phonon package. Second, the bindings don't support to callsetCurrentSource() with a string; you have to wrap it in a MediaSource.The original API supports it. Third, it seems that Phonon.createPlayer()is not supported by the bindings either, so I had to build the AudioOutputby hand. I don't care, it's just a couple lines more. This code also shows the name of the selected OutputDevice. I my machineit shows HDA Intel (STAC92xx Analog). In the following days I'll be posting more info about what comes out of this project. I will only reveal that right now the code has classes called Player, and Collection. It can scan a Collection from a path given in the command line and play all the files found there. Soon more features will come. [1] I'm not planing to do anything about it... yet. satyr got the possibility to queue songs for playing very early. At thatmoment there wasn't any GUI, so the only way to (de)queue songs was viadbus[1]. Once the GUI was there, we had to provide a way to queue songs. Assatyr aims to be fully usable only using the keyboard, the obvious way was tosetup some shortcut for the action. Qt and then KDE have a very nice API for defining 'actions' that can befired by the user. The ways to fire them include a shortcut, a menu entry or abutton in a toolbar. I decided to go with the KDE version, KAction. According to [thedocumentation](http://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKAction.html#_details), to create an action is just a matter of creatingthe action and to add it to an actionCollection(). The problem is that nowhereit says where this collection comes from. There's the KActionCollection class,but creating one and adding the actions to it seems to be not enough. If you instead follow thetutorial you'llsee that it refers to a KXmlGuiWindow, which I revised when I was desperatelylooking for the misterious actionCollection(). I don't know why, but thedocumentation generated by PyKDE does not include that method. All in all, thetutorial code works, so I just ported my MainWindow to KXmlGuiWindow: class MainWindow (KXmlGuiWindow): def __init__ (self, parent=None): KXmlGuiWindow.__init__ (self, parent) uipath= __file__[:__file__.rfind ('.')]+'.ui' UIMainWindow, _= uic.loadUiType (uipath) self.ui= UIMainWindow () self.ui.setupUi (self) [...] self.ac= KActionCollection (self) actions.create (self, self.actionCollection ()) and actions.create() is: def create (parent, ac): '''here be common actions for satyr skins''' queueAction= KAction (parent) queueAction.setShortcut (Qt.CTRL + Qt.Key_Q) ac.addAction ("queue", queueAction) # FIXME? if we don't do this then the skin will be able to choose whether it # wants the action or not. with this it will still be able to do it, but via # implementing empty slots queueAction.triggered.connect (parent.queue) Very simple, really. But then the action doesn't work! I tried different approaches, but none worked. The tutorial I mentioned talks about some capabilities of the KXmlGuiWindow;one of them, the ability to have the inclusion of actions in menues andtoolbars dumped and loaded from XML file (hence, the XML part of the classname), and that is handled by the setupGUI() method. From its documentation:«Configures the current windows and its actions in the typical KDE fashion.[...] Typically this function replaces createGUI()». In my case the GUI isalready built by that self.ui.setupUi() that you see up there, so I ignoredthis method. But the thing is that if you don't call this method, the actionswill not be properly hooked, so they don't work; hence, my problem. But justcalling it make the actions work! I'll check later what's the magic behind thismethod. For now just adding self.setupGUI() at the end of the __init__()was enough. So, that's it with actions. As a result I also get several things: a populatedmenu bar with Settings and Help options (but no File; that'll come laterwith standard actions, of which I'll talk about later, I think), with freereport bug, about satyr and configure shortcuts options, among others. Thelater does work but its state is not saved. That also will come in the same postthat standard actions. PD: First post from my new internet connection. Will satyr suffer from this?Only time will tell... [1] then you had to pick the index of the songs either by guessing or looking at the dump of the playlist. In my lastpostI said «The next step is to make my Player class to export its methodsvia DBus and that's it!». Well, tell you what: is not that easy. If youtry to inherit from QObject and dbus.service.Object you get this error: In [3]: class Klass (QtCore.QObject, dbus.service.Object): pass <span class="createlink">TypeError</span>: Error when calling the metaclass bases metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases This occurs when both ancestors have their own metaclasses. UnluckilyPython doesn't resolve it for you. The answer is to create a intermediatemetaclass which inherits from both metaclasses (which we can obtain withtype()) and make it the metaclass of our class. In code: class <span class="createlink">MetaPlayer</span> (type (QObject), type (dbus.service.Object)): """Dummy metaclass that allows us to inherit from both QObject and d.s.Object""" pass class Player (QObject, dbus.service.Object): __metaclass__= <span class="createlink">MetaPlayer</span> [...] Is that it now? Can I go and do my code? Unfortunately no. See this: qdbus org.kde.satyr / /player Cannot introspect object /player at org.kde.satyr: org.freedesktop.DBus.Python.KeyError (Traceback (most recent call last): File "/usr/lib/pymodules/python2.5/dbus/service.py", line 702, in _message_cb retval = candidate_method(self, *args, **keywords) File "/usr/lib/pymodules/python2.5/dbus/service.py", line 759, in Introspect interfaces = self._dbus_class_table[self.__class__.__module__ + '.' + self.__class__.__name__] <span class="createlink">KeyError</span>: '__main__.Player' ) This is the class dbus.service.Object complaining something else. It'sgetting late here and I'm tired, so I'll continue tomorrow. I must be crazy or something. Inestead of spending my last friday in Francefor this year partying in some bar I satyed home and produced anothersatyr release. The ? Here, have some: (De)Queueing Songs with the keyboard, with visual feedback. Remembers window size. Current song highlighted not with selection but with real color changes. We can select Songs. Right now useful only for queuing several songs at the same time. Workaround a bug in with the . Show the filepaths as much as possible in the user's encoding. Hitting F2 in a cell edits its contents. Silightly cleaner interface: don't show so many 0's. bug with perv/next: weren't wrapping around. Go get it from the Download area! I promise to party double tomorrorw... More than two months ago I globed about QStrings andpaths.The problem was this: my app accepts paths via command line, which areprocessed via KCmdLineOptions; which in turn converts everything toQStrings. What I wanted were paths, which are more like QByteArrays, notQStrings (because the latter have internally an unicode representation;more on that later). Including PyQt4 in the equation forced me to resortto QByteArray to get the path as a str instead of usingQString.constData() (PyQt4 doesn't export that function). But that'sonly the beginning of the problem. Take for instance this situation. I have a music collection that I've beenbuilding for years now (more that 10, I think). In the old times of thiscollection the filenames were encoded in iso-8859-1. Then the future cameand converted all my machines to utf-8. But only the software; thefilesystems were in one way or another inherited from system to system, frommachine to machine. So I ended with a mixture of utf and iso filenames, tothe point where I have a file whose filename is in iso, but the directorywhere it is is in utf. Yes, I know, it is a mess. But if I take any decentmedia player, I can play the file allright. That's because the filesystemknows nothing of encodings (otherwise it would reject badly encodedfilenames). I just spent last saturday making sure that satyr only stored filepaths instrs, not unicodes or QStrings. It took concentration, but having justa bunch of classes and only 3 or 4 points where the filepaths are managed itwasn't that difficult. Still, it took a day. But then, as I mentioned inthat post, Phonon the is not able to play such files... or so I thought. If you run satyr after executing export PHONON_XINE_DEBUG=1 you'll see alot of Phonon debug info in the console (not that there is another way torun satyr right now anyways). Among all that info you'll see lines such asthese two: void Phonon::Xine::XineStream::setMrl(const QByteArray&, Phonon::Xine::XineStream::StateForNewMrl) ... bool Phonon::Xine::XineStream::xineOpen(Phonon::State) xine_open succeeded for m_mrl = ... If you're sharp enough (I'm not; sandsmark from #phonon had to tell me)you'll note the mention of MRL's. MRL's are xine's URL for media. As anyURL, they can (and most of the time must) encode 'strange' characters withthe so-called "percent encoding". This means that no matter what encodingsthe different parts of a filepath is in, I just add file:// at thebeginning and then I can safely encode it scaping non-ascii characters to%xx representations... or that's what the theory says. One thing to note isthat the file:// part must not be scaped; xine complains that the filedoes not exist in that case. Looking for help in Qt's classes one can find QUrl and the already knownQByteArray. I can call QByteArray.toPercentEnconding() from my str andfeed that to QUrl.fromPercentEncoding() (which strangely returns aQString, which is exactly what we're avoiding) or QUrl.fromEncoded().But then the first function encodes too much, replacing :// with%3A%2F%2F. No fun. Ok, let's try creating a QByteArray with only the file:// and thenappend() the toPercentEncoding() of the path only. It works: <span class="createlink">PyQt4</span>.QtCore.QByteArray('file://%2Fhome%2Fmdione...%2F%C3%9Altimo%20bondi%20a%20Finisterre%2F07-%20La%20peque%F1a%20novia%20del%20carioca.wav') But then calling QUrl.fromEncoded() gives: <span class="createlink">PyQt4</span>.QtCore.QUrl("file://xn--/home/mdione.../ltimo bondi a finisterre/07- la pequea novia del carioca-wkmz60758d.wav") The URL got somehow puny-encoded,which of course xine doesn't recognize for local files. Another option is to create an empty QUrl, call setEncodedUrl() withthe to QUrl.StrictMode so we avoid 50 lines of code that starthere[1]that try to escape everything all over again (and I already had somedouble-or-even-triple-enconding nightmares parsing RSS/Atom feeds last year,thank you), but we get puny-encoded again (maybe it is 'pwny-encoded'?). Last resort: backtrack to the point were we created only one QByteArraywith the path and call toPercentEncoding(); feed that to the methodsetEncodedPath() of an empty QUrl. Then we add the last piece callingsetScheme('file') and we're ready! Of course we're not: <span class="createlink">PyQt4</span>.QtCore.QByteArray('file:%2Fhome%2Fmdione...%2F%C3%9Altimo%20bondi%20a%20Finisterre%2F07-%20La%20peque%F1a%20novia%20del%20carioca.wav') Notice the lack of the two // after file:? xine doesn't like it;hence, I don't either. Ok, this post got too long. I hope I can resolve this soon, I already spent too much time on it. At least a good part of it was expaining it, so others don't have to suffer the same as I did. BTW, satyr will shortly be released, whether I fix this bug or not. [1] Look at the size of that file! 6k lines to handle URL's! Who would say it was so difficult... Once more I'm remembered of how lucky I am to have this libraries at the tips of my fingers, yay! Update before even publishing: most of the numbers in the initial writing werealmost doubled. The problem was that distutils left a build directory when Itried either to install or to package satyr, I don't remember which, so thefiles found by the find commands below were mostly duplicated. I had to removethe directory and run all the commands again! I wanted to know some things about satyr's code, in particular some statisticsabout its lines of code. A first approach: $ find . -name '*.py' -o -name '*.ui' | xargs wc -l | grep total 2397 total Hmm, that's a lot, I don't remember wrinting so many lines. Beh, the comments, let's take them out: $ find . -name '*.py' -o -name '*.ui' | xargs egrep -v '^#' | wc -l2136 What about empty lines?: $ find . -name '*.py' -o -name '*.ui' | xargs egrep -v '^(#.*| *)$' | wc -l1764 Meeh, I didn't take out all the comment lines, only those lines startingwith #, which are mainly the license lines on each source file. I have to alsocount the comments in the middle of the code: $ find . -name '*.py' -o -name '*.ui' | xargs egrep -v '^( *#.*| *)$' | wc -l1475 And how much of those lines are actual code and not from some xml file describing user interface?: $ find . -name '*.py' | xargs egrep -v '^( *#.*| *)$' | wc -l1124 How much code means its 3 current skins?: $ find satyr/skins/ -name '*.py' | xargs egrep -v '^( *#.*| *)$' | wc -l 341 How much in the most complex one? $ egrep -v '^( *#.*| *)$' satyr/skins/complex.py | wc -l 182 All this numbers tell something: ~300 empty lines means that my code is not very tight. I already knew this: I like to break functions in secuential blocks of code, each one accomplishing a somehow atomic step towards the problem the function tries to solve. Almost 300 comment lines means my code is very well commented, even if a sixth of those comments are BUGs, TODOs, FIXMEs or HINTs: $ find . -name '*.py' | xargs egrep '^ *# (TODO|BUG|FIXME|HINT)' | wc -l56 Wow, more than I thought. Now, 1100 lines of actual code for all that satyraccomplishes today (playing almost any existing audio file format, progressivecollection scanning, lazy tag reading and also tag writing, 3 skins[4], handleseveral Collections, searching, random, stop after, saving some state, pickingup new songs added to the collection via the filesystem, queuing songs[1], dbusinterface[2], handle any encodings in filenames[3]... phew! and some minorfeatures more!) I think is quite impressive. Of course, doing all that in athousandsomething lines of code would be nearlyimpossible without PyQt4/Qt4, PyKDE4/KDE (something) 4[7], Tagpy/Tagliband finally Python itself. It's really nice to have such a nice framework towork with, really. [1] No user interface for this yet; shame on me. [2] ... which toguether with kalarm and qdbus make my alarm clock. [3] Almost all the support is in Phonon itself. [4] [5] Less than 800 if we don't count skins. [5] Yes, I add more footnotes as I readproof the posts :) [6] I skipped [6] :) [7] After the rebranding I don't know which is the proper name for the libraries, because I'm writing this post while very much offline, and TheDot does not publish the whole articles via rss, which I hate. One of the things I had to while developing satyr is building a model for aQListViewer. It should be straighforward from qt's documentation, but I founda couple of things that I would like to put in a post, specially because theredoesn't seem to be much models in PyQt4 easily found in the web. According to its description, a subclass of QAbstractListModel as this oneshould mostly implement the data() and rowCount() methods, which is true.This example creates a read-only model, so no need to implement setData(),but given the simplicity of data(), it doesn't seem too difficult to do. Ialso wanted it to react when more Songs were added on the fly[1]. The method data() is the most important one. It is not only used forretrieving the data itself, but also some metadata useful for showing the data,like icons and other stuff. For selecting what the caller wants, it refers aQt.ItemDataRole. The role for the data itself is Qt.DisplayRole. One of theparticularities of this method is that it could be called with any vegetable asinput; namely, it can refer to a row that does not exist anymore or for metadatathat you don't care about. In those cases you must return an empty QVariant,not None. So, a first implementation is: def data (self, modelIndex, role): if modelIndex.isValid () and modelIndex.row ()<self.count and role==Qt.DisplayRole: # songForIndex() returns the Song corresponding to the row song= self.songForIndex (modelIndex.row ()) # formatSong() returns a QString with the data to show data= QVariant (self.formatSong (song)) else: data= QVariant () return data This method, together with a rowCount() that simply returns self.count, isenough for showing data that is already there. Notice that the QModelIndex canbe not valid, and in this case we only care about its row because we're a list. But then I wanted my QListViewer to show songs progresively as they areloaded/scanned[2] and also as they are found as new. But then a problem arises:the view is like a table of only one column. The width of this colunm at thebegining is the same width as the QListView itself. But what happens when thestring shown is too big? What happens is that it gets chopped. We must informthe view that some of the rows are bigger. That's where the metadata comes intoplay. Another possible role is Qt.SizeHintRole. If we return a size instead of anempty QVariant, that size will be used to expand the column as needed, evengiving us a scrollbar if it's wider that the view. Now, we're supposed to show the tags for the Song (that's what formatSong()does if possible; if not, it simply returns the filepath), so this width shouldbe calculated based on the length of the string that represents the song[3]. Butif we try to read the tags for all the songs as we load the Collection, we endup with too much disk activity before you can show anything to the user, whichis unacceptable[4]. So instead we calculate based on the filepath, which is usedfor Songs with too few tags anyways. Here's the hacky code: ...# FIXME: kinda hackyself.fontMetrics= QFontMetrics (KGlobalSettings.generalFont ())...def data (self, modelIndex, role): if modelIndex.isValid () and modelIndex.row ()<self.count: song= self.songForIndex (modelIndex.row ()) if role==Qt.DisplayRole: data= QVariant (self.formatSong (song)) elif role==Qt.SizeHintRole: # calculate something based on the filepath data= QVariant (self.fontMetrics.size (Qt.TextSingleLine, song.filepath)) else: data= QVariant () else: data= QVariant () return data The last point then is reacting to Songs are added on the fly. This is alsoeasy: you tell the views you're about to insert rows, you insert them, tell theviews you finished, and then emit dataChanged(): def addSong (self): # lastIndex keeps track of the last index used. row= self.lastIndex self.lastIndex+= 1 self.beginInsertRows (QModelIndex (), row, row) # actually the Song has already been added to the Collection[5] # so I don't do anything here, # but if you keep your rows in this model you should do something here self.endInsertRows () self.count+= 1 modelIndex= self.index (row, 0) self.dataChanged.emit (modelIndex, modelIndex) Later I'll post any peculiarities I find porting all this stuff to a read/writeQTableModel. [1] That's material for another post :) [2] This feature can be said to be a little too much. Actually, I get a flicker when scanning. [3] Of course the next step is to use a table view and make a model for it. [4] Right now the load time for a Collection of ~6.5k songs is quite long asit is. [5] This is a design decision which is not relevant to this example.
Download FREE PDF The DZone Refcard #43 is an introduction to system high availability and scalability terminology and techniques (http://refcardz.dzone.com/refcardz/scalability). The next logical step is the scalable handling of massive data volumes resulting from having these powerful processing capabilities. This Refcard demystifies NoSQL and data scalability techniques by introducing some core concepts. It also offers an overview of current technologies available in this domain and suggests how to apply them. Data scalability is the ability of a system to store, manipulate,analyze, and otherwise process ever increasing amounts of data without reducing overall system availability, performance,or throughput. Data scalability is achieved by a combination of more powerful processing capabilities and larger but efficient storage mechanisms. Relational and hierarchical databases scale up by adding more processors, more storage, caching systems, and such. Soon they hit either a cost or a practical scalability limit because they are difficult or impossible to scale out. These database management systems are designed as single units that must maintain data integrity, and enforce schema rules to guarantee it. This rigidity is what promotes upward, but not outward, scalability. Data integrity and schemas are suited for handling transactional, normalized, uniform data. They handle unstructured or rapidly evolving data structures with difficulty or exponentially larger costs. There are two general kinds of architectures used for building scalable data systems: data grids and NoSQL systems. Data grids process workloads defined as independent jobs that don't require data sharing among processes. Storage or network may be shared across all nodes of the grid, but intermediate results have no bearing on other jobs progress or on other nodes in the grid, such as a MapReduce cluster. Figure 1 - Data Grid Data grids expose their functionality through a single API (either a Web service or native to the application programming language) that abstracts its topology and implementation from its data processing consumer. NoSQL describes a horizontally scalable, non-relational database with built-in replication support. Applications interact with it through a simple API, and the data is stored in a "schema-free", flat addressing repository, usually as large files or data blocks. The repository is often a custom file system designed to support NoSQL operations. NoSQL setups are best suited for non-OLTP applications that process massive amounts of structured and unstructured data at a lower cost and with higher efficiency than RDBMs and stored procedures. NoSQL storage is highly replicated (a commit doesn't occur until the data is successfully written to at least two separate storage devices) and the file systems are optimized for writeonly commits. The storage devices are formatted to handle large blocks (32 MB or mor e). Caching and buffering are designed for high I/O throughput. The NoSQL database is implemented as a data grid for processing (mapReduce, queries, CRUD, etc.) This list is the implementation counterpart to the data grid areas of application; the data store feeds the computational network and, together, they form the NoSQL database. NoSQL RDBMS OO Kind Table Class Entity Record Object Attribute Column Property Preparation: It's impossible for a distributed computer system to simultaneously provide all three of these guarantees: Since only two of these characteristics are guaranteed for any given scalable system, use your functional specification and business SLA (service level agreement) to determine what your minimum and target goals for CAP are, pick the two that meet your requirements, and proceed to implement the appropriate technology. Use Figure 3 to identify the best match between your application's CAP requirements and the suggested SQL and NoSQL systems listed. mongoDB is a document-based NoSQL database that bridges the gap between scalable key-value stores like Datastore and Memcache DB, and RDBMS's querying and robustness capabilities. Some of its main features include: mongoDB implements its own file system to optimize I/O throughput by dividing larger objects into smaller chunks. Documents are stored in two separate collections: files containing the object meta-data, and chunks that form a larger document when combined with database accounting information. The mongoDB API provides functions for manipulating files, chunks, and indices directly. The administration tools enable GridFS maintenance. Figure 5 - mongoDB Cluster A mongoDB cluster consists of a master and a slave. The slave may become the master in a fail-over scenario, if necessary. Having a master/slave configuration (also known as Active/Passive or A/P cluster) helps ensure data integrity since only the master is allowed to commit changes to the store at any given time. A commit is successful only if the data is written to GridFS and replicated in the slave mongoDB has a built-in cache that runs directly in the cluster without external requirements. Any query is transparently cached in RAM to expedite data transfer rates and to reduce disk I/O. mongoDB handles documents in BSON format, a binary-encoded JSON representation. BSON is designed to be traversable, lightweight and efficient. Applications can map BSON/JSON documents using native representations like dictionaries, lists, and arrays, leaving the BSON translation to the native mongoDB driver specific to each programming language. { 'name' : 'Tom', 'age' : 42 } Language Representation Python { 'name' : 'Tom', 'age' : 42 } Ruby { "name" => "Tom", "age" => 42 } Java BasicDBObject d; d = new BasicObject(); d.put("name", "Tom"); d.put("age", 42); PHP array( "name" => "Tom", "age" => 42); Dynamic languages offer a closer object mapping to BSON/JSON than compiled languages. The complete BSON specification is available from: http://bsonspec.org/ Programming in mongoDB requires an active server running the mongod and the mongos database daemons (see Figure 5), and a client application that uses one of the languagespecific drivers. Log on to the master server and execute: [servername:user] ./mongod The server will display its status messages to the console unless stdout is redirected elsewhere. This example allocates a database if one doesn't already exist,instantiates a collection on the server, and runs a couple of queries. The mongoDB Developer Manual is available from: #!/usr/bin/env jython import pymongo from pymongo import Connection connection = Connection('servername', 27017) db = connection['people_database'] peopleList = db['people_list'] person = { 'name' : 'Tom', 'age' : 42 } peopleList.insert(person) person = { 'name' : 'Nancy', 'age' : 69 } peopleList.insert(person) # find first entry: person = peopleList.find_one() # find a specific person: person = peopleList.find_one({ 'name' : 'Joe'}) if person is None: print "Joe isn't here!" else: print person['age'] # bulk inserts persons = [{ 'name' : 'Joe' }, {'name' : 'Sue'}] peopleList.insert(persons) # queries with multiple results for person in peopleList.find(): print person['name'] for person in peopleList.find({'age' : {'$ge' : 21}}).sort('name'): print person['name'] # count: nDrinkingAge = peopleList.find({'age' : {'$ge' : 21}}).count() # indexing from pymongo import ASCENDING, DESCENDING peopleList.create_index([('age', DESCENDING), ('name', ASCENDING)]) The PyMongo documentation is available at: http://api.mongodb.org/python - guides for other languages are also available from this web site. The code in the previous example performs these operations: Although mongoDB treats all these data as BSON internally, most of the APIs allow the use of dictionary-style objects to streamline the development process. A successful insertion into the database results in a valid Object ID. This is the unique identifier in the database for a given document. When querying the database, a return value will include this attribute: { "name" : "Tom", "age" : 42, "_id" : ObjectId('999999') } Users may override this Object ID with any ar gument as long as it's unique, or allow mongoDB to assign one automatically. If any of these is part of the functional requirements, a SQL database would be better suited for the application. The GigaSpaces eXtreme Application Platform is a data grid designed to replace traditional application servers. It operates based on an event-processing model where the application dispatches objects to the processing nodes associated with a given data partition. The system may be configured so that data state on the grid may trigger events, or an application may dispatch specific commands imperatively. GigaSpaces XAP also manages all threading and execution aspects of the operation, including thread and connection pools. GigaSpaces XAP implements Spring transactions and auto-recovery. The system detects any failed operations in a computational node and automatically rolls back the transaction; it then places it back in the space where another node picks it up to complete processing. GigaSpaces provides data persistence, distributed processing, and caching by interfacing with SQL and NoSQL data stores. The GigaSpaces API abstracts all back-end operations (job dispatching, data persistence, and caching) and makes it transparent to the application. GigaSpaces XAP may implement distributed computing operations like MapReduce and run them in its nodes, or it may dispatch them for processing to the underlying subsystem if the functionality is available (e.g. mongoDB, Hadoop). The application may implement transactions using the Spring API, the GigaSpaces XPA transactional facilities, or by implementing a workflow where specific NoSQL stores handle entities or groups of entities. This must be implemented explicitly in systems like mongoDB, but may exist in other NoSQL systems like Google App Engine's Datastore. The GigaSpaces XAP API is very rich and covers many aspects beyond NoSQL and data scalability areas like configuration management, deployment, web services, third-party product integration, etc. The GigaSpaces XAP documentation is at: NoSQL operations may be implemented over these APIs: The GigaSpaces XAP API is in a minority of stateful NoSQL systems. Most NoSQL systems strive to achieve statelessness to increase scalability and data consistency. The Datastore is the main scalability feature of Google App Engine applications. It's not a relational database or a facade for one. Datastore is a public API for accessing Google's Bigtable high-performance distributed database system. Think of it as a sparse array distributed across multiple servers that also allows an infinite number of columns and rows. Applications may even define new columns "on the fly" The Datastore scales by adding new servers to a cluster; Google provides this functionality without user participation. Figure 6 - Datastore Architecture Datastore operations are defined around entities. Entities can have one-to-many or many-to-many relationships. The Datastore assigns unique IDs unless the application specifies a unique key. Datastore also disallows some property names that it uses for housekeeping. The complete Datastore documentation is available from: http://code.google.com/appengine/docs/python/datastore/ Datastore supports transactions. These transactions are only effective on entities that belong to the same entity group. Entities in a given group are guaranteed to be stored on the same server. The programming model is based on inheritance of basic entities, db.Model and db.Expando. Persistent data is mapped onto an entity specialization of either of these classes. The API provides persistence and querying instance methods for every entity managed by the Datastore. The Datastore API is simpler than other NoSQL APIs and is highly optimized to work in the App Engine environment. In this example we insert data into the data store, then run a query: from google.appengine.ext import db class Person(db.Model): name = db.StringProperty(required=True) age = db.IntegerProperty(require=True) person = Person(name = 'Tom', age = 42) person.put() person = Person(name = 'Sue', age = 69) person.put() # find a specific person query = Person.all() # every entity! query.filter('age > ', 20) query.order('name') peopleList = query.fetch() # up to 1000 for person in peopleList: print person.name This example performs these operations: Datastore also allows queries in a custom, SQL-like language. The query in the previous example could be expressed as: SELECT * FROM Person WHERE age > 20ORDER BY name ASC Gql is useful for writing more expressive queries than those written in the Python or Java APIs. Do you want to know about specific projects and use cases where NoSQL and data scalability are the hot topics? Join the scalability newsletter:
Outside of the syntax clarification on re.match, I think I am understanding that you are struggling with taking two or more unknown (user input) regex expressions and classifying which is a more 'specific' match against a string. Recall for a moment that a Python regex really is a type of computer program. Most modern forms, including Python's regex, are based on Perl. Perl's regex's have recursion, backtracking, and other forms that defy trivial inspection. Indeed a rogue regex can be used as a form of denial of service attack. To see of this on your own computer, try: >>> re.match(r'^(a+)+$','a'*24+'!') That takes about 1 second on my computer. Now increase the 24 in 'a'*24 to a bit larger number, say 28. That take a lot longer. Try 48... You will probably need to CTRL+C now. The time increase as the number of a's increase is, in fact, exponential. You can read more about this issue in Russ Cox's wonderful paper on 'Regular Expression Matching Can Be Simple And Fast'. Russ Cox is the Goggle engineer that built Google Code Search in 2006. As Cox observes, consider matching the regex 'a?'*33 + 'a'*33 against the string of 'a'*99 with awk and Perl (or Python or PCRE or Java or PHP or ...) Awk matches in 200 microseconds but Perl would require 1015 years because of exponential back tracking. So the conclusion is: it depends! What do you mean by a more specific match? Look at some of Cox's regex simplification techniques in RE2. If your project is big enough to write your own libraries (or use RE2) and you are willing to restrict the regex grammar used (i.e., no backtracking or recursive forms), I think the answer is that you would classify 'a better match' in a variety of ways. If you are looking for a simple way to state that (regex_3 < regex_1 < regex_2) when matched against some string using Python or Perl's regex language, I think that the answer is it is very very hard (i.e., this problem is NP Complete) Edit Everything I said above is true! However, here is a stab at sorting matching regular expressions based on one form of 'specific': How many edits to get from the regex to the string. The greater number of edits (or the higher the Levenshtein distance) the less 'specific' the regex is. You be the judge if this works (I don't know what 'specific' means to you for your application): import re def ld(a,b): "Calculates the Levenshtein distance between a and b." n, m = len(a), len(b) if n > m: # Make sure n <= m, to use O(min(n,m)) space a,b = b,a n,m = m,n current = range(n+1) for i in range(1,m+1): previous, current = current, [i]+[0]*n for j in range(1,n+1): add, delete = previous[j]+1, current[j-1]+1 change = previous[j-1] if a[j-1] != b[i-1]: change = change + 1 current[j] = min(add, delete, change) return current[n] s='Mary had a little lamb' d={} regs=[r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb',r'\b\w+mb', r'Mary.*little lamb',r'.*[lL]ittle [Ll]amb',r'\blittle\b',s,r'little'] for reg in regs: m=re.search(reg,s) if m: print "'%s' matches '%s' with sub group '%s'" % (reg, s, m.group(0)) ld1=ld(reg,m.group(0)) ld2=ld(m.group(0),s) score=max(ld1,ld2) print " %i edits regex->match(0), %i edits match(0)->s" % (ld1,ld2) print " score: ", score d[reg]=score print else: print "'%s' does not match '%s'" % (reg, s) print " ===== %s ===== === %s ===" % ('RegEx'.center(10),'Score'.center(10)) for key, value in sorted(d.iteritems(), key=lambda (k,v): (v,k)): print " %22s %5s" % (key, value) The program is taking a list of regex's and matching against the string Mary had a little lamb. Here is the sorted ranking from "most specific" to "least specific": ===== RegEx ===== === Score === Mary had a little lamb 0 Mary.*little lamb 7 .*little lamb 11 little lamb 11 .*[lL]ittle [Ll]amb 15 \blittle\b 16 little 16 Mary 18 \b\w+mb 18 lamb 18 .* 22 This based on the (perhaps simplistic) assumption that: a) the number of edits (the Levenshtein distance) to get from the regex itself to the matching substring is the result of wildcard expansions or replacements; b) the edits to get from the matching substring to the initial string. (just take one) As two simple examples: .* (or .*.* or .*?.* etc) against any sting is a large number of edits to get to the string, in fact equal to the string length. This is the max possible edits, the highest score, and the least 'specific' regex. The regex of the string itself against the string is as specific as possible. No edits to change one to the other resulting in a 0 or lowest score. As stated, this is simplistic. Anchors should increase specificity but they do not in this case. Very short stings don't work because the wild-card may be longer than the string. Edit 2 I got anchor parsing to work pretty darn well using the undocumented sre_parse module in Python. Type >>> help(sre_parse) if you want to read more... This is the goto worker module underlying the re module. It has been in every Python distribution since 2001 including all the P3k versions. It may go away, but I don't think it is likely... Here is the revised listing: import re import sre_parse def ld(a,b): "Calculates the Levenshtein distance between a and b." n, m = len(a), len(b) if n > m: # Make sure n <= m, to use O(min(n,m)) space a,b = b,a n,m = m,n current = range(n+1) for i in range(1,m+1): previous, current = current, [i]+[0]*n for j in range(1,n+1): add, delete = previous[j]+1, current[j-1]+1 change = previous[j-1] if a[j-1] != b[i-1]: change = change + 1 current[j] = min(add, delete, change) return current[n] s='Mary had a little lamb' d={} regs=[r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb',r'\b\w+mb', r'Mary.*little lamb',r'.*[lL]ittle [Ll]amb',r'\blittle\b',s,r'little', r'^.*lamb',r'.*.*.*b',r'.*?.*',r'.*\b[lL]ittle\b \b[Ll]amb', r'.*\blittle\b \blamb$','^'+s+'$'] for reg in regs: m=re.search(reg,s) if m: ld1=ld(reg,m.group(0)) ld2=ld(m.group(0),s) score=max(ld1,ld2) for t, v in sre_parse.parse(reg): if t=='at': # anchor... if v=='at_beginning' or 'at_end': score-=1 # ^ or $, adj 1 edit if v=='at_boundary': # all other anchors are 2 char score-=2 d[reg]=score else: print "'%s' does not match '%s'" % (reg, s) print print " ===== %s ===== === %s ===" % ('RegEx'.center(15),'Score'.center(10)) for key, value in sorted(d.iteritems(), key=lambda (k,v): (v,k)): print " %27s %5s" % (key, value) And soted RegEx's: ===== RegEx ===== === Score === Mary had a little lamb 0 ^Mary had a little lamb$ 0 .*\blittle\b \blamb$ 6 Mary.*little lamb 7 .*\b[lL]ittle\b \b[Ll]amb 10 \blittle\b 10 .*little lamb 11 little lamb 11 .*[lL]ittle [Ll]amb 15 \b\w+mb 15 little 16 ^.*lamb 17 Mary 18 lamb 18 .*.*.*b 21 .* 22 .*?.* 22
Every now and then Liraz and I find ourselves chatting about how much we love Python, but more so the lessons we have learned from coding, and how to apply them to create beautiful Python code. I've tried to refine some of the "lessons" into practical guidelines that I apply religiously to all new code I write, and the refactoring of old code I written. When reading other peoples code it sometimes ties my mind into knots, and on occasions I want to pull my hair out in frustration and disgust. That's not to say I'm perfect, but hopefully these guidelines will benefit others (and indirectly help reduce my hair loss). I couldn't possibly include everything I wanted to in one post, so this will be the first, and more will follow... #1 - OO structure == Well defined mental concepts Object Orientated structure should always map to well defined mental concepts in the problem domain. If you don't have a well defined mental model of the problem domain, start with that. Class Responsibility Collaboration (CRC) cards are really useful in this. Basically you need a sketch of the architecture. What parts does your system have, what are their names, what does each part do? What parts is that part made out of? How do the parts interact with each other? You can save quite a bit of effort if you come up with a good architecture up-front, but sometimes it may be easier to start without and figure it out a bit later after you have a bit more knowledge about the problem you are solving. The rule is that the sooner you refine your architecture, the better. It is easy to dig yourself further and further into a complexity hole that makes restructuring very difficult later on. So do that as early as possible. Refining the architecture is part of the "refactor mercilessly" rule. #2 - Leverage built-in Python types It is often a good idea to build on top of built-in Python types or at least emulate them. The big advantage in building on top of Python's conventions is that you get to re-use Python's abstractions instead of reinventing your own. Python is famous for its minimal elegance, and the language designers take great care making small beautiful data structures. Making your code structures more Pythonic is a pretty good way to leverage the built-in elegance of the language while saving yourself quite a bit of work (inventing good abstractions is hard!). If you've never inherited from a built-in data type, experiment with a small test case in a throw away script. This way you don't mangle your current project and you can be as experimental as you like. Tip: Construction can be a bit trickier than a normal object if your initialization interface is different from the built-in type. In that case you'll need to override the __new__ magic method as well to call the __new__ constructor of the base class with different arguments than your __new__ is called. For example: class IntField(int): def __new__(cls, val, name): return int.__new__(cls, val) def __init__(self, val, name): self.name = name self.val = val int.__init__(self, val) As for emulating built in types, Python provides magical method names for emulating any built-in behavior. See special method names for reference. #3 - Use the class namespace, Luke! Use the class namespace when defining class-level attributes and the instance namespace (which inherits from the class namespace on initialization) when defining instance level attributes. For example, constants are always class level attributes because they are shared by all instances. On a practical level there are two reasons to do this: Enable inheritance - you can't override instance level attributes, only class level attributes. Readability - code is communication. Setting an attribute at the class level or instance level is making a statement about the nature of that attribute, which makes the code easier to read and understand. #4 - staticmethod vs. classmethod vs. regular method Static methods are simpler and more readable than class methods because they don't have access to the class attribute, so it's easier to determine its input (i.e., arguments), and output (i.e., the return value). Class methods are simpler than regular methods because the convention is that you don't usually manipulate attributes in the class namespace the way you would manipulate attributes in the instance namespace. So when you call a class method you know its not going to be sneaking around behind you back reading or writing to any instance level attributes set by other methods. The input for a class method is therefore the arguments it receives + class level constants. Its output is the return value. A regular method is the most complex and easy to abuse type of method. Its harder for the programmer to see what the inputs for the method are, and what its output is. Lets consider the following case: imagine a class in which all private methods are called without arguments and do not return output values. Instead, the private methods freely read and write to any attributes in self. The problem with such a pattern, is that all methods have now become entangled in a spaghetti like dependency structure . The instance namespace in effect behaves like a global namespace and it becomes very difficult to understand methods in terms of input and output. Furthermore, the boundaries between methods can easily become fuzzy and unclear. You must never ever do this (I have in the past, and I'm ashamed). If your code exhibits this pattern, go fix it, now! Private methods have input arguments and return values for a reason. The instance namespace must only be used for instance level attributes which need to persist after a public method has been called. Until it becomes second nature, these rules should help: If a method doesn't need access to instance level attributes then it should be a class method, not a regular method. If a method doesn't need access to class level attributes then it should be a static method, not a class method. The Python Paradox Paul Graham wrote an interesting essay entitled The Python Paradox . It's a quick read - take a look.
There's a 1 Gigabyte string of arbitrary data which you can assume to be equivalent to something like: 1_gb_string=os.urandom(1*gigabyte) We will be searching this string, 1_gb_string, for an infinite number of fixed width, 1 kilobyte patterns, 1_kb_pattern. Every time we search the pattern will be different. So caching opportunities are not apparent. The same 1 gigabyte string will be searched over and over. Here is a simple generator to describe what's happening: def findit(1_gb_string): 1_kb_pattern=get_next_pattern() yield 1_gb_string.find(1_kb_pattern) Note that only the first occurrence of the pattern needs to be found. After that, no other major processing should be done. What can I use that's faster than python's bultin find for matching 1KB patterns against 1GB or greater data strings? (I am already aware of how to split up the string and searching it in parallel, so you can disregard that basic optimization.) Update: Please bound memory requirements to 16GB.
#8676 Le 23/02/2013, à 20:54 #8677 Le 23/02/2013, à 20:57 golgoth42 Re : Topic des Couche-Tard (cinquante-sept) salut tout le monde juste pour infos: il y a une fuite nucléaire aux états unis .... une fuite dans un centre de stockage de déchets pour être un peu plus précis et une source genre ça ça aide, même si un canard, spas vraiment une source Hors ligne #8678 Le 23/02/2013, à 21:10 Pylades Re : Topic des Couche-Tard (cinquante-sept) EDIT @pylade : en fait, je vois déjà quelques menus avantages à Archbang. Openbox out of the box avec un thème sympa, c’est déjà pas mal. Hum… /me a du mal a trouver en quoi un thème Openbox a quelque chose à voir avec une distro. En plus, deux trois points de détails souvent délicats pour moi sont gérés automatiquement (le wifi, le son, et surtout, surtout, le montage automatique des clefs). Hum… tu utilises Linux, donc pour le son c’est la merde, faut l’accepter. Je ne vois pas ce que tu appelles la gestion automatique du WiFi. Quant au montage des clefs, c’est le rôle de udisks. Le tout pour moins de 2Go pris à l’installation. Donc c’est pas trop mal. Ce n’est pas énorme pour une Arch ? Donc en gros, si c’est juste une Arch mais où le groupe base contient plus de trucs (qui ne sont pas la base, du coup), je ne vois pas l’intérêt. “Any if-statement is a goto. As are all structured loops. “And sometimes structure is good. When it’s good, you should use it. “And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer.” Linus Torvalds – 12 janvier 2003 Hors ligne #8679 Le 23/02/2013, à 21:30 na kraïou Re : Topic des Couche-Tard (cinquante-sept) Putain, m’suis coupé avec mon stylo plume, pendant le devoir. Et j’n’ai toujours pas d’avocatier. Et j’ai pris mon super mouton, mais j’ai oublié mon doudou chez moi. Triste ! Intégriste ! Comploteur ! Connard ! Fourbe ! Linuxeux ! Machiavélique ! Moche ! Branleur ! Grognon ! Prétentieux ! Frimeur ! /b/tard ! Futile ! Étudiant ! Médiéviste ! Perfide ! Debianeux ! Futur maître du monde ! Petit (quasi nanos gigantium humeris insidentes) ! Égoïste ! Nawakiste ! Mauvaise langue ! 34709 ! На краю ! Arrogant ! Suffisant ! Ingrat ! Hors ligne #8680 Le 23/02/2013, à 21:31 Grünt Re : Topic des Couche-Tard (cinquante-sept) Red flashing lights. I bet they mean something. Hors ligne #8681 Le 23/02/2013, à 21:46 Shanx Re : Topic des Couche-Tard (cinquante-sept) Shanx a écrit : EDIT @pylade : en fait, je vois déjà quelques menus avantages à Archbang. Openbox out of the boxavec un thème sympa, c’est déjà pas mal. Hum… /me a du mal a trouver en quoi un thème Openbox a quelque chose à voir avec une distro. Moi je vois le rapport. Shanx a écrit : En plus, deux trois points de détails souvent délicats pour moi sont gérés automatiquement (le wifi, le son, et surtout, surtout, le montage automatique des clefs). Hum… tu utilises Linux, donc pour le son c’est la merde, faut l’accepter. Je ne vois pas ce que tu appelles la gestion automatique du WiFi. Quant au montage des clefs, c’est le rôle de udisks. wicd qui se lance automatiquement en daemon, et le wifi qui marche direct (je ne sais pas si tu étais sur irc le jour où je me suis battu avec le wifi sur ma debian, mais j’en ai pas un super souvenir). Shanx a écrit : Le tout pour moins de 2Go pris à l’installation. Donc c’est pas trop mal. Ce n’est pas énorme pour une Arch ? Je crois qu’une install’ de base fais 1,5 Go (si mes souvenirs sont bons). Donc je trouve que ça reste raisonnable. Donc en gros, si c’est juste une Arch mais où le groupe basecontient plus de trucs (qui ne sont pas la base, du coup), je ne vois pas l’intérêt. En même temps, vu toute ta mauvaise foi, c’est sûr que tu ne peux pas y trouver de l’interêt. Hors ligne #8682 Le 23/02/2013, à 21:47 SODⒶ Re : Topic des Couche-Tard (cinquante-sept) Crayon s'est coupé avec son stylo ! Le sacré n'est fait que pour qu'on lui pisse dessus, qu'on lui crache dessus, qu'on le brûle et qu'on le rende kitch et mercantile ! Les idoles ne sont là que pour qu'on les renverse ! yikes Le passé n'est fait que pour amuser quelques historiens et pour être manipulé, tordu, vendu par les autres ! Hors ligne #8683 Le 23/02/2013, à 21:51 :!pakman Re : Topic des Couche-Tard (cinquante-sept) amj a écrit : salut tout le monde juste pour infos: il y a une fuite nucléaire aux états unis .... une fuite dans un centre de stockage de déchets pour être un peu plus précis et une source genre ça ça aide, même si un canard, spas vraiment une source Merci pour l'info, je passe voir ... Hors ligne #8684 Le 23/02/2013, à 22:06 na kraïou Re : Topic des Couche-Tard (cinquante-sept) ×10⁴² ! Triste ! Intégriste ! Comploteur ! Connard ! Fourbe ! Linuxeux ! Machiavélique ! Moche ! Branleur ! Grognon ! Prétentieux ! Frimeur ! /b/tard ! Futile ! Étudiant ! Médiéviste ! Perfide ! Debianeux ! Futur maître du monde ! Petit (quasi nanos gigantium humeris insidentes) ! Égoïste ! Nawakiste ! Mauvaise langue ! 34709 ! На краю ! Arrogant ! Suffisant ! Ingrat ! Hors ligne #8685 Le 23/02/2013, à 22:11 na kraïou Re : Topic des Couche-Tard (cinquante-sept) Hey kraiou regarde : http://www.archivesetculture.fr/livre-a … n-344.html J'trouve ça chouette. Tu en as entendu parler ? C’est post-médiéval, c’est d’la merde, c’est du journalisme. Sinon, je ne connais ni l’auteur, ni le livre. Mais c’est assez à la mode, l’histoire de la vie quotidienne, ainsi que les femmes. Triste ! Intégriste ! Comploteur ! Connard ! Fourbe ! Linuxeux ! Machiavélique ! Moche ! Branleur ! Grognon ! Prétentieux ! Frimeur ! /b/tard ! Futile ! Étudiant ! Médiéviste ! Perfide ! Debianeux ! Futur maître du monde ! Petit (quasi nanos gigantium humeris insidentes) ! Égoïste ! Nawakiste ! Mauvaise langue ! 34709 ! На краю ! Arrogant ! Suffisant ! Ingrat ! Hors ligne #8686 Le 23/02/2013, à 22:31 Pylades Re : Topic des Couche-Tard (cinquante-sept) En même temps, vu toute ta mauvaise foi, c’est sûr que tu ne peux pas y trouver de l’interêt. Mais si, je suis très ouvert : qu’est-ce que cela peut m’apporter, concrètement ? Un thème Openbox ? Je suis assez grand pour aller les chercher moi-même. Des tas de logiciels par défaut ? Ce que j’aime avec Arch, c’est que l’on peut dans une certaine mesure construire son système soi-même. Un config de wicd toute faite ? Je suis assez grand pour configurer moi-mêmes logiciels ; d’ailleurs je n’aime pas wicd. Ça valait bien le coup de forker une distro ! Dernière modification par Πυλάδης (Le 23/02/2013, à 22:31) “Any if-statement is a goto. As are all structured loops. “And sometimes structure is good. When it’s good, you should use it. “And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer.” Linus Torvalds – 12 janvier 2003 Hors ligne #8687 Le 23/02/2013, à 22:41 Shanx Re : Topic des Couche-Tard (cinquante-sept) Ça valait bien le coup de forker une distro ! C’pas vraiment un fork, plus une pré-configuration. Et je maintiens que c’est assez pratique. Hors ligne #8688 Le 23/02/2013, à 22:43 SODⒶ Re : Topic des Couche-Tard (cinquante-sept) Comment l'état a menti aux polynésiens et aux militaires sur les dangers du nucléaire. Edit : on estime le nombre des victimes à 150 000. Juste pour des essais "sans danger". Dernière modification par S.O.D. (Le 23/02/2013, à 23:05) Le sacré n'est fait que pour qu'on lui pisse dessus, qu'on lui crache dessus, qu'on le brûle et qu'on le rende kitch et mercantile ! Les idoles ne sont là que pour qu'on les renverse ! yikes Le passé n'est fait que pour amuser quelques historiens et pour être manipulé, tordu, vendu par les autres ! Hors ligne #8689 Le 23/02/2013, à 23:19 na kraïou Re : Topic des Couche-Tard (cinquante-sept) Bon, m’casse bouder avec mon super mouton. Triste ! Intégriste ! Comploteur ! Connard ! Fourbe ! Linuxeux ! Machiavélique ! Moche ! Branleur ! Grognon ! Prétentieux ! Frimeur ! /b/tard ! Futile ! Étudiant ! Médiéviste ! Perfide ! Debianeux ! Futur maître du monde ! Petit (quasi nanos gigantium humeris insidentes) ! Égoïste ! Nawakiste ! Mauvaise langue ! 34709 ! На краю ! Arrogant ! Suffisant ! Ingrat ! Hors ligne #8690 Le 23/02/2013, à 23:56 Вiɑise Re : Topic des Couche-Tard (cinquante-sept) Putain, m’suis coupé avec mon stylo plume, pendant le devoir. http://pix.tdct.org/upload/original/1319145345.gif Tu vends du rêve ! Вiɑise a écrit : Hey kraiou regarde : http://www.archivesetculture.fr/livre-a … n-344.html J'trouve ça chouette. Tu en as entendu parler ? C’est post-médiéval, c’est d’la merde, c’est du journalisme. Sinon, je ne connais ni l’auteur, ni le livre. Mais c’est assez à la mode, l’histoire de la vie quotidienne, ainsi que les femmes. Oh bah tant mieux. J'aime bien. Ça change de l'histoire des meurtriers de masse. Pas que j'aime pas les meurtriers dans l'Histoire, mais je préfère les petits artisans comme Jacques l'éventreur aux bouchers industriels à la Napoléon. Hors ligne #8691 Le 23/02/2013, à 23:58 Pylades Re : Topic des Couche-Tard (cinquante-sept) Tu vends du rêve ! Trop cher ! “Any if-statement is a goto. As are all structured loops. “And sometimes structure is good. When it’s good, you should use it. “And sometimes structure is _bad_, and gets into the way, and using a goto is just much clearer.” Linus Torvalds – 12 janvier 2003 Hors ligne #8692 Le 24/02/2013, à 00:13 Вiɑise Re : Topic des Couche-Tard (cinquante-sept) Argent trop cher la vie n'a pas de prix (pas de prix !) ! ♫ Hors ligne #8693 Le 24/02/2013, à 00:19 #8694 Le 24/02/2013, à 00:30 #8695 Le 24/02/2013, à 00:31 Hors ligne #8696 Le 24/02/2013, à 00:37 #8697 Le 24/02/2013, à 00:43 Grünt Re : Topic des Couche-Tard (cinquante-sept) again. & Red flashing lights. I bet they mean something. Hors ligne #8698 Le 24/02/2013, à 00:49 Shanx Re : Topic des Couche-Tard (cinquante-sept) Des nouvelles de Ras' ? Hors ligne #8699 Le 24/02/2013, à 00:52 SODⒶ Re : Topic des Couche-Tard (cinquante-sept) Plop ! Si vous vous ennuyez, il y a spécimen ici Ça sent un peu le fake Comme à chaque vacances scolaires, on voit débarquer les kevins. Celui-là est sous 10.04, je parie que c'est Backtrack 5r3. Le sacré n'est fait que pour qu'on lui pisse dessus, qu'on lui crache dessus, qu'on le brûle et qu'on le rende kitch et mercantile ! Les idoles ne sont là que pour qu'on les renverse ! yikes Le passé n'est fait que pour amuser quelques historiens et pour être manipulé, tordu, vendu par les autres ! Hors ligne #8700 Le 24/02/2013, à 01:22 nesthib Re : Topic des Couche-Tard (cinquante-sept) Je lui ai répondu ^^ def rainbow(s): from random import randrange out=[] for c in s.decode('utf-8'): if c in [' ', u'\u202f']: out.append(c) else: out.append('[color=#%s]%s[/color]' % ("".join(['%02X' % randrange(256) for i in range(3)]), c)) return ''.join(out) Hors ligne
"Copy and paste coding" (getting bisect's sources into your code) is not recommended as it carries all sorts of costs down the road (lot of extra source code for you to test and maintain, difficulties dealing with upgrades in the upstream code you've copied, etc, etc); the best way to reuse standard library modules is simply to import them and use them. However, to do one pass transforming the dictionaries into meaningfully comparable entries is O(N), which (even though each step of the pass is simple) will eventually swamp the O(log N) time of the search proper. Since bisect can't support a key= key extractor like sort does, what the solution to this dilemma -- how can you reuse bisect by import and call, without a preliminary O(N) step...? As quoted here, the solution is in David Wheeler's famous saying, "All problems in computer science can be solved by another level of indirection". Consider e.g....: import bisect listofdicts = [ {'dt': '2009-%2.2d-%2.2dT12:00:00' % (m,d) } for m in range(4,9) for d in range(1,30) ] class Indexer(object): def __init__(self, lod, key): self.lod = lod self.key = key def __len__(self): return len(self.lod) def __getitem__(self, idx): return self.lod[idx][self.key] lookfor = listofdicts[len(listofdicts)//2]['dt'] def mid(res=listofdicts, target=lookfor): keys = [r['dt'] for r in res] return res[bisect.bisect_left(keys, target)] def midi(res=listofdicts, target=lookfor): wrap = Indexer(res, 'dt') return res[bisect.bisect_left(wrap, target)] if __name__ == '__main__': print '%d dicts on the list' % len(listofdicts) print 'Looking for', lookfor print mid(), midi() assert mid() == midi() The output (just running this indexer.py as a check, then with timeit, two ways): $ python indexer.py 145 dicts on the list Looking for 2009-06-15T12:00:00 {'dt': '2009-06-15T12:00:00'} {'dt': '2009-06-15T12:00:00'} $ python -mtimeit -s'import indexer' 'indexer.mid()' 10000 loops, best of 3: 27.2 usec per loop $ python -mtimeit -s'import indexer' 'indexer.midi()' 100000 loops, best of 3: 9.43 usec per loop As you see, even in a modest task with 145 entries in the list, the indirection approach can have a performance that's three times better than the "key-extraction pass" approach. Since we're comparing O(N) vs O(log N), the advantage of the indirection approach grows without bounds as N increases. (For very small N, the higher multiplicative constants due to the indirection make the key-extraction approach faster, but this is soon surpassed by the big-O difference). Admittedly, the Indexer class is extra code -- however, it's reusable over ALL tasks of binary searching a list of dicts sorted by one entry in each dict, so having it in your "container-utilities back of tricks" offers good return on that investment. So much for the main search loop. For the secondary task of converting two entries (the one just below and the one just above the target) and the target to a number of seconds, consider, again, a higher-reuse approach, namely: import time adt = '2009-09-10T12:00:00' def dttosecs(dt=adt): # string to seconds since the beginning date,tim = dt.split('T') y,m,d = date.split('-') h,mn,s = tim.split(':') y = int(y) m = int(m) d = int(d) h = int(h) mn = int(mn) s = min(59,int(float(s)+0.5)) # round to neatest second s = int(s) secs = time.mktime((y,m,d,h,mn,s,0,0,-1)) return secs def simpler(dt=adt): return time.mktime(time.strptime(dt, '%Y-%m-%dT%H:%M:%S')) if __name__ == '__main__': print adt, dttosecs(), simpler() assert dttosecs() == simpler() Here, there is no performance advantage to the reuse approach (indeed, and on the contrary, dttosecs is faster) -- but then, you only need to perform three conversions per search, no matter how many entries are on your list of dicts, so it's not clear whether that performance issue is germane. Meanwhile, with simpler you only have to write, test and maintain one simple line of code, while dttosecs is a dozen lines; given this ratio, in most situations (i.e., excluding absolute bottlenecks), I would prefer simpler. The important thing is to be aware of both approaches and of the tradeoffs between them so as to ensure the choice is made wisely.
What is the procedure for completely uninstalling a Django app, complete with database removal? Like so. for c in ContentType.objects.all(): if not c.model_class(): print "deleting %s"%c c.delete() django app is a "set" of *.py files and a directory with a django-app-name. So you can simply delete the whole folder with all *.py files To "remove" tables from DB you should use Furthermore, you have to delete lines witgh app-name from setting.py in a root directory
I'm attempting to remove all whitespace from a selected string search using regexp. The code works but it continues to return an error that I'm not sure how to resolve ...? elif searchType =='2': print " Directory to be searched: c:\Python27 " directory = os.path.join("c:\\","SQA_log") userstring = raw_input("Enter a string name to search: ") userStrHEX = userstring.encode('hex') userStrASCII = ' '.join(str(ord(char)) for char in userstring) regex = re.compile(r"(%s|%s|%s)" % ( re.escape( userstring ), re.escape( userStrHEX ), re.escape( userStrASCII ))) choice = raw_input("Type 1 to search with whitespace. Type 2 to search ignoring whitespace: ") if choice == '1': for root,dirname, files in os.walk(directory): for file in files: if file.endswith(".log") or file.endswith(".txt"): f=open(os.path.join(root, file)) for i,line in enumerate(f.readlines()): result = regex.search(line) if regex.search(line): print " " print "Line: " + str(i) print "File: " + os.path.join(root,file) print "String Type: " + result.group() print " " f.close() re.purge() if choice == '2': for root,dirname, files in os.walk(directory): for file in files: if file.endswith(".log") or file.endswith(".txt"): f=open(os.path.join(root, file)) for i,line in enumerate(f.readlines()): result = regex.search(re.sub(r'\s', '',line)) if regex.search(line): print " " print "Line: " + str(i) print "File: " + os.path.join(root,file) print "String Type: " + result.group() print " " f.close() re.purge() This is the error it returns: Line: 9160 File: c:\SQA_log\13.00.log String Type: Rozelle07 Line: 41 File: c:\SQA_log\NEWS.txt String Type: 526f7a656c6c653037 Line: 430 File: c:\SQA_log\README.txt Traceback (most recent call last): File "C:\SQA_log\cmd_simple.py", line 226, in <module> SQAST().cmdloop() File "C:\Python27\lib\cmd.py", line 142, in cmdloop stop = self.onecmd(line) File "C:\Python27\lib\cmd.py", line 219, in onecmd return func(arg) File "C:\SQA_log\cmd_simple.py", line 147, in do_search print "String Type: " + result.group() AttributeError: 'NoneType' object has no attribute 'group'
Hizoka Re : [g2s] GUI d'extraction de fichiers mkv En fait la fonction qui permet d'extraire dans le même dossier, il faut cliquer dessus à chaque fois. Tu pourrais pas mettre une option à cocher dans préférences qui permettrait de la garder toujours active ? Réinitialise les préférences. Par contre il n'y a plus l’icône dans le systray avec la version sans installation (celle que j'utilise pour faire le paquet pour ArchLinux) Vu, c'est une faute de ma part, en attandant que je sorte quelques modifs, modifie à la main la ligne 1500 de mkv-extractor-gui.sh : echo 'SYSTRAY@@systray1@@menu1@@mkv-extractor-gui.png@@MKVExtractorGui' Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv Correction de l'icone de la zone de notification Mise à jour de glade2script Creation des sources en cours. Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv encore une petite maj Utilisation de mktemp pour plus de securite Mise à jour des texte FR et US Ajout de la prise en charge d'unity pour la zone de notification Hors ligne diplo35 Re : [g2s] GUI d'extraction de fichiers mkv Bonjour, j'ai bien une mise à jour qui est proposée mais impossible de la faire pour des raisons de dépendance : mkv-extractor-gui: Dépend: gnome-icon-theme-full but it is not installable Je suis souos ubuntu 10.04 et je n'ai effectivement pas le paquet demandé dans mes dépots. Xubunutu 12.04 Asus N78X AMD Athlon(tm) XP 2800+ Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv ha ? que trouve tu a gnome-icon dans les depots ? en effet il ne semble pas y avoir ce paquet... je fais une version pour lucid sans cette dependance. Dernière modification par Hizoka (Le 30/01/2012, à 02:15) Hors ligne diplo35 Re : [g2s] GUI d'extraction de fichiers mkv Merci, j'ai pu installer la màj sans pbs. je trouve : gnome-icon-theme (je pense que c'est le bon) gnome-icon-theme-blankon gnome-icon-theme-dig-neu gnome-icon-theme-gartoon gnome-icon-theme-gperfection2 gnome-icon-theme-nuovo gnome-icon-theme-suede gnome-icon-theme-yasis Xubunutu 12.04 Asus N78X AMD Athlon(tm) XP 2800+ Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv dans les versions suivantes il y a gnome-icon-theme + gnome-icon-theme-full le full apportant de nouvelles icones, que j'utilise en partie. cool que ca passe hesite pas à me faire des retours si besoin. Dernière modification par Hizoka (Le 31/01/2012, à 20:41) Hors ligne golgot200 Re : [g2s] GUI d'extraction de fichiers mkv Bonjour, J'utilise le PPA, la version 4.7.1-0ppa3 ne passe pas en mise à jour sur la version 4.7.1-0ppa2 je suis sous Maverik 10.10 mkv-extractor-gui: Dépend: gnome-icon-theme-full but it is not installable Bizarre que la version 4.7.1-0ppa2 se soit installée Pas de paquet gnome-icon-theme-full. J'ai tout installé en gnome-icons sauf le gnome-icon-theme-extras qu'il lui refuse aussi. la version 4.7.1-0ppa2 que je viens de désinstaller met pleins de warning à la suppression. Merci par avance de votre aide. Dernière modification par golgot200 (Le 04/02/2012, à 10:34) "L’ultime question … L’intelligence a besoin de la Bêtise pour s’affirmer, La Beauté a besoin de la Laideur pour resplendir, Le Courage naît dans la Peur, Les Forts impressionnent au milieu des Faibles, Mais au final,…Qui a donc besoin d’autant de connards ?" Hors ligne AnsuzPeorth Re : [g2s] GUI d'extraction de fichiers mkv @Hizoka Vu les soucis d'icones que tu as, pourquoi ne pas embarquer dans ton appli les icones que tu as besoin, ca serait plus simple, et plus portable ... Interface graphique pour bash, python ou autre: glade2script Support Tchat: http://chat.jabberfr.org/muckl_int/inde … ade2script (Hors ligne) Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv ouais c'est ce que je vais faire... mais c'et dommage car c'est vrai que ça s'integre mieux quand il utilise les icons du systeme... golgot200 => comme tu dis, c'est etrange... je fais une version sans la dependance. Hors ligne golgot200 Re : [g2s] GUI d'extraction de fichiers mkv Au passage, merci pour ton boulot, j'aime ce genre d'application simple "L’ultime question … L’intelligence a besoin de la Bêtise pour s’affirmer, La Beauté a besoin de la Laideur pour resplendir, Le Courage naît dans la Peur, Les Forts impressionnent au milieu des Faibles, Mais au final,…Qui a donc besoin d’autant de connards ?" Hors ligne TassLehoff Re : [g2s] GUI d'extraction de fichiers mkv Salut Hizoka, A tu un repo debian pour tes applications ? Si tu n'en a pas je serai tres interessé par les rajouter au repo de la distrib "Vanillux". Dernière modification par TassLehoff (Le 12/02/2012, à 09:13) Hors ligne Hors ligne TassLehoff Re : [g2s] GUI d'extraction de fichiers mkv En faite c'est pas moi qui gère la distrib, je ne suis qu'un utilisateur qui aide comme il peut le créateur de cet distrib, donc je n'est aucune idée de comment faire. Peut tu l'ajouter toi même au repo ? Je suis sur le serv irc de la distrib si tu veut en discuter plus longuement irc://irc.vanillux.org/vanillux Hors ligne TassLehoff Re : [g2s] GUI d'extraction de fichiers mkv J'y suis 24/24 donc quand ta 5mn hésite pas a passé et puis le créateur de la distrib te sera certainement d'une meilleur aide que moi. Pour info Vanillux est une rolling release basé essentiellement sur debian testing (wheezy) avec quelque paquet de unstable (sid) et autre divers repo, le tout avec un bureau gnome 3.2 Merci pour tes tools et peut etre de ton aide Hors ligne Stroumph83 Re : [g2s] GUI d'extraction de fichiers mkv Bonjour, Je viens d'installer Precise, et j'ai le problème suivant : trom@trom-Q310:~$ mkv-extractor-gui Traceback (most recent call last): File "/usr/share/mkv-extractor-gui/mkv-extractor-gui.py", line 4333, in <module> import vte ImportError: No module named vte Merci par avance ... LinuxMint 15 après des années avec Ubuntu ... Hors ligne Stroumph83 Re : [g2s] GUI d'extraction de fichiers mkv Nouveau message : trom@trom-Q310:~$ mkv-extractor-gui Traceback (most recent call last): File "/usr/share/mkv-extractor-gui/mkv-extractor-gui.py", line 3725, in PLUGININIT exec('import %s.plugin as plug' % plugin) File "<string>", line 1, in <module> ImportError: No module named g2sPluginMplayer.plugin Traceback (most recent call last): File "/usr/share/mkv-extractor-gui/mkv-extractor-gui.py", line 3744, in PLUGINCMD getattr(self, name).CMD(cmd) AttributeError: 'MyThread' object has no attribute 'MyPlayer' Merci d'avance LinuxMint 15 après des années avec Ubuntu ... Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv Dépendances obligatoires : python-glade2 python-vte mkvtoolnix ffmpeg gettext sed bash mplayer|mplayer2 desktop-file-utils oxygen-icon-theme|gnome-icon-theme-full desktop-file-utils Hors ligne Stroumph83 Re : [g2s] GUI d'extraction de fichiers mkv Apparemment, je dois maintenant avoir les dépendances installées. J'ai installé mplayer2 (désinstallation de mplayer). J'ai toujours cette erreur : trom@trom-Q310:~$ mkv-extractor-gui Traceback (most recent call last): File "/usr/share/mkv-extractor-gui/mkv-extractor-gui.py", line 3725, in PLUGININIT exec('import %s.plugin as plug' % plugin) File "<string>", line 1, in <module> ImportError: No module named g2sPluginMplayer.plugin Traceback (most recent call last): File "/usr/share/mkv-extractor-gui/mkv-extractor-gui.py", line 3744, in PLUGINCMD getattr(self, name).CMD(cmd) AttributeError: 'MyThread' object has no attribute 'MyPlayer' trom@trom-Q310:~$ Merci encore LinuxMint 15 après des années avec Ubuntu ... Hors ligne Stroumph83 Re : [g2s] GUI d'extraction de fichiers mkv Bonjour, J'ai refais une installation propre de Precise, et j'ai donc réinstallé MKV-Extractor-Gui. Les paquets supplémentaires suivants seront installés : ffmpeg gettext imagemagick imagemagick-common libav-tools libavdevice53 libavfilter2 libcdt4 libgettextpo0 libgraph4 libgvc5 liblqr-1-0 libmagickcore4 libmagickcore4-extra libmagickwand4 libnetpbm10 libpathplan4 libunistring0 netpbm python-webkit Paquets suggérés : gettext-doc imagemagick-doc autotrace curl enscript gnuplot grads hp2xx html2ps libwmf-bin povray radiance texlive-base-bin transfig ufraw-batch Paquets recommandés : mkclean mkvalidator Les NOUVEAUX paquets suivants seront installés : ffmpeg gettext imagemagick imagemagick-common libav-tools libavdevice53 libavfilter2 libcdt4 libgettextpo0 libgraph4 libgvc5 liblqr-1-0 libmagickcore4 libmagickcore4-extra libmagickwand4 libnetpbm10 libpathplan4 libunistring0 mkv-extractor-gui netpbm python-webkit J'ai toujours le problème de python-vte qui je suis obligé d'installer à la main. Puis le message déjà mentionné dans le message précédent. Merci par avance LinuxMint 15 après des années avec Ubuntu ... Hors ligne Hizoka Re : [g2s] GUI d'extraction de fichiers mkv J'ai toujours le problème de python-vte qui je suis obligé d'installer à la main. merci, je l'ai ajouté pour la prochaine version. Pour mplayer, y a l'air d'y avoir quelques soucis... si tu ne compte pas visualiser les videos jointes, en attendant que le soucis soit réglé remplace le fichier /usr/share/mkv-extractor-gui/mkv-extractor-gui.sh par celui-ci : http://hizo.fr/linux/mkv_extractor_gui/ … tor_gui.sh Il faut être en root. dis moi si ca passe. Hors ligne
Yesterday I came across a blog post from 2010 that said less than 10% of programmers can write a binary search. At first I thought “ah, what nonsense” and then I realized I probably haven’t written one myself, at least not since BASIC and Pascal were what the cool kids were up to in the 80’s. So, of course, I had a crack at it. There was an odd stipulation that made the challenge interesting — you couldn’t test the algorithm until you were confident it was correct. In other words, it had to work the first time. I was wary of fencepost errors (perhaps being self-aware that spending more time in Python and Lisp(s) than C may have made me lazy with array indexes lately) so, on a whim, I decided to use a random window index to guarantee that I was in bounds each iteration. I also wrote it in a recursive style, because it just makes more sense to me that way. Two things stuck out to me. Though I was sure what I had written was an accurate representation of what I thought binary search was all about, I couldn’t actually recall ever seeing an implementation, having never taken a programming or algorithm class before (and still owning zero books on the subject, despite a personal resolution to remedy this last year…). So while I was confident that my algorithm would return the index of the target value, I wasn’t at all sure that I was implementing a “binary search” to the snob-standard. The other thing that made me think twice was simply whether or not I would ever breach the recursion depth limit in Python on really huge sets. Obviously this is possible, but was it likely enough that it would occur in the span of a few thousand runs over large sets? Sometimes what seems statistically unlikely can pop up as a hamstring slicer in practical application. In particular, were the odds good that a random guess would lead the algorithm to follow a series of really bad guesses, and therefore occasionally blow up. On the other hand, were the odds better that random guesses would be occasionally so good that on average a random index is better than a halved one (of course, the target itself is always random, so how does this balance)? I didn’t do any paperwork on this to figure out the probabilities, I just ran the code several thousand times and averaged the results — which were remarkably uniform. I split the process of assignment into two different procedures, one that narrows the window to be searched randomly, and another that does it by dividing by two. Then I made it iterate over ever larger random sets (converted to sorted lists) until I ran out of memory — turns out a list sort needs more than 6Gb at around 80,000,000 members or so. I didn’t spend any time rewriting to clean things up to pursue larger lists (appending guaranteed larger members instead of sorting would probably permit astronomically huge lists to be searched within 6Gb of memory) but the results were pretty interesting when comparing the methods of binary search by window halving and binary search by random window narrowing. It turns out that halving is quite consistently better, but not by much, and the gap may possibly narrow at larger values (but I’m not going to write a super huge list generator to test this idea on just now). It seems like something about these results are exploitable. But even if they were, the difference between iterating 24 instead of 34 times over a list of over 60,000,000 members to find a target item isn’t much difference in the grand scheme of things. That said, its mind boggling how not even close to Python’s recursion depth limit one will get, even when searching such a large list. Here is the code (Python 2). from __future__ import print_function import random def byhalf(r): return (r[0] + r[1]) / 2 def byrand(r): return random.randint(r[0], r[1]) def binsearch(t, l, r=None, z=0, op=byhalf): if r is None: r = (0, len(l) - 1) i = op(r) z += 1 if t > l[i]: return binsearch(t, l, (i + 1, r[1]), z, op) elif t < l[i]: return binsearch(t, l, (r[0], i - 1), z, op) else: return z def doit(z, x): l = list(set((int(z * random.random()) for i in xrange(x)))) l.sort() res = {'half': [], 'rand': []} for i in range(1000): if x > 1: target = l[random.randrange(len(l) - 1)] elif x == 1: target = l[0] res['half'].append(binsearch(target, l, op=byhalf)) res['rand'].append(binsearch(target, l, op=byrand)) print('length: {0:>12} half:{1:>4} rand:{2:>4}'\ .format(len(l), sum(res['half']) / len(res['half']), sum(res['rand']) / len(res['rand']))) for q in [2 ** x for x in range(27)]: doit(1000000000000, q) Something just smells exploitable about these results, but I can’t put my finger on why just yet. And I don’t have time to think about it further. Anyway, it seems that the damage done by using a random index to make certain you stay within bounds doesn’t actually hurt performance as much as I thought it would. A perhaps useless discovery, but personally interesting nonetheless.
What would be the most elegant\efficient way to reset all fields of a certain model instance back to their defaults? I once did it this way. No better way I could think of. from django.db.models.fields import NOT_PROVIDED for f in instance._meta.fields: if f.default <> NOT_PROVIDED: setattr(instance, f.name, f.default) # treatment of None values, in your words, to handle fields not marked with null=True ... ... # treatment ends instance.save() Note: In my case all the fields, did have default value. Hope it'll help. Happy Coding. After you've made changes to that instance but before you've "saved" it, I assume? I think you'll probably need to re-retrieve it from the database... I don't think that Django model instances keep a "history" of changes that have been made to an instance. def reset( self, fields=[], exclude=[] ): fields = fields or filter( lambda x: x.name in fields, self._meta.fields ) exclude.append( 'id' ) for f in filter( lambda x: x.name not in exclude, fields ): if getattr( f, 'auto_now_add', False ): continue if f.blank or f.has_default(): setattr( self, f.name, f.get_default() )
pilepoil Asus X59SL problème accès internet (seulement Google, BNP) Salut, J'ai depuis peu de temps le Asus X59SL. J'ai mis un double boot Ubuntu et Vista. Sous Vista, pas de soucis d'accès à internet. Sous Ubuntu 8.10 64 bits, j'ai accès uniquement à Google et à BNP (et peut être d'autres, mais pas nombreux). J'ai fait des recherches sur le forum, mais les différentes solutions ne m'ont pas permis de résoudre quoique ce soit. J'ai également rebooté ma freebox qui est configurée en mode routeur. Voilà, si quelqu'un a une grande idée. Merci. - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne Honni Re : Asus X59SL problème accès internet (seulement Google, BNP) J'ai le même problème j'ai essayer pas mal de chose mais rien a faire. La carte de ce portable est une atheros. Perso des fois (rarement) tout internet marche qd je suis branché en réseau mais je n'ai pas compris pk donc impossible de faire fonctionner non stop de plus ce qui m'intéresse est de pouvoir est de pouvoir me connecter en wifi (normal avec un portable). De même aucun pb sur vista pour ce connecter (le monde a l'envers). Et le problème est le mm pour Ubuntu 8.10 32bit. J'ai donc testé avec Ubuntu 8.02 le résultat... bien pire pas du tout moyen de me connecter. Sur le port Ethernet principal de ma freebox le Ubuntu hésite entre me dire qu'il ni a pas de connexion et le fait de demander une adresse réseau (sachant que sa fonctionne tres bien sur mon fixe qui est lui sur Ubuntu 8.02). une fois connecter a l'un des 4 autre port Ethernet de la freebox => Connexion à un réseau câblé mais pas moyen d'accéder a internet et aucun moyen de rentrer ma clef wep... A ce taper la tête contre les mures et j'ai vraiment besoin qu'Ubuntu marche bien et partout (chez moi, fac avec cette saleté de vpn...) donc... HELP pleaseeeeee, je suis sous vista je vais finir mal la sans votre aide pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) Salut, Moi, j'ai remarqué un problème au niveau des TTL de ping, et ce aussi bien sur ubuntu que sous windows. Il y a un TTL de 122 ce qui veut dire qu'il est passé par 133 routeurs différents. Bizarre non ? Pour google, le TTL est de 242, ce qui parait plus normal. Sur ubuntu : nana@nana-pc:~$ ping free.fr PING free.fr (212.27.48.10) 56(84) bytes of data. 64 bytes from www.free.fr (212.27.48.10): icmp_seq=1 ttl=122 time=63.2 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=2 ttl=122 time=66.2 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=3 ttl=122 time=70.2 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=4 ttl=122 time=63.4 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=5 ttl=122 time=65.2 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=6 ttl=122 time=70.8 ms 64 bytes from www.free.fr (212.27.48.10): icmp_seq=7 ttl=122 time=64.1 ms Sur un autre PC sous windows (XP) : C:\Documents and Settings\doom>ping free.fr Envoi d'une requête 'ping' sur free.fr [212.27.48.10] avec 32 octets de données  : Réponse de 212.27.48.10 : octets=32 temps=62 ms TTL=122 Réponse de 212.27.48.10 : octets=32 temps=71 ms TTL=122 Réponse de 212.27.48.10 : octets=32 temps=71 ms TTL=122 Réponse de 212.27.48.10 : octets=32 temps=69 ms TTL=122 Statistiques Ping pour 212.27.48.10: Paquets : envoyés = 4, reçus = 4, perdus = 0 (perte 0%), Durée approximative des boucles en millisecondes : Minimum = 62ms, Maximum = 71ms, Moyenne = 68ms mais le traceroute donne moins de saut : C:\Documents and Settings\doom>tracert free.fr Détermination de l'itinéraire vers free.fr [212.27.48.10] avec un maximum de 30 sauts : 1 2 ms 1 ms <1 ms 192.168.0.254 2 54 ms 51 ms 51 ms 88.xxx.xxx.254 3 * 52 ms 60 ms 213.228.9.254 4 59 ms 59 ms 60 ms bordeaux-6k-1-v804.intf.routers.proxad.net [212. 27.50.85] 5 66 ms 65 ms * bzn-crs16-1-be1100.intf.routers.proxad.net [212. 27.51.57] 6 63 ms 63 ms 73 ms bzn-6k-sys-po20.intf.routers.proxad.net [212.27. 51.70] 7 74 ms 64 ms 64 ms www.free.fr [212.27.48.10] Itinéraire déterminé. C:\Documents and Settings\doom> Vraiment bizarre !!! Dernière modification par pilepoil (Le 08/01/2009, à 17:51) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) En fait, je pense que free doit répondre au requete de ping avec un TTL forgé à 128. Donc 128-7=121, à part le 1 qui traine quelque part, c'est bon. - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) Le traceroute via ubuntu est bizarre car vers free, cela ne semble jamais aboutir (c'est reproductible). (Notez que pour windows, cela abouti au 7ème saut). Ci-dessous li'mage du traceroute (attention, longue à charger). Dernière modification par pilepoil (Le 08/01/2009, à 18:19) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) En fait, le problème se produit lors d'un reboot sous Vista puis retour à Ubuntu, comme le montre la séquence de test ci-dessous (notez que j'ai rebooté au moins dix fois de Ubuntu->Ubuntu avant d'effectuer ce test et tout c'est toujours passé correctement). asus allumé, branchement du port ethernet connexion à ubuntu-fr.org (OK) connexion à portail.free.fr (OK) fermeture de firefox reboot asus Ubuntu->Ubuntu lancement firefox connexion à ubuntu-fr.org (OK) connexion à portail.free.fr (OK) recherche ubuntu sur google (OK) et accès à ubuntu-fr par lien google (OK) fermeture de firefox reboot asus Ubuntu->Vista lancement firefox connexion à ubuntu-fr.org (OK) connexion à portail.free.fr (OK) recherche ubuntu sur google (OK) et accès à ubuntu-fr par lien google (OK) fermeture de firefox reboot asus Vista->Ubuntu (raté en fait, je l'ai mis en veille) reboot asus Vista->Ubuntu lancement firefox connexion à ubuntu-fr.org (marche pas) connexion à portail.free.fr (marche pas) recherche ubuntu sur google (marche !!! (en fait c'était vraisemblablement en cache cf plus bas)) et accès à ubuntu-fr par lien google (marche !!!!!!) accès à la page téléchargement ubuntu (=> marche pas.!!!) accès à la page revendeur ubuntu (=> marche pas.!!!) recherche ubuntu sur google (marche !!!) et accès à ubuntu-fr par lien google (marche !!!!!!) fermeture de firefox reboot asus Ubuntu->Ubuntu (la routine check of drives se lance. Pourquoi ???) lancement firefox connexion à ubuntu-fr.org accès à la page téléchargement ubuntu (=> marche pas.!!!) connexion à portail.free.fr (=> marche pas.!!!) recherche ubuntu sur google (marche) et accès à ubuntu-fr par lien google (marche) effacement cache et session d'identification accès à ubuntu-fr par lien google (marche plus !!!!) fermeture de firefox deconnexion du cable ethernet reconnexion du cable ethernet lancement firefox connexion à ubuntu-fr.org icone réseau avec une croix et plus rien n'est accessible (même pas google.fr). fermeture de firefox reboot asus Ubuntu->Ubuntu toujours la croix sur l'icone réseau lancement firefox recherche ubuntu sur google (marche pas) ifconfig (pas d'inet pour eth0) sudo ifdown eth0 (refusé) sudo ifup eth0 (refusé) extinction du asus reboot freebox (arret/marche) attente fin reboot freebox demarrage du Asus Ubuntu Toujours la croix sur l'icone réseau. lancement firefox recherche ubuntu sur google (marche pas) désactivation mode itinérant -> passage DHCP (=> pas mieux) réactivation du mode itinérant (=>pas mieux) clic droit sur l'icone réseau > décoche activer le réseau clic droit sur l'icone réseau > recoche activer le réseau (=> pas mieux) rebranche correctement le cable :D => icone reseau OK recherche ubuntu sur google (marche) et accès à ubuntu-fr par lien google (marche pas) ifconfig déconnection port éthernet du Asus passage du port freebox 1 au port eth 3 reconnection port éthernet du Asus Toujours pas mieux et là j'abandonne ! Quelqu'un a t'il une idée ???? Dernière modification par pilepoil (Le 08/01/2009, à 19:57) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) Pour ceux que ça intéresse, j'ai a dispo une capture wireshark : ici (le login et le mot de passe sont ubuntu) Dernière modification par pilepoil (Le 09/01/2009, à 01:51) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) voici le résultat du ifconfig quand je n'ai accès qu'à google ! Notez les erreurs sur eth0 nana@nana-pc:~$ ifconfig ath0 Link encap:Ethernet HWaddr 00:22:43:xx:xx:f0 adr inet6: fe80::222:43ff:xxxx:89f0/64 Scope:Lien UP BROADCAST MULTICAST MTU:1500 Metric:1 Packets reçus:0 erreurs:0 :0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:0 (0.0 B) Octets transmis:0 (0.0 B) eth0 Link encap:Ethernet HWaddr 00:23:54:xx:xx:2a inet adr:192.168.0.11 Bcast:192.168.0.255 Masque:255.255.255.0 adr inet6: fe80::223:54ff:xxxx:b02a/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:63 erreurs:22 :0 overruns:0 frame:22 TX packets:79 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:23893 (23.8 KB) Octets transmis:12806 (12.8 KB) Interruption:19 Adresse de base:0xdead lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 Packets reçus:14 erreurs:0 :0 overruns:0 frame:0 TX packets:14 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:964 (964.0 B) Octets transmis:964 (964.0 B) wifi0 Link encap:UNSPEC HWaddr 00-22-43-xx-xx-F0-00-00-00-00-00-00-00-00-00-00 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:0 erreurs:0 :0 overruns:0 frame:0 TX packets:604 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:280 Octets reçus:0 (0.0 B) Octets transmis:27784 (27.7 KB) Interruption:16 nana@nana-pc:~$ Dernière modification par pilepoil (Le 09/01/2009, à 01:42) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne laurentb Re : Asus X59SL problème accès internet (seulement Google, BNP) Salut à tous, j'ai acheté cet appareil, et j'ai compilé toutes les infos de mon install dans la doc ici, avec notament ce qu'il faut savoir pour avoir un wifi ok à 100%. http://doc.ubuntu-fr.org/asus_x59sl avec ma freebox HD wifi, j'utilise sudo wifi-radar et la connexion est nickelle. Laurent Laurent Bellegarde, professeur de SVT, 64 Bayonne, France GNU/Linux a l'ecole : www.abuledu.org lprod.org, Montage audio et vidéo libre Lugs : www.euskalug.org, www.abul.org Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) Salut Laurent, On connaissait déjà ta documentation sur le wiki. C'est d'ailleurs grâce à cette page que notre choix s'est porté sur ce PC (on voulait être sûr d'acheter un portable "compatible" Ubuntu). Nous, on souhaite le faire marcher en éthernet filaire (avec le câble) et pas en wifi. Si je comprend bien, toi, tu n'as pas de double-boot. D'après ce que nous avons pu voir, le problème est dû au double-boot, donc normal que t'es pas vu le soucis. Pour info, notre freebox est également une HD. Merci. - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne zebulon25 Re : Asus X59SL problème accès internet (seulement Google, BNP) Bonjour, Pour info : j'ai exactement le même problème avec l'asus X71s en éthernet, je possède une freebox HD, je n'ai pas Vista, ubuntu 8.10(32bits) est en 'monoboot', mais je ne peux que surfer sur google et la BNP (??? en plus la page d'accueil). J'ai redémarrer la freebox mais je n'ai pas fait de hardereboot. Parfois la connexion ethernet(je n'ai même pas essayer le wifi) est reconnue, parfois non, j'essaierai le scénario proposé plus haut ce soir. j'ai essayé Mandriva One (livecd) en pensant que le problème vienne de network-manager mais cela ne change rien. Je ferai un test avec ubuntu 8.10 64 bits, 9.04 64bits alpha 3 et fedora 64bits (on ne sait jamais). Est-ce que quelqu'un a réussi à trouver une parade? Pb bios? Pb drivers? merci pour les infos. Dernière modification par zebulon25 (Le 20/01/2009, à 14:22) Hors ligne 6nome Re : Asus X59SL problème accès internet (seulement Google, BNP) Bonjours, Une amie à acheter cette asus aussi d'ailleur j'ai demandé le remboursement de vista pour n'utilisé que ubuntu mais le problème c'est que idem je peut pas allez plus loin, firefox ne peut pas affiché autre chose que google (même la page de mon routeur ne s'affiche pas) pire encore impossible de téléchargé la liste des dépot synaptic donc pas possible de le mettre en francais, pas possible de mettre à jours et pas possible d'installé le wifi je suis dans une impasse. laurentb Le 11/01/2009, à 17:49 Salut à tous, j'ai acheté cet appareil, et j'ai compilé toutes les infos de mon install dans la doc ici, avec notament ce qu'il faut savoir pour avoir un wifi ok à 100%. http://doc.ubuntu-fr.org/asus_x59sl wink avec ma freebox HD wifi, j'utilise Code: sudo wifi-radar et la connexion est nickelle. Laurent J'ai bien lu la doc mais bon comment faire sans synaptic. Impossible de mettre à jour la liste de mes dépots donc le sudo apt-get impossible. J'espère qu'une solution va vite se débloqué (si je trouve quoique soit de mon coté je posterai au plus vite sur le forum) surtout que j'insite les gens à se tourné vers le libre et de demandé le remboursement de leur windows assez délicat comme situation Hors ligne 6nome Re : Asus X59SL problème accès internet (seulement Google, BNP) Pour dépanné se que j'ai pu faire c'est de compilé le driver Atheros de la carte wifi (sur la doc Atheros AR5007EG n'ayant pas internet sur l'asus je me suis aidé d'une clé usb et d'un autre ordi) internet fonctionne normalement. La personne compte bien utilisé le wifi mais c'est toujours embettant de savoir que le RJ45 fonctionne pas sur son ordi. Je posterai si je trouve une solution sur le sujet. Hors ligne laurentb Re : Asus X59SL problème accès internet (seulement Google, BNP) Salut à tous, je complète mes infos sur cet appareil. hardy 64 bits, filaire 100% ok, wifi ok après compilation intrepid 32 bits, filaire 100% ok, wifi ??? pas détecté semble-t-il, je cherche... intrepid 64 bits, filaire semble-t-il reconnu, tout semble normal, mais rien ne sort sur le net... wifi pas reconnu aussi. je poursuis mes tests. Laurent Laurent Bellegarde, professeur de SVT, 64 Bayonne, France GNU/Linux a l'ecole : www.abuledu.org lprod.org, Montage audio et vidéo libre Lugs : www.euskalug.org, www.abul.org Hors ligne pilepoil Re : Asus X59SL problème accès internet (seulement Google, BNP) Laurentb, On peut savoir pourquoi tu t'es permis de me supprimer des utilisateurs de ce PC dans cet article ??? Nous, on est en intrepid 64bits filaire et ça marche pas. Tu l'as d'ailleurs toi même écrit. Dernière modification par pilepoil (Le 30/04/2009, à 22:54) - Asus X59SL ; Pentium T3200 ; ATI Mobility Radeon HD3470 ; HD 160G & 3G RAM . . . Ubuntu Intrepid Ibex 8.10 (Amd 64 bits) + Vista OS - Acer 5612WLMi ; core duo T2300E ; GeForce Go 7300 TC ; HD 120G & 2G RAM . . . Ubuntu Jaunty Jackalope 9.04 (32bits) + XP via VirtualBox Hors ligne
#1001 Le 25/04/2010, à 17:05 oGu Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour à tous ! Tout d'abord félicitations pour l'excellente tenue de ce forum et pour tous les scripts que j'ai piqué sur ce topic!! Etant passé de Firefox à IceCat dernièrement, j'aimerais qu'un Ubuntero sachant scripter adapte ce fichier (qui défragmente les données sqlite de Firefox) à mon nouveau browser : #!/usr/bin/env python # coding=utf8 from os import getenv,getuid,kill,waitpid from subprocess import call,Popen,PIPE from os.path import abspath,join,exists from signal import SIGTERM def recup_rep_profiles(): base_profile=join(getenv('HOME'),".mozilla","firefox") profiles_ini=join(base_profile,"profiles.ini") rep_profiles=[] lignes=open(profiles_ini).read().splitlines() for ligne in lignes: ligne=ligne.strip() if ligne.startswith("Path="): rep_profiles.append(join(base_profile,ligne[5:])) return rep_profiles def patch_sessionstore(sessionstore): if not exists(sessionstore): return chaine=open(sessionstore,"rb").read() chaine=chaine.replace('session:{state:"running"}})','session:{state:"stopped"}})') open(sessionstore,"wb").write(chaine) def recup_firefox_pids(): lignes=Popen(['pgrep','-x','firefox','-U',str(getuid())],stdout=PIPE).communicate()[0] firefox_pids=[] for ligne in lignes.splitlines(): ligne=ligne.strip() if not ligne: continue firefox_pids.append(int(ligne)) return firefox_pids def kill_firefox(firefox_pids): for pid in firefox_pids: kill(pid,SIGTERM) # Récupère les chemins vers les profiles profiles=recup_rep_profiles() # Récupère les PID des processus Firefox en cours d’exécution pids=recup_firefox_pids() # Demande confirmation si Firefox est en cours d’exécution if pids: retour=call(['zenity','--question','--title=Attention','--text=Firefox est encours d’exécution\nSi vous cliquez sur Valider, Firefox sera fermé le temps d’effectuer l’optimisation et relancé après']) if retour==1: exit(1) # Arrête Firefox kill_firefox(pids) # Patche les fichiers sessionstore.js for profile in profiles: patch_sessionstore(join(profile,"sessionstore.js")) # Compacte les bases de données progress=Popen(["zenity","--title","Optimisation","--text","Optimisation en cours...","--progress","--pulsate","--auto-close"],stdin=PIPE) for profile in profiles: Popen(['find',profile,'-name','*.sqlite','-print','-exec','sqlite3','{}','VACUUM',';'],stdout=progress.stdin) progress.stdin.close() # Relance Firefox s’il était lancé if pids: Popen(['firefox'],stderr=open("/dev/null"),stdout=open("/dev/null")) Est-ce techniquement possible? Si oui merci d'avance à celui/ceux qui se pencheront sur la transformation de ce code! Bye! Ogu Dernière modification par oGu (Le 25/04/2010, à 17:06) Ubunteros de tous les pays, unissez-vous ! Hors ligne #1002 Le 27/04/2010, à 13:02 bugs néo Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) ce script permet de rendre plus rapide Firefox? jeu de course open source earth-race (le jeu est en réécriture complète depuis janvier, afin de pouvoir aller plus vite par la suite) Hors ligne #1003 Le 27/04/2010, à 14:12 oGu Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour! Oui, c'est ce qu'il est censé faire : en défragmentant les bases de données (awesome bar etc...) il rend leur accès plus rapide. Après j'ignore si l'effet est réel, ou si c'est un placebo... Quelques liens : -ici la page avec le script originel (tiens, je me rends compte que l'accentuation pose problème avec le script que j'ai posté, ce qui n'est pas le cas sur le script initial) http://zigazou.wordpress.com/2009/05/21/optimisation-et-gain-despace-sous-firefox-3/ -pour les fans de la ligne de commande, la procédure à suivre : http://www.webdevonlinux.fr/2009/04/optimiser-le-demarrage-de-firefox/ Dernière modification par oGu (Le 27/04/2010, à 14:17) Ubunteros de tous les pays, unissez-vous ! Hors ligne #1004 Le 29/04/2010, à 12:46 2F Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) bonjour, un script utile (de débutant mais qui focntionne) pour éviter de se déplacer quand on fait de la maintenance. il permet d'avoir pas mal d'infos sur une machine win et après permet quasiement tout si vous faites vos scripts en bat, reg ou vbs pour les lancer sur la machine distante. il faut : -winexe -zenity -un dossier en lecture seule au moins, accessible depuis le réseau (à renseigner dans gdiag.conf) -la suite pstools à insérer dans ce même dossier ce script peut être très évolutif et très utile, j'utilise moi, une version non graphique avec nos scripts perso du boulot qu'on lance directement par le menu. donc j'ai voulu le rendre général. vos remarques, conseils sont les bienvenu. gdiag.sh #! /bin/bash # Effectue des diagnostiques et rapporte des informations sur des pc win # Licence : GPL # Dépendance : winexe, zenity, la suite pstools sur un serveur, rdesktop, samba # Il faut un accès en lecture seule au moins sur un serveur acessible depuis l'exterieur (a renseigner dans gdiag.conf) #à finir #finir lancement script personnaliser (script) : vérifier la présence du fichier sur le serveur #possibilité de tuer un processus avec tskill PID . ./gdiag.conf pingo="Ping" nmap="Nmap" proc="Processus en cours" heure="Resynchroniser date & heure" service="Services (processus) : start, stop, restart" script="Lancer un script .vbs .reg ou .bat" psinfo="Infos PC (programmes installés, SP, ram, HDD, etc..)" mstsc="Ouvrir port RDP, rdesktop & fermer RDP" msconfig="Afficher msconfig démarrage" browse="Parcourir le disque C:/" logs="Afficher les erreurs du journal d'évènements des 24H" diag="Diag complet" reboot="Rebooter la machine" console="Accéder à la console DOS distante" autre="Passer sur une autre machine" quit="Quitter" machine () { target=$(zenity --entry --title="Gdiag" --text="Nom de machine ou IP"); if ping -c 1 -w 3 $target then mdp=$(zenity --entry --hide-text --title="Gdiag" --text="Mot de passe administrateur"); menu else echo "$target ne ping pas !!" | zenity --text-info --title="$target NE PING PAS !" --width 500 --height 200 && machine fi } machine menu () { rm -Rf res.txt && killall winexe action=`zenity --list --title "$titre" --width 500 --height 550 --radiolist --column=Choix --column "Action" TRUE "$pingo" FALSE "$nmap" FALSE "$proc" FALSE "$heure" FALSE "$service" FALSE "$script" FALSE "$psinfo" FALSE "$mstsc" FALSE "$msconfig" FALSE "$browse" FALSE "$logs" FALSE "$reboot" FALSE "$console" FALSE "$autre" FALSE "$diag" FALSE "$quit"` } while [ "$choix" != "quit" ]; do menu case "$action" in "$pingo") ping $target | zenity --text-info --width 700 --height 500 --title="$pingo de $target" ;; "$nmap") nmap $target | zenity --text-info --width 700 --height 500 --title="$nmap de $target" ;; "$proc") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k qprocess * /system & exit" | zenity --text-info --width 600 --height 800 --title="$proc" ;; "$heure") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k w32tm /resync /rediscover & exit" | zenity --progress --pulsate --auto-close && echo "Allume un cierge mais $target devrait être à l'heure maintenant" | zenity --text-info --title="$heure de $target" --width 400 --height 200 ;; #stopper, relancer ou lancer un service : mettre le nom du service sans l'extension "$service") serv=$(zenity --entry --title="Gdiag" --text="Nom du service"); action=`zenity --list --title "Services" --width 500 --height 550 --radiolist --column=choix --column "Action" TRUE "lancer" FALSE "stopper" FALSE "relancer" FALSE "$quit"` case "$action" in "lancer") acte=start ;; "stopper") acte=stop ;; "relancer") acte=restart ;; "quit") menu ;; esac winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k \\\\"$serveur"\psservice.exe /accepteula $acte $serv & exit" | zenity --progress --pulsate --auto-close && echo "le service $serv est $acte sur $target" | zenity --text-info --width 400 --height 200 --title="$relancetb2 sur $target" ;; # pour lancer un programme bat vbs ou reg depuis le dossier du serveur : mettre le nom exacte du fichier avec extension "$script") #choix du fichier script file=$(zenity --entry --title="Gdiag" --text="Nom du script dans le dossier $serveur avec son extension :"); #vérifie l'extension du fichier ext=`echo $file | grep -o '\.[^.]*$'` case $ext in ".bat") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k start \\\\"$serveur"\\"$file" & exit" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; ".BAT") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k start \\\\"$serveur"\\"$file" & exit" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; ".vbs") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k cscript \\\\"$serveur"\"$file" & exit"" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; ".VBS") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k cscript \\\\"$serveur"\"$file" & exit"" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; ".reg") winexe -U "$target"/administrateur%$mdp //"$target" "regedit /S \\\\"$serveur"\"$file" & exit"" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; ".REG") winexe -U "$target"/administrateur%$mdp //"$target" "regedit /S \\\\"$serveur"\"$file" & exit"" | zenity --progress --pulsate --auto-close && echo "$file lancé sur $target" | zenity --text-info --title="Lancement de $file sur $target" --width 400 --height 200 ;; *) zenity --warning --text="extension non supportée" ;; esac ;; #liste les infos pc "$psinfo") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k \\\\"$serveur"\psinfo.exe -d -s /accepteula & exit" | zenity --text-info --title="$psinfo de $target" --width 700 --height 300 ;; #ouvre le port RDP, lance rdesktop et ferme le port RDP "$mstsc") winexe -U "$target"/administrateur%$mdp //"$target" "regedit /S \\\\"$serveur"\mstscon.reg" notify-send -i /usr/share/icons/gnome/scalable/apps/gnome-terminal.svg Hop "Ouverture du Terminal serveur client sur "$target"" && sleep 5 && rdesktop "$target" && winexe -U "$target"/administrateur%$mdp //"$target" "regedit /S \\\\"$serveur"\mstscoff.reg" && notify-send -i /usr/share/icons/gnome/scalable/apps/gnome-terminal.svg Hop "Fermeture du Terminal serveur client sur "$target"" ;; #msconfig "$msconfig") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k reg query HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run & exit" > res.txt 2>&1 | zenity --progress --pulsate --auto-close cat res.txt |grep REG_SZ | zenity --text-info --title="$msconfig de $target" --width 700 --height 300 ;; "$browse") `nautilus smb://administrateur@"$target"/c$` ;; #affiche les erreurs du journal d'évènements des 24H "$logs") winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k \\\\"$serveur"\psloglist.exe -d 1 -f e /accepteula & exit" >res.txt | zenity --progress --pulsate --auto-close && cat res.txt|grep -e ] -e Time| zenity --text-info --title="$autorun de $target" --width 400 --height 200 ;; #ping, nmap, liste processus, infos pc, msconfig et erreurs journal évènement des 24H "$diag") rm -f resultat.txt ping3=`ping -c 3 "$target"` echo -n "#### \\033[1;31mPING\\033[0m ####" echo -n "#### PING ####" > resultat.txt echo " " >> resultat.txt echo " " >> resultat.txt echo -n "$ping3" >> resultat.txt echo " " >> resultat.txt echo " " >> resultat.txt echo -n "#### \\033[1;31mNMAP\\033[0m ####" && echo " " >> resultat.txt echo "#### NMAP ####" >> resultat.txt echo " " >> resultat.txt gnome-terminal -x nmap "$target" -o nmap.txt && sleep 4 && cat nmap.txt |grep -v Nmap | grep -v Interesting >> resultat.txt echo -n "#### \\033[1;31mINFOS PC\\033[0m ####" echo -n "#### INFOS PC ####" >> resultat.txt winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k \\\\"$serveur"\psinfo.exe -d -s /accepteula & exit" >> resultat.txt echo " " >> resultat.txt echo -n "#### \\033[1;31mPROCESSUS\\033[0m ####" echo -n "#### PROCESSUS ####" >> resultat.txt echo " " >> resultat.txt echo " " >> resultat.txt winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k qprocess * /system & exit" >> resultat.txt echo -n "#### \\033[1;31mMSCONFIG\\033[0m ####" echo " " >> resultat.txt echo -n "#### MSCONFIG ####" >> resultat.txt echo " " >> resultat.txt winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k reg query HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run & exit" > res.txt && cat res.txt |grep REG_SZ >> resultat.txt echo " " echo "rapport terminé" && clear && cat resultat.txt | zenity --text-info --title="$diag de $target" --width 800 --height 600 ;; #reboot "$reboot") net rpc shutdown -r -C "Votre ordianteur va redémarrer" -f -I "$target" -U administrateur%$mdp | zenity --text-info --title="reboot de $target en cours..." --width 400 --height 400 ;; #ok, tapez exit pour terminer le processus correctement "$console") gnome-terminal -x winexe -U "$target"/administrateur%$mdp //"$target" "cmd /k qprocess * /system" & ;; "$autre") machine ;; "$quit") break;; *) esac done exit 0 le fichier de conf pauvre mais utile pour que ca serve à tout le monde gdiag.conf #mettre de la forme serveur="ip-serveur\mon dossier\scripts" serveur= et 2 tout petit .reg (a mettre aussi dans le dossier) qui servent dans le script... à vous de faire les votre après : mstscon.reg Windows Registry Editor Version 5.00[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]"fDenyTSConnections"=dword:00000000 mstscoff.reg Windows Registry Editor Version 5.00[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]"fDenyTSConnections"=dword:00000001 voilà. Bon app! Dernière modification par 2F (Le 04/05/2010, à 17:02) Hors ligne #1005 Le 08/05/2010, à 16:28 malagasy Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour, je n'y connais rien du tout en script, alors je voudrai savoir si quelqu'un peut me faire un petit script qui lance au démarrage de gnome: compiz --replace puisque depuis la mise à jour vers lucid, gnome n'aime pas trop compiz, et il faut que je lance cette commande à chaque fois, Merci Hors ligne #1006 Le 08/05/2010, à 17:16 Levi59 Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour, je n'y connais rien du tout en script, alors je voudrai savoir si quelqu'un peut me faire un petit script qui lance au démarrage de gnome: compiz --replace puisque depuis la mise à jour vers lucid, gnome n'aime pas trop compiz, et il faut que je lance cette commande à chaque fois, Merci pourquoi un script? il te suffit de mettre "compiz-replace" dans les programmes au démarrage non? Au pire si il faut le lancer en décalé, met "sleep X && compiz-replace" en remplaçant X par le nombre de seconde nécessaires. Hors ligne #1007 Le 08/05/2010, à 20:51 #1008 Le 08/05/2010, à 21:24 Levi59 Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) c'est pour ça que j'adore cette communauté .. merci, ça marche À vot' service M'sieurs! ^^ Dernière modification par Levi59 (Le 08/05/2010, à 21:25) Hors ligne #1009 Le 08/05/2010, à 22:59 #1010 Le 18/05/2010, à 10:16 yamo Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour, Comme j'en avait marre de synaptic je me suis fait un script de mise à jour basé sur apt-get. Tout d'abord, j'ai pris l'habitude de mettre tous mes scripts dans ~/bin ça permet d'avoir un répertoire home un peu rangé! Et de les lancer très simplement sans devoir ajouter ./ au début Vous aller trouver étrange que je commence tout de suite par mettre à jour mais c'est par ce que j'utilise cron-apt qui permet de télécharger en tâche de fond les paquets, comme ça les mises paraissent encore plus rapides! Attention, la mise à jour d'un système n'est pas triviale, lisez ces scripts avant de les utiliser. Personnellement, j'essaie toujours de comprendre ce que va faire une commande avant de l'exécuter. Je n'ai laissé que la version console, la version graphique n'a pas vraiment d’intérêt vu que ça fonctionne bien de base en graphique. ~/bin/update.console.sh : #!/bin/bash # ~/bin/update.console.sh # mise à jour du 11/12/2011 echo "mise à jour" && sudo nice -n 19 ionice -c 3 apt-get dist-upgrade -y && echo "nettoyage" && sudo nice -n 19 ionice -c 3 apt-get clean && echo "nettoyage" && sudo nice -n 19 ionice -c 3 apt-get autoclean && echo "coup de balai" && sudo nice -n 19 ionice -c 3 apt-get autoremove && echo "toc toc toc" && sudo nice -n 19 ionice -c 3 apt-get -qq update && ## l'usage de -qq est totalement déconseillé en dehors d'update! echo "mise à jour" && sudo nice -n 19 ionice -c 3 apt-get dist-upgrade -y && echo "dernieres mises à jour" && grep -v '\(half\|configure\|trigproc\|triggers-pending\|startup\|install-info\|unpacked\|config-files\|triggers-awaited\|installed\)' /var/log/dpkg.log Pour tester mon script voici un script d'installation : mkdir ~/bin & cd ~/bin/ && wget http://pasdenom.info/scripts/update.console.sh && chmod 554 ~/bin/update.console.sh Enfin pour lancer le script il suffit de taper en console : update.console.sh Dernière modification par yamo (Le 11/12/2011, à 12:31) Hors ligne #1011 Le 28/05/2010, à 12:49 kazylax Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Bonjour, un script existe pour Emesene ? de msn messenger Cordialement, Hors ligne #1012 Le 28/05/2010, à 13:11 kyncani Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) J'aimerais savoir comment récupérer la liste des paquets installé manuellement. Je sais récupérer la liste de tous les paquet installer, mais c'est très casse pied de la nettoyer pour ne conserver que les 60 intéressants... J'ai donc mis les tag "auto" qu'il faut dans aptitude mais je ne sais pas comment en extraire la liste aptitude search '~i!~M' Hors ligne #1013 Le 28/05/2010, à 20:56 Phendrax Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Yop J'ai écrit une petite interface en python pour xinput que j'ai appelé gtk-xinput. Cela permet de facilement créer des pointeurs à l'écran et de leur associer des souris. L'interface affiche donc la liste des pointeurs et chaque souris qui lui est associée. Sur le screen on peut voir par exemple que Logitech G500 est associée au pointeur dont l'id=2. On peut créer des nouveaux pointeurs à l'écran avec le bouton "Ajouter" (en leur donnant un nom contenant uniquement des caractères alphanumériques) et en supprimer avec le bouton "Supprimer". Le bouton "Recharger" sert à rafraichir l'affichage, par exemple si on vient de brancher une nouvelle souris. Par défaut les souris sont associées au pointeur Virtual core. Il suffit de les glisser vers le pointeur que l'on veut. Pour l'installer il est nécessaire d'avoir python, pygtk, libglade et xinput sudo apt-get install python python-gtk2 libglade2-0 xinput Le paquet debian de gtk-xinput peut être téléchargé ici : http://dl.free.fr/getfile.pl?file=/zoIHihCO Le programme peut ensuite être lancé avec la commande gtk-xinput. [Edit : il semblerait que le programme ne fonctionne que sous Lucid Lynx notamment à cause de la méthode set_visible() qui ne semble pas exister dans les versions précédentes et à cause de l'affichage de la commande xinput qui diffère selon les versions] Dernière modification par Phendrax (Le 28/05/2010, à 21:47) HP Pavillon dv6800 - Ubuntu 10.10 - GNOME 2.32.0 Hors ligne #1014 Le 29/05/2010, à 01:42 BorX Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Marrant Ça permet d'utiliser plusieurs souris en même temps ?? Autant je trouve ça vraiment intéressant, autant j'ai du mal à voir dans quel contexte l'utiliser... A plusieurs ? Pour les ambidextres ? Hors ligne #1015 Le 29/05/2010, à 04:43 kyncani Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Deux personnes sur une même machine Hors ligne #1016 Le 29/05/2010, à 12:01 hulk Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) MON scripte iptables utilisation iptables start ou iptables stop a placer dans /etc/init.d/ et a rendre exécutable puis l'activer avec update-rc.d #! /bin/sh ### BEGIN INIT INFO # Provides: iptables # Required-Start: # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 1 6 # Short-Description: script iptables ### END INIT INFO PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin case "$1" in start) ### SUPRESSION de TOUTES LES ANCIENNES TABLES (OUVRE TOUS!!) ### iptables -F iptables -X ### BLOC TOUS !! (seules les autorisations des raigle aprés celle-ci sont prise en compte) ### iptables -P INPUT DROP iptables -P OUTPUT DROP iptables -P FORWARD DROP ### ACCEPT ALL interface loopbaak ### iptables -I INPUT -i lo -j ACCEPT iptables -I OUTPUT -o lo -j ACCEPT ### axepte en entrer les connection deja etablie (en gros sa permet d'axépter que les conection inicier ### par sont propre PC) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT ### DNS indispensable pour naviguer facilement sur le web ### iptables -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT ### HTTP navigation internet non sécuriséer ### iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT ### HTTPS pour le site de banque .... ### iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT ### emesene,pindgin,amsn... #### iptables -A OUTPUT -p tcp -m tcp --dport 1863 -j ACCEPT ### pop thunderbird ... réceptions des message #### iptables -A OUTPUT -p tcp -m tcp --dport 110 -j ACCEPT ### smtp thunderbird ... envoi des messages ### iptables -A OUTPUT -p tcp -m tcp --dport 25 -j ACCEPT ### ntpdate ( client ntp )... sincro a un serveur de temp ### iptables -A OUTPUT -p udp -m udp --dport 123 -j ACCEPT ### client-transmition iptables -A OUTPUT -p udp -m udp --sport 51413 -j ACCEPT iptables -A OUTPUT -p tcp -m tcp -s 192.168.1.2/255.255.255.255 --sport 30000:65000 -o eth0 -j ACCEPT # remplacer 192.168.1.2 par votre adresse ip et eth0 par l'interface connecter a internet. ### ping ... autorise a pinger un ordinateur distent ### iptables -A OUTPUT -p icmp -j ACCEPT ### ping ... autorise l'extèrieur a vous pinger ### # iptables -A INPUT -p icmp -j ACCEPT # enlever le # du début de ligne pour activer cette règle ### LOG ### Log tous ce qui qui n'est pas accepter par une règles précédente # prés requit : sudo apt-get install sysklogd # echo 'kern.warning /var/log/iptables.log' > /etc/syslog.conf iptables -A OUTPUT -j LOG --log-level 4 iptables -A INPUT -j LOG --log-level 4 iptables -A FORWARD -j LOG --log-level 4 ;; stop) ### OUVRE TOUS !! ### iptables -F iptables -X ;; *) N=/etc/init.d/${0##*/} echo "Usage: $N {start|stop}" >&2 exit 1 ;; esac exit 0 pour vérifier les log en temps réel 10 dernier entréer. tail -f /var/log/iptables.log Dernière modification par hulk (Le 07/12/2010, à 23:15) Amilo A 1667G , turion64 , X700 . debian squeeze amd64 driver libre radeon Hors ligne #1017 Le 31/05/2010, à 20:56 shamanphenix Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Pendant que je vous lisais, ça m'a donné envie, alors j'ai rapidement fait un premier jet (il faudrait que je rajoute un vidage de la corbeille, des miniatures Nautilus, etc. mais là n'est pas la question pour le moment ) pour mettre à jour mon système : #!/bin/sh # Permets de nettoyer et mettre à jour son système. # Dépendance(s) : libnotify-bin ou zenity # zenity --info --text="Mise à jour du système" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-working.png "Mise à jour du système" # Mettre à jour la liste des fichiers disponibles dans les dépôts : # zenity --info --text="Mise à jour de la liste des fichiers disponibles dans les dépôts" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-update-cache.png "Mise à jour de la liste des fichiers disponibles dans les dépôts" sudo apt-get update # Mettre à jour tous les paquets installés vers les dernières versions : # zenity --info --text="Mise à jour de tous les paquets installés vers les dernières versions" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-upgrade.png "Mise à jour de tous les paquets installés vers les dernières versions" sudo apt-get upgrade -y # Mettre à jour tous les paquets installés vers les dernières versions en installant de nouveaux paquets si nécessaire : # zenity --info --text="Mise à jour de tous les paquets installés vers les dernières versions en installant de nouveaux paquets si nécessaire" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-upgrade.png "Mise à jour de tous les paquets installés vers les dernières versions en installant de nouveaux paquets si nécessaire" sudo apt-get dist-upgrade -y # Supprimer les copies des paquets installés : # zenity --info --text="Suppression des copies des paquets installés" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-cleanup.png "Suppression des copies des paquets installés" sudo apt-get clean # Supprimer les copies des paquets désinstallés : # zenity --info --text="Suppression copies des paquets désinstallés" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-cleanup.png "Suppression copies des paquets désinstallés" sudo apt-get autoclean -y # zenity --info --text="Le système a été mis à jour" notify-send --icon=/usr/share/icons/hicolor/48x48/status/aptdaemon-upgrade.png "Le système a été mis à jour" Alors je sais, c'est super mal de mettre des "sudo" dans un script, et en plus c'est tout pourri puisque dans ce cas on est obligé de lancer le bouzin dans un terminal pour saisir le mot de passe root alors que j'aurais bien voulu le mettre dans mes scripts Nautilus... Est-ce que l'un(e) d'entre vous aurait une idée géniale, par hasard ? [edit]PS : c'est dommage que la balise "code" du forum ne colorie pas le texte, ça aiderait beaucoup à la lisibilité.[/edit] Dernière modification par shamanphenix (Le 31/05/2010, à 20:57) "Ubuntu". Si tous les gens du monde voulaient bien se tenir par la main... ce serait bien plus facile pour les électrocuter. What did /home/user/DARTHVADER say to /home/user/DARTHVADER/LUKESKYWALKER ?I AM YOUR PARENT FOLDER. Hors ligne #1018 Le 31/05/2010, à 21:01 TatrefThekiller Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) gksudo ? Hors ligne #1019 Le 31/05/2010, à 21:12 shamanphenix Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) gksudo ? En effet c'est une bonne idée, je pourrais remplacer le premier "sudo" par un "gksudo" (ça devrait suffire pour les autres, non ?), mais je dois avouer que je préfèrerais que cela soit entièrement automatisé (je sais saymal). "Ubuntu". Si tous les gens du monde voulaient bien se tenir par la main... ce serait bien plus facile pour les électrocuter. What did /home/user/DARTHVADER say to /home/user/DARTHVADER/LUKESKYWALKER ?I AM YOUR PARENT FOLDER. Hors ligne #1020 Le 31/05/2010, à 22:00 kyncani Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Ce genre de truc en en-tête peut marcher je crois : if test `id -u` -ne 0; then if test "$DISPLAY" = ""; then exec sudo "$0" "$@" else exec gksudo "$0" "$@" fi exit 1 fi Hors ligne #1021 Le 01/06/2010, à 00:04 BorX Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) Ou sinon o Donner au script les droits root : -rwx------ root root leScript o Y virer tous les sudo qui sont à l'intérieur o Et l'appeler avec sudo leScript Ce qui continuera à demander un mot de passe à son lancement, sauf si on modifie le fichier /etc/sudoers de façon à ce script puisse être exécuté avec un sudo sans mot de passe. Quelque chose du style %sudo ALL=NOPASSWD: leScript Mais je ne me souviens plus trop de la syntaxe. C'est pas compliqué, faut chercher un tout petit peu Hors ligne #1022 Le 01/06/2010, à 08:21 draco31.fr Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) [edit]PS : c'est dommage que la balise "code" du forum ne colorie pas le texte, ça aiderait beaucoup à la lisibilité.[/edit] C'est possible sur le wiki si tu indiques le langage ... mais à ma connaissance, ça n'a pas été prévu pour le forum. Cela dit, il te suffit d'en faire la demande sur le topic de la nouvelle version du forum, ou de créer un rapport de bug sur launchpad pour le projet ubuntu-fr.org Hors ligne #1023 Le 01/06/2010, à 12:38 Levi59 Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) shamanphenix a écrit : [edit]PS : c'est dommage que la balise "code" du forum ne colorie pas le texte, ça aiderait beaucoup à la lisibilité.[/edit] C'est possible sur le wiki si tu indiques le langage ... mais à ma connaissance, ça n'a pas été prévu pour le forum. Cela dit, il te suffit d'en faire la demande sur le topic de la nouvelle version du forum, ou de créer un rapport de bug sur launchpad pour le projet ubuntu-fr.org Ou encore de créer un plugin FF bien que ce ne soit pas du tout dans mes cordes... ^^ Si quelqu'un en est capable, voila un bon challenge! Hors ligne #1024 Le 01/06/2010, à 22:41 Nik0s Re : [ VOS SCRIPTS UTILES ] (et eventuelles demandes de scripts...) J'ai créé un petit script pour récupérer des wallpapers sur Interfacelift. Il y a, je pense, possibilité de l'améliorer, mais en voici le code retiré Dernière modification par Nik0s (Le 03/06/2010, à 20:46) Hors ligne
In brief: I have a problem with compiling vim with preferred python version. When I use --enable-pythoninterp it compiles with system OSX python version. When I use --enable-pythoninterp=dynamic I get an error in vim while trying :py import sys Here is what I was doing in more detail: % git clone https://github.com/b4winckler/macvim.git % cd macvim % ./configure --enable-pythoninterp \ --with-python-config-dir=/usr/local/lib/python2.7/config <- this option has no affects on result ... checking for python... /usr/local/bin/python checking Python version... 2.7 checking Python is 1.4 or better... yep checking Python's install prefix... /usr/local checking Python's execution prefix... /usr/local checking Python's configuration directory... /usr/local/lib/python2.7/config ... % make ... ** BUILD SUCCEEDED ** % open src/MacVim/build/Release/MacVim.app In the opened MacVim I type: :py import sys; print (sys.version, sys.executable) ('2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)]', '/usr/bin/python') Why 2.6.1?Why /usr/bin/python?My default python is 2.7! And it lives at /usr/local/bin/python I was searching for solution all day. And I found it. It is =dynamic attribute (but this solution had not explanation). After that I tried to recompile vim with dynamic python: % ./configure --enable-pythoninterp=dynamic ... output the same ... % make % open src/MacVim/build/Release/MacVim.app In opened MacVim: :py import sys And here comes an error: E370: Could not load library libpython2.7.aE263: Sorry, this command is disabled, the Python library could not be loaded. My OSX version is 10.6.8. Default python version is 2.7. % which python /usr/local/bin/python Can anybody explain how python is integrating into vim during the compilation?And how to fix the error with libpython2.7.a? update: I no longer have the environment described at the question. So I couldn't test new answers. But remaining part of mankind will appreciate your help.
If I try to spawn a process from a pylons controller, the server does not close the connection after sending the reply. Assume that test.py is a long running process, then this method in a pylons controller creates a response, but keeps the connection open: def index(self): from subprocess import Popen Popen(["python", "/temp/test.py"]) return "<h1>Done!</h1>" Moving the Popen to a new thread did not help. Luckily, I found a workaround: I start a new thread and add a sleep before the Popen. If the Popen starts after the response has been sent, the connection will be closed properly. def test(self): def worker(): import time time.sleep(5) from subprocess import Popen Popen(["python", "/temp/test.py"]) from threading import Thread Thread(target=worker).start() return "<h1>Done!</h1>" Can anyone explain this behavior? I'd like to be sure I won't be causing strange problems down the line. I'm using Python 2.5 and Pylons 0.9.6.1 on Windows XP SP3. UPDATE: bnonlan's answer is definitely on the right track. Popen has a parameter named close_fds which is supposed to solve this problem. In the Python 2.5 version of the subprocess module, this parameter is unsupported on Windows. However, in the Python 2.6 version you can set this parameter to True if you don't redirect stdin/stdout/stderr. def index(self): # I copied the 2.6 version of subprocess.py into my tree from python26.subprocess import Popen Popen(["python", "/temp/test.py"], close_fds=True) return "<h1>Done!</h1>" Sadly, I do want to redirect stdout, so I need to find another solution. Also, this implies that the workaround I found might keep other requests from closing if they happen to be running when the Popen executes. This is troubling.
I'm not sure if the title gave the best description. But this here is my problem. I have a class named 'Ball'. Each ball has its own width, radius and color. My code worked great while I was adding my own balls before the loop eg. ball1 = Ball() .... ball2 = Ball() I'm using pygame and what I want is so that whenever I press 'Space' it adds another ball with it's own characteristics. I have it so that it randomly gives a width, radius and color. Here is my code: BLACK = (0,0,0) WHITE = (255, 255, 255) BLUE = (0, 0, 255) RED = (255, 0, 0) GREEN = (0, 255, 0) fps = 30 color_list = [BLACK, BLUE, GREEN, RED] col = None siz = None wit = None posx = None posy = None ball = None class Ball: ballCount = 0 def __init__(self): Ball.ballCount +=1 self.col = random.choice(color_list) self.siz = random.randint(20, 80) self.wit = random.randint(1, 3) self.posx = random.randint(self.siz, width-self.siz) self.posy = random.randint(self.siz, height-self.siz) def blitball(self): pygame.draw.circle(screen, self.col, (self.posx, self.posy),self.siz, self.wit) def move(self): self.posx+=1 ball2 = Ball() ball1 = Ball() ball3 = Ball() while True: amount = 0 event = pygame.event.poll() keys = pygame.key.get_pressed() if event.type == QUIT: pygame.quit() sys.exit() if keys[K_q]: pygame.quit() sys.exit() ############################################################# if keys[K_SPACE]: eval("ball"+str(Ball.ballCount+1)) = Ball() ############################################################# screen.fill(WHITE) for r in range(Ball.ballCount): amount+=1 eval("ball"+str(amount)).move() eval("ball"+str(amount)).blitball() pygame.time.wait(int(1000/fps)) pygame.display.update() The for r in range(Ball.ballCount): is used so I don't have to type the functions for every ball. This works perfectly. If you know an easier way let me know. So what I need to add is: if keys[K_SPACE]: #add another ball, ball3, ball4,..etc. If this means changing some of my code please feel free to tell me or even do so yourself. Thanks for the replies in advance. (I HAVE MY PROBLEM WITHIN THE HASHTAGS) Dennis
It occurred to me that you may have actually be asking how to implement the + operator for dictionaries, the following seems to work: >>> class Dict(dict): ... def __add__(self, other): ... copy = self.copy() ... copy.update(other) ... return copy ... def __radd__(self, other): ... copy = other.copy() ... copy.update(self) ... return copy ... >>> default_data = Dict({'item1': 1, 'item2': 2}) >>> default_data + {'item3': 3} {'item2': 2, 'item3': 3, 'item1': 1} >>> {'test1': 1} + Dict(test2=2) {'test1': 1, 'test2': 2} Note that this is more overhead then using dict[key] = value or dict.update(), so I would recommend against using this solution unless you intend to create a new dictionary anyway.
A lesson in refactoring Some advice to a fellow developer on Reddit’s “Ask Anything Monday" the original code looked like this: cad_script.write("TEXT\n") cad_script.write("Justify\n") cad_script.write("TL\n") # Top-left justification cad_script.write("0.30,0.30\n") cad_script.write("0.5\n") cad_script.write("90\n") cad_script.write("H = 0) cad_script.write("TEXT\n") cad_script.write("Justify\n") cad_script.write("TL\n") # Top-left justification cad_script.write("%.2f,0.30\n" % (float(coord_list_x[-1]) + 0.30)) cad_script.write("0.5\n") cad_script.write("90\n") cad_script.write("H = 0") This was my advice: I think first you need to separate the data from the code, that is, separate the function you’re running from the parameters you’re passing to it so a first refactor could be this: def cad_writelines(lines): for line in lines: cad_script.write(line) and use it like this: textblock1 = ( "TEXT\n", "Justify\n", "TL\n", # Top-left justification "0.30,0.30\n", "0.5\n", "90\n", "H = 0", ) cad_writelines(textblock1) textblock2 = ( "TEXT\n", "Justify\n", "TL\n", # Top-left justification "%.2f,0.30\n" % (float(coord_list_x[-1]) + 0.30), "0.5\n", "90\n", "H = 0", ) cad_writelines(textblock2) this clearly shows what you’re doing (on the function) and to whom you’re doing it (the text) and it’s still the same loop you were referring to but in a more elegant/scalable way. one caveat I’ll mention here is that strings can be iterated the same as lists and tuples so in order to make your function more robust you should check that and act accordingly like this: def cad_writelines(lines): if isinstance(lines, basestr): lines = [lines] for line in lines: cad_script.write(line) for Python2 or: def cad_writelines(lines): if isinstance(lines, str): lines = [lines] for line in lines: cad_script.write(line) for Python3. this way if you pass a single string it’ll still work as expected instead of calling cad_script.write for each letter, you could also consider this scenario as a bug and raise an exception instead of working around the problem, as you prefer. now another nice refactoring would be to work with those newlines, as I see it you have newlines in every line other than the last, this will be a little tricky to solve in the general case so first let’s see a solution that works with lists and tuples: def cad_writelines(lines): if isinstance(lines, basestr): lines = [lines] if lines: for line in lines[:-1]: cad_script.write('%s\n' % line) cad_script.write(lines[-1]) so now you can use it like this: textblock1 = ( "TEXT", "Justify", "TL", # Top-left justification "0.30,0.30", "0.5", "90", "H = 0", ) cad_writelines(textblock1) and it’ll give you the same result, it works by creating slices the first [:-1] gives you a new list with all the elements except the last one (you can read it as “slice from the beginning up to the element before the last”) and the second slice [-1] just gives you the last element. notice I had to add an if statement to guard against empty lists, this is necessary because while the slices that return lists will simply give you empty ones if there’s no data, the ones that return elements will raise IndexError if they can’t find it. now the problem I mentioned before is that there’s some objects that can be iterated but do not have indexes (and thus can’t be sliced), it is possible to rewrite the logic to work without slices but in this case I think the easiest would be to convert it to a list, given the purpose of it I think it’ll be ok. so here it is: def cad_writelines(lines): if isinstance(lines, basestr): lines = [lines] else: lines = list(lines) if lines: for line in lines[:-1]: cad_script.write('%s\n' % line) cad_script.write(lines[-1]) here I’m either converting a string to a list containing it or convering whatever iterable we received to a new list, notice the difference between the notations on the following snippet: Python 2.7.3 (default, Mar 13 2014, 11:03:55) [GCC 4.7.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> mylist = range(3) >>> mylist [0, 1, 2] >>> list(mylist) [0, 1, 2] >>> [mylist] [[0, 1, 2]] we could’ve used tuples instead of lists which are a bit more efficient: def cad_writelines(lines): if isinstance(lines, basestr): lines = (lines,) else: lines = tuple(lines) if lines: for line in lines[:-1]: cad_script.write('%s\n' % line) cad_script.write(lines[-1]) finally there’s another refactoring possible here if we observe all the repeated lines from the input instead of the different ones, I think we can create a “pretty print” function that just takes the data and applies the formatting, something like this: def cad_pp_coord(x, y): textblock = ( "TEXT", "Justify", "TL", # Top-left justification "%.2f,%.2f" % (x, y), "0.5", "90", "H = 0", ) cad_writelines(textblock) and use it simply as: cad_pp_coord(0.3, 0.3) cad_pp_coord(float(coord_list_x[-1]) + 0.3, 0.3) this function uses cad_writelines so why didn’t I merged it and instead created a new separate function? because one solves the general problem and the other works on a specific case and we don’t want to lose generality, we use composition to keep the code general, compact and with simple purposes so it’s easier to understand and reason and we also can easily understand how those pieces fit together when composing them, this is good software design. final version: def cad_writelines(lines): if isinstance(lines, basestr): lines = (lines,) else: lines = tuple(lines) if lines: for line in lines[:-1]: cad_script.write('%s\n' % line) cad_script.write(lines[-1]) def cad_pp_coord(x, y): textblock = ( "TEXT", "Justify", "TL", # Top-left justification "%.2f,%.2f" % (x, y), "0.5", "90", "H = 0", ) cad_writelines(textblock) cad_pp_coord(0.3, 0.3) cad_pp_coord(float(coord_list_x[-1]) + 0.3, 0.3)
We seem to have a tradition of using the Homestar Runner character 'Trogdor the Burninator' to request tag deletion. I thought we could use crisper images, so I made my own. >:) @iglvzx's Trogdor is great, sure, but I always imagined trogdor as having a red skin (yes, I have played peasant's quest, but my artistic style is not to be questioned!). I have also taken on the daunting task of converting our good friend Mr. Troggie into a vector format (SVG), which you can download here: And for those of you who don't have a browser capable of SVG, then here's a PNG render: Do what you will with this file, and hopefully trogdor can burninate more! Request: Burninated: BONUS: The above were made using a Trogdor reference from an unknown person on the internet, which has some inaccuracies. I went to the source (Strongbad), and retraced Trogdor. This is the real Trogdor, in color, in higher definition: This inspired me, so I made some ASCII: { ( ) } "--____ )) " ) -_( \/ ( ___-´ \ | _( 0 o )_- `_ / (__-+`''-------=. _____ )_- \,/-' ,--______: .--´ / ___ __.-----_)_-\ <( V v v ---' _'=-----'/ .´ -- / `-----___. -__ {__ `----'_ | , .´ ___----------' `-_ '-----___----' ( /'--'| -- `--.__---'' ( / ' / | /_ | | ( _-) \ \ '-_) '. \_ -_ \_ `--_ `-__ `-__ `-_ /\ `-_ \ --.__A-/__\ _) | B U R N I N A T E D ! `-- `-/\_-' _/ `-__ ___-' | `-.--' | \ L__ \_.-
I am writing a Django app using mongodb. For a simple GET request I need to get results from database for which I am making a connection in the HTTPRequestHandler. The db operation for the HTTPRequest isn't a heavy operation. Should I close the connection in that handler itself. Her is the code snippet. def search(request): dbConnection = Connection('hostname', int('port-no')) ... made a small query to db. (not a heavy operation) dbConnection.close() return HTTPResponse(result) Is this code doing the suitable job of connecting and closing connections. What I want to know is that is it fast in terms of performance. I want this "search" HTTPRequestHandler to work fast. If this is not the way to go, can someone please explain when and how should we close connections and when to make them persistent in mongo.
[UPDATE 16 Aug 2011]Armin Ronacher has written a nice module called unicode-nazi that provides the Unicode warnings I discuss at the end of this article. Though I can't use Python 3 for any of my projects, it does have a few nice things. One particular behaviour where it improves on Python 2 is forbidding implicit conversions between byte strings and Unicode strings. For example: Python 3.1.2 (release31-maint, Sep 17 2010, 20:34:23) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 'foo' + b'baz' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't convert 'bytes' object to str implicitly If you do this in Python 2, it invokes the default encoding to convert between bytes and unicode, leading to manifold unhappinesses. So in Python 2, the above example looks like: Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> u'foo' + b'baz' u'foobaz' This looks OK, but problems lurk just below the surface. The default encoding used in nearly all circumstances is ASCII, and this conversion will blow up if non-ASCII bytes are involved. >>> u'foo' + b'\xe2\x98\x83' Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128) This is only a 3, at best 3.5 on the Russell scale of API misusability. Even worse, this conversion is done for any function implemented in C that expects unicode or bytes and receives the other type. For example, unicode()converts byte strings to unicode strings, and if not given an explicit encoding argument, uses the default encoding. >>> unicode('bob') u'bob' >>> unicode('\xe2\x98\x83') Traceback (most recent call last): File "<stdin>, line 1, in UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128) The right way to do it is call .decode()on the byte string. >>> '\xe2\x98\x83'.decode('utf8') u'\u2603' Just be sure you don't call it on a unicode string by mistake, or you'll get very confused: >>> u'foo'.decode('utf8') u'foo' >>> u'\u2603'.decode('utf8') Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode character u'\u2603' in position 0: ordinal not in range(128) Hey! That error says "encode", but we asked for it to decode! It turns out that since the utf8 decoder expects bytes as input, but we gave it unicode... Python helpfully tries to convert the unicode characters into bytes, using the default encoding! This is why the first decode call succeeds - all the characters in it can be converted to ASCII bytes. Is there any hope? Since Unicode was added to Python in 2.1, there's been a way to get Python 3's behavior. The sitemodule in the Python standard library sets Python's default encoding on startup. If edited to use "undefined" instead of "ascii", the above examples all fail instead of converting: >>> u'foo' + baz Traceback (most recent call last): File "<stdin>", line 1, in File "/usr/lib/python2.6/encodings/undefined.py", line 22, in decode raise UnicodeError("undefined encoding") UnicodeError: undefined encoding So why can't we switch to this today? Well, for one thing, lots and lots of code depends on this implicit encoding/decoding. Even code in the standard library: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/re.py", line 190, in compile return _compile(pattern, flags) File "/usr/lib/python2.6/re.py", line 243, in _compile p = sre_compile.compile(pattern, flags) File "/usr/lib/python2.6/sre_compile.py", line 506, in compile p = sre_parse.parse(p, flags) File "/usr/lib/python2.6/sre_parse.py", line 672, in parse source = Tokenizer(str) File "/usr/lib/python2.6/sre_parse.py", line 187, in __init__ self.__next() File "/usr/lib/python2.6/sre_parse.py", line 193, in __next if char[0] == "\\": File "/usr/lib/python2.6/encodings/undefined.py", line 22, in decode raise UnicodeError("undefined encoding") UnicodeError: undefined encoding Not to mention loads of existing third-party apps and libraries. So what can we do? Fortunately, Python allows registering new codecs. I've written a slight variation on the ASCII codec, ascii_with_complaints, which preserves the default Python behaviour, but also produces warnings. >>> import re >>> re.compile(u'\N{SNOWMAN}+') /usr/lib/python2.6/sre_parse.py:193: UnicodeWarning: Implicit conversion of str to unicode if char[0] == "\\": /usr/lib/python2.6/sre_parse.py:418: UnicodeWarning: Implicit conversion of str to unicode if this and this[0] not in SPECIAL_CHARS: /usr/lib/python2.6/sre_parse.py:421: UnicodeWarning: Implicit conversion of str to unicode elif this == "[": /usr/lib/python2.6/sre_parse.py:478: UnicodeWarning: Implicit conversion of str to unicode elif this and this[0] in REPEAT_CHARS: /usr/lib/python2.6/sre_parse.py:480: UnicodeWarning: Implicit conversion of str to unicode if this == "?": /usr/lib/python2.6/sre_parse.py:482: UnicodeWarning: Implicit conversion of str to unicode elif this == "*": /usr/lib/python2.6/sre_parse.py:485: UnicodeWarning: Implicit conversion of str to unicode elif this == "+": <_sre.SRE_Pattern object at 0xb76d6f80> Hopefully this will be useful as a tool for ferreting out those Unicode bugs waiting to break yourapplication, as well.
Hada de la Luna [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Bonjour, pour une personne qui a des problèmes avec la luminosité excessive de l'écran, j'aimerais savoir comment régler cela de façon "fixe" qui ne soit pas remise en question à chaque démarrage. En effet, en utilisant : Paramètres système > Luminosité et verrouillage La personne baisse au minimum possible puis au démarrage suivant s'explose ce qui lui reste de vision car l'écran est de nouveau au maximum de luminosité... Et rebelote le réglage... Comment faire pour que cela ne bouge plus "tout seul" ? Dernière modification par Hada de la Luna (Le 22/12/2012, à 00:26) Hada de la Luna :o) Hors ligne DAnGk41 Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" En ce qui me concerne, j'ai cette option sur mon écran, donc je l'ai réglé directement sur celui-ci. promis, je vais penser à me moderniser ==> http://i43.servimg.com/u/f43/11/83/24/66/pb090210.jpg Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Certes mais ce n'est pas possible sur un portable... Hada de la Luna :o) Hors ligne Zakhar Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Si ça n'est pas mémorisé avec les outils graphiques, économie d'énergie et autres, le principe est de trouver la ligne de commande qui fait ça et la rajouter au démarrage de session. Personnellement j'avais un problème de fréquence d'écran (oui j'ai encore un vieil écran cathodique et ces machins là ça a plusieurs fréquences contrairement aux LCD), et je l'ai réglé ainsi par un : /usr/bin/xrandr --output default --mode 1280x1024 à chaque ouverture de session (dans les programmes qui se lancent automatiquement au démarrage de session) Dernière modification par Zakhar (Le 14/10/2012, à 16:27) "A computer is like air conditioning: it becomes useless when you open windows." (Linus Torvald) Hors ligne Teromene Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Une technique un peu bourrin mais qui marche est de lancer au démarrage ce script python avec le pourcentage de luminosité en paramètre : #!/usr/bin/python import dbus import getopt , sys bus = dbus.SessionBus() proxy = bus.get_object('org.gnome.SettingsDaemon', '/org/gnome/SettingsDaemon/Power') iface = dbus.\ Interface(proxy, dbus_interface='org.gnome.SettingsDaemon.Power.Screen') opts, args = getopt.getopt(sys.argv[1:], "ho:v", ["help", "output="]) iface.SetPercentage(args[0]) Dernière modification par Teromene (Le 15/10/2012, à 17:22) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Merci ! Par contre, ne sachant pas programmer, je ne comprend même pas ce qu'il faut faire là... Pouvez vous détailler, pas à pas ? Hada de la Luna :o) Hors ligne Teromene Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Hors ligne mikeshlinux Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Bonjour, J'utilise F-lux, mais en ce moment je ne suis pas sur qu'il fonctionne correctement sur ma version 12.04. J'ai le même souci et j'ai reglé le contraste de mon écran. Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Il faut enregistrer ce code dans un fichier et tu le rend exécutable ( Clic droit - Propriétés - Onglet Permissions puis cliquer sur "Autoriser l’exécution du fichier comme programme) Puis aller dans Programmes au Démarrage ( Système → Préférences → Applications au démarrage ) faire ajouter puis dans commande faire parcourir sélectionner l'endroit au est installé le programme cliquer sur ok et ajouter le pourcentage à la fin de la ligne de texte qui est apparue. !! Marche uniquement pour gnome et Unity ! ! Juste pour dire que cela fonctionne parfaitement ! Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Bonsoir, en fait, il semble que cela ne fonctionne pas systématiquement... Des fois oui et des fois non... Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" En fait, j'ai l'impression que cela ne fonctionne que lorsque ça redémarre, si on éteint tout et qu'on allume, la luminosité est au max... Hada de la Luna :o) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Des idées alternatives ? De plus le logiciel F-lux n'existe pas dans ma logithèque... Hada de la Luna :o) Hors ligne richardgilbert Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" J'ai résolu mon problème avec: sudo apt-get install xbacklight Aplications au demarrage => ajouter xbacklight =25 Debian, Ubuntu, ElementaryOS, Voyager Linux, Lubuntu & Crunchbang. Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" Merci, j'ai mis ça en place, je te tiendrais au courant si cela fonctionne... Hada de la Luna :o) Hors ligne Major Grubert Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" J'ai ça qui date un peu mais c'est peut-être tjs valable : http://major.grubert.free.fr/index.php? … permanente Pour en savoir plus sur Gnome Shell. . Lenovo Yoga 13 : Windows 8.1 / Ubuntu Gnome 14.04 . Asus x7010 : Ubuntu 14.04 (Gnome Shell) . Asus Eeepc : Xubuntu 14.04 (32b) Hors ligne Hada de la Luna Re : [résolu] 12.04 LTS : régler la luminosité de façon "définitive" xbacklight fonctionne parfaitement et sur plusieurs machines différentes... Hada de la Luna :o) Hors ligne