It has always amused me how Apple is capable of producing such quality hardware and such bright ideas in design, yet makes an awful lot of questionable decisions in software. I have quite a number of complaints about the usability of Mac OS X, but the recent update to Mavericks was the last straw. I spend a lot of time in the text editor and press the delete key several hundred times a day. The problem is, I have a MacBook Air, the delete key is only 3 mm away from the power key, and I mishit it from time to time.
In past versions of OS X, pressing the power key would bring up the shutdown dialog, which could be dismissed with the esc key in a wink, no harm done. In Mavericks, the power key puts the system to sleep: the screen goes black, I have to press the power key again, in 2-3 seconds the system comes back up, and after that I need another couple of seconds to enter my password, about 5 seconds in total, which is quite disruptive when you are concentrating. On top of that, this behavior is not customizable, which effectively cripples a MacBook Air running Mavericks for programming or writing.
Let’s put aside the question of why someone would decide to implement such a ridiculous feature: it was implemented, everyone is unhappy, and Apple seems to have more important things to do than revert it. Unfortunately the MacBook is currently my primary instrument, and I could not just put it on the shelf until better times, so naturally I started looking for a solution. There was not much to be found: a hilarious Apple support post advising to hold the power key for 1.5 seconds to get the shutdown dialog, and a reddit post advising to change the powerd config file.
With the suggested fix applied, the screen still goes black on the power key, but the system no longer falls asleep, and you can wake it up instantly, which saves several seconds. And if you don’t want to lose another couple of seconds on password input, you can go to Security & Privacy in System Preferences and change the password requirement from “Immediately” to “5 seconds”. Now you don’t lose any time, but for me the screen blackout was still pretty damn disturbing, so I decided to investigate further.
Here is what we have: if the power key is released in less than 1.5 seconds, the system falls asleep; if the key is held for 1.5 seconds, the shutdown dialog appears; and if it is held for 5+ seconds, the system powers off. The third case is usually implemented in hardware, but the first two are software in nature, so there must be a key handler somewhere, and our goal is to find it and fix it. We also have two leads: the powerd config contains a “Sleep on power button” option, and the shutdown dialog has to be linked to some piece of code.
I decided to start with the shutdown dialog and searched the whole system disk for a phrase from it:
$ sudo find / -type f -print0 | xargs -0 fgrep "Are you sure you want to shut down your" 2>/dev/null
It was actually a long shot, because the dialog resources could be encrypted or compressed, or the string could be stored in Unicode, so I was already prepared to dump and examine memory. But luckily grep found the matching string in two files:
Binary file /System/Library/CoreServices/loginwindow.app/Contents/Resources/English.lproj/PowerButton.nib matches
Binary file /System/Library/CoreServices/loginwindow.app/Contents/Resources/English.lproj/ShutDown.nib matches
As we can see, both of them are part of the loginwindow application, and the name PowerButton.nib looks quite promising, huh? If we dig a bit deeper, we can figure out that PowerButton.nib is the dialog resource for the power key, and ShutDown.nib is the dialog resource for Apple -> Shut Down…:
$ cd /System/Library/CoreServices/loginwindow.app/Contents/Resources/English.lproj/
$ fgrep "Restart" PowerButton.nib ShutDown.nib
Binary file PowerButton.nib matches
$ fgrep "If you do nothing, the computer will shut down" PowerButton.nib ShutDown.nib
Binary file ShutDown.nib matches
Anyway, now we had two suspects: loginwindow.app and powerd. /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow and /System/Library/CoreServices/powerd.bundle/powerd, to be precise.
I fired up IDA and loaded powerd into it. A quick look around turned up a whole lot of strings containing “sleep”, an imported IOPMSleepSystemWithOptions from IOKit, and several exported symbols with “sleep” in their names, but nothing that instantly caught my eye. And no hits for the “powerbutton” substring.
So I switched to loginwindow, but it looked like gibberish. That is when I remembered that Apple uses code signing and binary encryption, so I ended up dumping process memory anyway. A quick internet search turned up a tool named readmem, which did the trick:
$ ps -A | fgrep loginwindow
59 ?? 0:02.31 /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow console
436 ttys000 0:00.00 fgrep loginwindow
$ sudo ./readmem -p 59 -m -o loginwindow.dmp
---------------------------------
Readmem v0.6 - (c) 2012, 2013 fG!
---------------------------------
[DEBUG] Found main binary mach-o image @ 0x10acf7000!
[DEBUG] Executing get_image_size
[DEBUG] Executing dump_binary
[DEBUG] Dumping __TEXT at 10acf7000 with size 99000 (buffer:200000)
[DEBUG] Dumping __DATA at 10ad90000 with size 1d000 (buffer:299000)
[DEBUG] Dumping __CGPreLoginApp at 10adad000 with size 0 (buffer:2b6000)
[DEBUG] Dumping __RESTRICT at 10adad000 with size 0 (buffer:2b6000)
[DEBUG] Dumping __LINKEDIT at 10adad000 with size 15cc0 (buffer:2b6000)
[OK] Full binary dumped to loginwindow.dmp!
Now it looked fine in IDA, so I started searching for “sleep”, “powerbutton”, and “power button”, and almost instantly discovered the following strings:
__cstring:00000001000769F4 0000002F C -[ApplicationManager checkPowerButtonTimeout:]
__cstring:0000000100077140 0000002E C -[ApplicationManager handlePowerButtonEvent:]
Each string had several cross-references from the __text section, so it was natural to assume that those places were all parts of the two ApplicationManager methods, checkPowerButtonTimeout and handlePowerButtonEvent. I created these functions in IDA, and now we were getting somewhere. The original strings turned out to be part of a logging system enabled by this condition:
lea r12, qword_1000B5F18
test byte ptr [r12+2], 2
jz no_log
Of course, in the original source code it was something like this:
if (qword_1000B5F18 & 0x20000) log(…)
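The two encodings agree: testing bit 0x02 of the byte at offset 2 is exactly testing bit 0x20000 of the whole little-endian qword. A quick illustrative check:

```python
# `test byte ptr [r12+2], 2` reads bits 16..23 of the little-endian qword,
# so checking bit 0x02 of that byte equals checking mask 0x20000 on the qword.
def byte_test(q):
    return bool(((q >> 16) & 0xFF) & 0x02)

def qword_test(q):
    return bool(q & 0x20000)

for v in (0, 0x20000, 0x12345, 0x1FFFF, 0xFFFFFFFF):
    assert byte_test(v) == qword_test(v)
```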
So I went back to readmem and enabled the logging:
$ sudo ./readmem -p 59 -a `python -c "print hex(0x10acf7000 + 0xb5f18)"` -s 8
---------------------------------
Readmem v0.6 - (c) 2012, 2013 fG!
---------------------------------
Memory protection: rw-/rwx
0x10adacf18 00 00 00 00 00 00 00 00 |........|
$ sudo ./readmem -p 59 -a `python -c "print hex(0x10acf7000 + 0xb5f18)"` -s 8 -w -b 20000
---------------------------------
Readmem v0.6 - (c) 2012, 2013 fG!
---------------------------------
-[ Memory before writing... ]-
Memory protection: rw-/rwx
0x10adacf18 00 00 00 00 00 00 00 00 |........|
-[ Memory after writing... ]-
Memory protection: rw-/rwx
0x10adacf18 00 00 02 00 00 00 00 00 |........|
After that I ran two tests. First, I held the power button for 1.5 seconds. Here is what I got in the syslog via Console.app:
31/12/13 10:24:24,596 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | entered, keyDown:1
31/12/13 10:24:24,597 loginwindow[59]: FaceTimeNotificationCenter | -[FaceTimeNotifictionCenterSupport callIsRinging] | returning:0
31/12/13 10:24:24,597 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | No call is ringing
31/12/13 10:24:24,597 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | NO shield window showing
31/12/13 10:24:24,597 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | power button pressed, start timer
31/12/13 10:24:26,098 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager checkPowerButtonTimeout:] | entered.
31/12/13 10:24:26,099 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager checkPowerButtonTimeout:] | not already handled
31/12/13 10:24:26,099 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager checkPowerButtonTimeout:] | is not terminating apps and power button held for > 1.5 seconds, show powerbutton dialog
31/12/13 10:24:26,903 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | entered, keyDown:0
31/12/13 10:24:26,903 loginwindow[59]: FaceTimeNotificationCenter | -[FaceTimeNotifictionCenterSupport callIsRinging] | returning:0
31/12/13 10:24:26,903 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | No call is ringing
31/12/13 10:24:26,903 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | NO shield window showing
Second, I pressed and released the power button momentarily, forcing the system to sleep, then pressed it again after about 5 seconds to wake it up:
31/12/13 10:27:47,682 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | entered, keyDown:1
31/12/13 10:27:47,683 loginwindow[59]: FaceTimeNotificationCenter | -[FaceTimeNotifictionCenterSupport callIsRinging] | returning:0
31/12/13 10:27:47,683 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | No call is ringing
31/12/13 10:27:47,683 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | NO shield window showing
31/12/13 10:27:47,683 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | power button pressed, start timer
31/12/13 10:27:47,842 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | entered, keyDown:0
31/12/13 10:27:47,842 loginwindow[59]: FaceTimeNotificationCenter | -[FaceTimeNotifictionCenterSupport callIsRinging] | returning:0
31/12/13 10:27:47,842 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | No call is ringing
31/12/13 10:27:47,842 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | NO shield window showing
31/12/13 10:27:47,842 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager handlePowerButtonEvent:] | powre button released before 1.5 seconds, sleep the system.
31/12/13 10:27:47,843 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | __RegisterSleepWakeCallback_block_invoke | IOPMScheduleUserActiveChangedNotification received:0
31/12/13 10:27:47,843 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock userActivityChanged:] | entered. isActive:0, shieldWindowShowing:0, lockRequestInProgress:0
31/12/13 10:27:47,847 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock(Private) userIsActiveCheck] | entered.
31/12/13 10:27:47,847 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock(Private) userIsActiveCheck] | returning: 0
31/12/13 10:27:49,185 loginwindow[59]: FaceTimeNotificationCenter | -[ApplicationManager checkPowerButtonTimeout:] | entered.
31/12/13 10:27:52,705 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | __RegisterSleepWakeCallback_block_invoke | IOPMScheduleUserActiveChangedNotification received:1
31/12/13 10:27:52,705 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock userActivityChanged:] | entered. isActive:1, shieldWindowShowing:1, lockRequestInProgress:0
31/12/13 10:27:52,705 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock userActivityChanged:] | user event received, start an unlock with 'active user' as the reason
31/12/13 10:27:52,705 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock(Private) userIsActiveCheck] | entered.
31/12/13 10:27:52,706 loginwindow[59]: FaceTimeNotificationCenter | kLWDBLogScreenLockAndPower | -[LWScreenLock(Private) userIsActiveCheck] | returning: 1
Clearly, our initial suspicions about handlePowerButtonEvent and checkPowerButtonTimeout were right: these are the two functions responsible for the whole mess. So I reconstructed their logic:
When the power button is pressed:
if a FaceTime call is ringing, notify FaceTime about the pressed power button and do nothing else.
else if the shield window (the login form for the current user) is showing, simulate an esc keystroke, effectively putting the system to sleep.
else start a 1.5-second timer.
When the power button is released:
if the timer hasn’t fired yet, put the system to sleep.
When the timer fires:
if the button is still pressed, and the system is not in kiosk mode and not powering down, show the dialog.
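The outline above can be sketched as a small state machine. This is a hypothetical reconstruction inferred from the log output; the real code is Objective-C inside ApplicationManager, and the class, method, and event names below are my own:

```python
# Hypothetical reconstruction of loginwindow's power-button state machine.
class PowerButtonHandler:
    TIMEOUT = 1.5  # seconds the button must be held to get the dialog

    def __init__(self):
        self.timer_running = False
        self.timer_fired = False
        self.events = []  # actions taken, recorded for illustration

    def handle_power_button(self, key_down, call_ringing=False,
                            shield_showing=False):
        # Mirrors -[ApplicationManager handlePowerButtonEvent:]
        if call_ringing:
            self.events.append("notify FaceTime")
            return
        if key_down:
            if shield_showing:
                self.events.append("simulate esc, sleep")
            else:
                self.timer_running = True
                self.timer_fired = False
                self.events.append("start 1.5 s timer")
        else:
            if self.timer_running and not self.timer_fired:
                self.events.append("sleep system")
            self.timer_running = False

    def check_timeout(self, still_pressed, kiosk=False, powering_down=False):
        # Mirrors -[ApplicationManager checkPowerButtonTimeout:]
        self.timer_fired = True
        if still_pressed and not kiosk and not powering_down:
            self.events.append("show shutdown dialog")
```

A quick press-and-release before the timeout ends in "sleep system"; holding past the timeout ends in "show shutdown dialog", matching the two log transcripts above.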
Everything seems pretty straightforward. Now, we want to get rid of the unwanted unacknowledged sleep, and we want to restore the original behavior of showing the shutdown options dialog. The first task is easy: placing mov rax, 1; retn at the beginning of handlePowerButtonEvent will do the trick. The second task is somewhat more ambiguous: we can call ApplicationManager::postPowerButtonDialogRequest from handlePowerButtonEvent when keyDown is 1, or we can reduce the timeout from 1.5 seconds to 0.001 seconds, or we can just replace the handlePowerButtonEvent call with postPowerButtonDialogRequest and hope that multiple calls won’t break anything.
I personally consider the second way the least destructive: all we do is change a constant, with no intervention in the program logic, so I focused on it.
The timer setup code looks like this:
10004D07A 48 8B 35 37 D6 05 00 mov rsi, cs:off_1000AA6B8
10004D081 48 8B 0D 00 EF 05 00 mov rcx, cs:off_1000ABF88
10004D088 48 8B 3D C1 FF 05 00 mov rdi, cs:off_1000AD050
10004D08F F2 0F 10 05 A1 81 04 00 movsd xmm0, cs:qword_100095238
10004D097 4C 89 FA mov rdx, r15
10004D09A 45 31 C0 xor r8d, r8d
10004D09D 45 31 C9 xor r9d, r9d
10004D0A0 FF 15 52 C4 04 00 call cs:_objc_msgSend_ptr
10004D0A6 49 89 04 1F mov [r15+rbx], rax
The timer constant is loaded into xmm0 in the fourth line. The naïve approach is to simply update the value of qword_100095238, but unfortunately that constant is also used outside handlePowerButtonEvent, so we could unwittingly alter the behavior of other parts of loginwindow if we messed with it. Instead, we should update the instruction so that it points not to 100095238 but somewhere else.
We don’t see any direct reference to address 100095238 in the instruction bytes, but the x86 family commonly uses relative addressing, where the effective address is calculated as the next instruction’s address plus an offset. 100095238 - 10004D097 gives us 481A1, which is exactly the value of the second dword of the movsd instruction. So now our entire job is to find another, smaller value, recalculate the offset to point at it, and replace the old offset.
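The arithmetic is easy to check; the addresses here are taken from the listing above:

```python
# RIP-relative addressing: effective address = next instruction + disp32.
insn_addr = 0x10004D08F      # address of the movsd instruction
insn_len  = 8                # F2 0F 10 05 <disp32> is 8 bytes long
target    = 0x100095238      # the qword holding the 1.5-second timeout

disp = target - (insn_addr + insn_len)
assert disp == 0x481A1       # matches the bytes A1 81 04 00 in the listing
```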
We could of course look for small values all over memory, but that wouldn’t be wise: most memory areas change over time, and a small value found now could easily turn into an enormous one, or even become unmapped, in the future. The correct approach is to look for small values only in the code segment of loginwindow, which is guaranteed to stay intact during normal operation. I wrote a simple Python script for the job:
import struct

def find_small_double(image, start, end=None):
    # A little-endian double whose top two bytes are 0x3F18..0x3F73 is a
    # tiny positive value (under about 0.008), far below the 1.5 we replace.
    for i in xrange(0x18, 0x74):
        pattern = chr(i) + '\x3f'
        pos = image.find(pattern, start, end)
        if pos != -1:
            # The matched pair is bytes 6-7 of the double, which starts at pos-6.
            return (pos - 6, struct.unpack('d', image[pos-6:pos+2])[0])
    return None
It in fact finds a lot of byte sequences in the __text section that can be interpreted as small double values, so I just pointed the offset at the first of them:
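Recomputing the displacement for a found value is the same arithmetic in reverse. As a sketch, the image offset 0x555CA below is my assumption, reconstructed so that it reproduces the 0x8533 displacement actually written; it is not taken from the original run:

```python
# Hypothetical: suppose find_small_double returned image offset 0x555CA.
# The movsd sits at image offset 0x4D08F and is 8 bytes long, so the new
# disp32 to write over its last four bytes (at image offset 0x4D093) is:
found_offset = 0x555CA
next_insn    = 0x4D08F + 8
new_disp     = found_offset - next_insn
assert new_disp == 0x8533    # the value passed to readmem via -b
```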
$ sudo ./readmem -p 59 -a `python -c "print hex(0x10acf7000 + 0x4D093)"` -s 4 -w -b 8533
---------------------------------
Readmem v0.6 - (c) 2012, 2013 fG!
---------------------------------
-[ Memory before writing... ]-
Memory protection: rw-/rwx
0x10ad44093 A1 81 04 00 |....|
-[ Memory after writing... ]-
Memory protection: rw-/rwx
0x10ad44093 33 85 00 00 |3...|
… and my power button was fixed.
Understanding that it is quite a chore to apply the fix by hand every time, I wrote a simple utility based on readmem that tries to patch the loginwindow process. Download it, build it (you will likely need Xcode), and run it. Alternatively, you can download and run the binary version.
Of course, it has several limitations:
I have only tested it against my loginwindow version, so it is not guaranteed to work on any other machine. Please report how it works for you.
The utility only works on 64-bit platforms, but that shouldn’t be a problem, considering Mavericks only runs on 64-bit hardware.
The utility must be run as superuser. You could probably chmod +s it and autolaunch it at system start or from cron; I haven’t tested that.
The fix only lasts until the system is rebooted or the user logs out, and it will not apply to any newly logged-in users.
But it should still be handy. Happy coding in the New Year!
Donations are welcome at 1G1RKjYazp8TjxKTC6YpWADZzejQaiCeEc or LKqD7vAfWkfDTSzta1YUdGqWkBj1RMf654.
P.S.: Oh, but wait, what about powerd and the “Sleep on power button” config option? Nothing, actually. All the sleep/wake magic is performed by IOKit; powerd merely loads the config and applies it to IOKit. I guess yet another option would be to modify IOKit and disable the screen blackout, but I stuck with the loginwindow fix.
My function is meant to compute the area of an arbitrary triangle.
Here is the version that I know works:
def areaOfTriangle(vertices):
    x1 = vertices[0][0]
    y1 = vertices[0][1]
    x2 = vertices[1][0]
    y2 = vertices[1][1]
    x3 = vertices[2][0]
    y3 = vertices[2][1]
    area = (1.0/2.0)*(x2*y3 - x3*y2 - x1*y3 + x3*y1 + x1*y2 - x2*y1)
    return area
However, I think this is crap, so here's what I had as a sketched-out thought:
def areaOfTriangle(vertices):
    coord1 = vertices[0]
    coord2 = vertices[1]
    coord3 = vertices[2]
    for x1, y1 in coord1:
        for x2, y2 in coord2:
            for x3, y3 in coord3:
                area = (1.0/2.0)*(x2*y3 - x3*y2 - x1*y3 + x3*y1 + x1*y2 - x2*y1)
    return area
However, this apparently doesn't play too nicely with lists. I thought it would work the way one can get keys and values from dictionaries... but lists don't have the iteritems() method. Then I thought about converting the lists into dictionaries, but keys are unique in dicts, so each key would only appear once... which would make my function not work properly.
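For what it's worth, no iteration is needed here at all: each vertex is a single (x, y) pair, so tuple unpacking pulls both coordinates out in one step. A sketch (function name changed, same formula as the working version):

```python
def area_of_triangle(vertices):
    # Unpack the three (x, y) vertex pairs directly -- no loops, no dicts.
    (x1, y1), (x2, y2), (x3, y3) = vertices
    # Signed area via the shoelace formula; wrap in abs() if the vertex
    # winding order shouldn't matter.
    return 0.5 * (x2*y3 - x3*y2 - x1*y3 + x3*y1 + x1*y2 - x2*y1)

print(area_of_triangle([(0, 0), (1, 0), (0, 1)]))  # 0.5
```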
vince06fr
Re: Cleaning up kernels (kernel)
Umuntu: If you want to take the time to translate this script, by all means go ahead. Since everything is "hardcoded" in the script, the only thing to do is... to change all the French text in the script into English.
Once the script is modified, and after checking that this new version doesn't cause any particular problems, I'll build a new "English" package.
If anyone knows a simple way to include both languages in the same script I'm all ears, but I doubt it can be done easily. (And I'd rather have two versions that are easy to read and update than a single one that's more complicated to modify.)
You need to use .po files with the gettext library
http://fr.wikipedia.org/wiki/Gettext
http://schplurtz.free.fr/wiki/envrac/bash-international
Last edited by vince06fr (06/11/2012 at 21:53)
Offline
Hoper
Re: Cleaning up kernels (kernel)
frenchy: darn... it shouldn't do that anymore... I'll take a look as soon as possible (not before a few days).
Vince: thanks for the info.
Offline
Hoper
Re: Cleaning up kernels (kernel)
I just took a look and found the problem. I have no idea how I managed that, but the fix (post 199) was not present in the downloadable 3.2 version. It's now repaired.
Frenchy82: Could you re-download the package from my blog, redo the installation, and try again?
Also sorry for taking so long to reply to what was an important problem; it happens I was a bit busy lately.
Last edited by Hoper (16/11/2012 at 17:38)
Offline
frenchy82
Re: Cleaning up kernels (kernel)
It's all good now
Thanks for your attention
Offline
loubrix
Re: Cleaning up kernels (kernel)
If anyone knows a simple way to include both languages in the same script I'm all ears
Personally, I replace the displayed texts with variables and use a case statement, like this:
case "$LANG" in
fr*)
TITLE="Mon titre"
_yes="Oui"
_no="Non"
_cancel="Annuler"
_continue_label="Continuer"
_exit="Quitter"
;;
*)
TITLE="My title"
_yes="Yes"
_no="No"
_cancel="Cancel"
_continue_label="Continue"
_exit="Exit"
;;
esac
and then the variables in the script (here in a function declaration that saves me from passing the same options to dialog on every call):
DIALOG() {
dialog --title "$TITLE" \
--yes-label "$_yes" --no-label "$_no" --cancel-label "$_cancel" --exit-label "$_exit" \
"$@"
}
hope that helps
PS: by the way, we submitted a request with the same goal as your script (but automated) on Launchpad
Last edited by loubrix (16/11/2012 at 19:21)
Asus X50VL - Ubuntu 12.04 AMD64
HP G62 - Ubuntu 12.10 AMD64
Fujitsu-Siemens Amilo EL - Lubuntu 12.04 i686
Manjaro, a rolling release for beginners
Offline
Babdu89
Re: Cleaning up kernels (kernel)
Hello...
I don't know if anyone has tested kclean on Ubuntu 13.04...
Since I installed a test version, and there were 3 kernel versions in the OS after the updates... I ran the test...
It works!!...
Here are the contents of the 13.04 Grub menu now...
bernard@bernard-GA-7VAX:~$ grep menuentry /boot/grub/grub.cfg
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
menuentry_id_option=""
export menuentry_id_option
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
submenu 'Options avancées pour Ubuntu' $menuentry_id_option 'gnulinux-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.7.0-5-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.7.0-5-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.7.0-5-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.7.0-5-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.7.0-4-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.7.0-4-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.7.0-4-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.7.0-4-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry "Memory test (memtest86+)" {
menuentry "Memory test (memtest86+, serial console 115200)" {
@+. Babdu89 .
I discovered Ubuntu with 7.10.... So what?!... Ever since, I check from time to time whether Windows still works....
Offline
Hoper
Re: Cleaning up kernels (kernel)
Thanks for this test
Offline
cptnflam
Re: Cleaning up kernels (kernel)
I installed the latest .deb version on Debian sid + cinnamon
- To launch it from the menu, you have to edit the sudoers file.
- Otherwise, launch it in a root console with kclean --gui
Great idea, this script.
Thanks.
Offline
Hoper
Re: Cleaning up kernels (kernel)
Be careful when using it under Debian; I believe some package names differ... Always double-check what it offers to remove. By the way, this script is designed to work just as well (if not better) from the command line.
Last edited by Hoper (12/12/2012 at 12:08)
Offline
cptnflam
Re: Cleaning up kernels (kernel)
Thanks for the advice. I'll favor the command line.
Offline
Babdu89
Re: Cleaning up kernels (kernel)
Hello...
Tested Kclean today with Xubuntu 12.10...
Among the proposed updates there was a kernel version change
3.5.0-19 ==> 3.5.0-21...
I applied the kernel update...
I installed Kclean from the link in post #1 of this thread...
I launched Kclean, and here is what it offered to uninstall; ah, there is no 3.5.0-21 kernel in what Kclean proposes...
I checked the contents of Xubuntu 12.10's grub menu...
The 3.5.0-21 kernel is indeed installed and present in the grub menu...
OK, I updated Grub from Xubuntu 12.10... The 3.5.0-21 kernel is there in first position...
I relaunched Kclean; still the same, no 3.5.0-21 kernel, Kclean does not see it...
I had already had trouble with a grub update story before... I mentioned it a few posts up... It puzzled the participants...
Ah... By the way... I have several Linuxes installed. I boot my Xubuntu 12.10 from the grub menu of the OS that starts my machine... Ubuntu 12.04...
So, I rebooted my machine...
In the 12.04 grub menu, Xubuntu 12.10's 3.5.0-21 kernel is not present...
I booted 12.04 and updated grub... the 3.5.0-21 kernel is now in the grub menu entry that starts my machine...
I rebooted Xubuntu 12.10 from the 12.04 grub menu...
I launched Kclean and...
There we go, I have the 3.5.0-21 kernel this time...
What should we conclude?...
Why am I forced to update grub on the OS that starts the machine, which is not Xubuntu 12.10?...
That matters in a multiboot setup where the machine is started by a Linux OS other than the one on which you want to use Kclean...
To sum up what happened on my machine...
Multiboot started on Ubuntu 12.04...
Launched Xubuntu 12.10 from Ubuntu 12.04's grub menu...
Xubuntu 12.10 updates offered with a kernel change 3.5.0-19 ==> 3.5.0-21...
Installed Kclean v3.2...
Launched Kclean... no 3.5.0-21 kernel offered in the Kclean window; it offers 3.5.0-19...
Updated Xubuntu 12.10's grub...
No change in the Kclean window... ==> 3.5.0-19...
I understand that I must update grub on the OS that starts the machine, which is not Xubuntu 12.10...
Forced to reboot the machine into Ubuntu 12.04 to update 12.04's grub... there was no 3.5.0-21 kernel in its grub menu for Xubuntu 12.10...
With Ubuntu 12.04's grub updated, Xubuntu 12.10's 3.5.0-21 kernel is in Ubuntu 12.04's grub menu...
Started the machine into Xubuntu 12.10 from Ubuntu 12.04's grub menu...
Kclean now sees the 3.5.0-21 kernel on Xubuntu 12.10... I can clean up...
So... In a Linux multiboot... A user who doesn't understand what is going on, and who doesn't update grub on the OS that starts the machine, will search a long time before managing to use Kclean on another OS launched from that OS's grub menu...
That isn't explained anywhere... (Except here, now)...
I'm testing the cleanup...
Edit...
It works... Here is Xubuntu 12.10's grub menu now....
The 3.5.0-19 and 3.5.0-21 kernels remain...
@+. Babdu89 .
Last edited by Babdu89 (22/12/2012 at 15:47)
Offline
Hoper
Re: Cleaning up kernels (kernel)
Xubuntu 12.10 updates offered with a kernel change 3.5.0-19 ==> 3.5.0-21...
Installed Kclean v3.2...
Launched Kclean... no 3.5.0-21 kernel offered in the Kclean window
From what I understand, there does not seem to be any problem. kclean will never offer to remove the kernel you are currently running. So indeed it does not "see" it, and that is perfectly normal!
Offline
Babdu89
Re: Cleaning up kernels (kernel)
Good evening...
@ Hoper...
From what I understand, there does not seem to be any problem. kclean will never offer to remove the kernel you are currently running. So indeed it does not "see" it, and that is perfectly normal!
OK!!!...
Next time it happens, I'll reboot into the OS concerned during the procedure to see what happens...
Thanks...
Happy holidays everyone...
@+. Babdu89 .
Offline
Babdu89
Re: Cleaning up kernels (kernel)
Hello...
A bit of publicity, and a test plus demonstration of using the script in graphical mode on 13.04, again here...
Keep up the good work...
@+. Babdu89 .
Offline
Hoper
Re: Cleaning up kernels (kernel)
Thanks for the test on 13.04
Offline
Sylll2o
Re: Cleaning up kernels (kernel)
Hello,
A big THANK YOU for this script. Well done Hoper
Perfect for those allergic to the terminal!!!
++
Offline
Hoper
Re: Cleaning up kernels (kernel)
That's where you see it takes all kinds. I have already received thanks from people who found this script very handy to use from the command line... go figure
Offline
Babdu89
Re: Cleaning up kernels (kernel)
Hello...
@ Hoper...
I'm coming back to a problem already mentioned in this thread... With more detail on the steps...
The setting: a multi-boot, and an OS whose kernels you want to clean up with your script, after a system update that added a new kernel...
OS to clean installed on /sda11
OS that starts the machine installed on /sda13
But...
Since the OS to clean (/sda11) is started through the not-yet-updated Grub menu of an OS (/sda13) other than the one where you want to use your script (on /sda11)... (necessarily, the latest kernel of the OS to clean (/sda11) cannot yet have an entry in the Grub menu of the OS (/sda13) that launches it)...
So the OS to clean (/sda11) is started on its next-to-last kernel, from the Grub menu of the OS (/sda13) that starts the machine...
As long as Grub is not updated on the OS (/sda13) that starts the machine, the latest kernel of the OS to clean (on /sda11) does not appear in the Grub menu of the OS (/sda13) that starts the machine...
Now, the problem: even though the latest kernel is installed in the OS (/sda11), here is what the procedure gives...
System update with a kernel change on the OS on (/sda11)... 3.8.0-5 to 3.8.0-6...
Grub update (already done) on that OS (/sda11)...
Launch Hoper's script...
The script tells us we are on kernel 3.8.0-5 (normal) and offers to uninstall the latest installed kernel, 3.8.0-6?!?!?!.....
That is not normal... It comes from the fact that we booted from the Grub menu of the OS on (/sda13), on kernel 3.8.0-5...
Here is the content of the Grub menu entry for the OS (/sda11) as launched from the Grub menu of the OS (/sda13)...
menuentry "Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-4-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-4-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-4-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-4-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-1-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-1-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-1-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-1-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
There is no 3.8.0-6 kernel...
I reboot the machine into the OS on /sda13, to update that OS's Grub...
Here is what the same Grub menu entry contains now...
menuentry "Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-6-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-6-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-4-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-4-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-4-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-4-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-1-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-1-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-1-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-1-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
There we go: we now have kernel 3.8.0-6 in the Grub menu of the OS on /sda13, to launch the OS on /sda11...
I restart the machine and boot the OS on /sda11 (the OS to be cleaned)...
I run the script, and this time it matches what I expect...
The script correctly tells me I am on kernel 3.8.0-6...
And offers the right versions for cleanup...
I confirm to carry out the operation...
There, it is done...
The content of the Grub menu, seen from the cleaned OS on /sda11...
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
submenu 'Options avancées pour Ubuntu' $menuentry_id_option 'gnulinux-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.8.0-6-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.8.0-6-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.8.0-5-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
menuentry 'Ubuntu, avec Linux 3.8.0-5-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc' {
After rebooting into the OS that boots the machine, /sda13...
Grub on that OS is updated again...
The content of the /sda11 OS entry in the Grub menu of the OS that boots the machine, /sda13...
menuentry "Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-6-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-6-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-6-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-advanced-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.8.0-5-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.8.0-5-generic-recovery-9236aa85-fab4-459f-95fe-e5ca7e172cfc (on /dev/sda11)" --class gnu-linux --class gnu --class os {
Good, I now have the last two kernels... 3.8.0-5 and 3.8.0-6...
So, a question...
How will a user who has not understood all this react when using the script?... And who ends up facing this?...
In the case of a multi-boot setup, booting from the Grub menu of an OS (/sda13) other than the one on which the script is to be used...
Wouldn't there be a way to add a warning message, prompting the user to update Grub on the OS (/sda13) that boots the machine?...
Or even to modify your script to handle this situation, and have it update Grub itself on the OS that boots the machine before continuing the cleanup on the OS where the script is run... transparently for the user...
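As an illustration of the warning idea (this is a hypothetical sketch, not part of the actual script): the running kernel can be compared against the newest installed one with `sort -V`, and the user warned to update GRUB on the launching OS when they differ.

```shell
# Hypothetical sketch: warn when the running kernel is older than the
# newest installed one (a sign the launching OS's GRUB menu is stale).
warn_if_stale() {
    running="$1"; newest="$2"
    highest=$(printf '%s\n%s\n' "$running" "$newest" | sort -V | tail -n 1)
    if [ "$running" != "$newest" ] && [ "$highest" = "$newest" ]; then
        echo "WARNING: running $running but $newest is installed."
        echo "Run update-grub on the OS that boots the machine, then reboot."
    fi
}

# On a real system the arguments would come from uname -r and dpkg -l.
warn_if_stale "3.8.0-5-generic" "3.8.0-6-generic"
```

With the versions from this thread, the function prints the warning; once both versions match it stays silent.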
If you see a possibility, let me know, and I will help test...
There, I think these steps should help pin down the problem...
See you. Babdu89.
Last edited by Babdu89 (15/02/2013 at 21:13)
I discovered Ubuntu with 7.10.... So what?!... Since then I check from time to time whether Windows still works....
Offline
Hoper
Re: Cleaning up kernels
Babdu89, I'm really sorry, because you are clearly making a big effort, with very long posts and screenshots and everything.
But this is the second time I have understood NOTHING of your messages.
I don't know, maybe it's me, but however hard I try to concentrate on what you are saying, after a few lines I'm lost. To tell you the truth, I can't even read your posts all the way through.
Overall, multi-boot or not, the script couldn't care less. It works in a very simple way: it lists the installed kernels with the dpkg -l command (the contents of grub have nothing to do with it), finds the ones to remove, and removes them, again with dpkg. I really think you are tying your brain in knots over nothing.
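The selection logic described here can be sketched as follows (this is an illustrative sketch, not the script's actual code): sort the installed kernel package names by version and keep everything except the newest two.

```shell
# Sketch: pick the removable kernels by version sort alone -- no grub
# involved. On a real system the list would come from:
#   dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}'
installed='linux-image-3.8.0-4-generic
linux-image-3.8.0-5-generic
linux-image-3.8.0-6-generic'

# sort -V orders version strings; head -n -2 (GNU coreutils) drops the
# newest two, leaving the candidates for removal.
removable=$(printf '%s\n' "$installed" | sort -V | head -n -2)
echo "$removable"
```

Note that this works purely on package names, which is exactly why the GRUB menu of another OS has no influence on what the script proposes.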
The script tells us we are on kernel 3.8.0-5, which is normal, and offers to uninstall the latest installed kernel, 3.5.8-6?!?!?!...
That is not normal...
And why not normal? 3.5.8-6 is indeed lower than 3.8.0-5, isn't it? So it is perfectly normal that it offers to remove it. Once again, the "contents of grub" (which doesn't mean much) have nothing to do with any of this. I think you are imagining things (about how this script works) that are very far from reality.
Offline
Babdu89
Re: Cleaning up kernels
Good evening...
@ Hoper...
First, my apologies for "mangling" your nickname: Hopper instead of Hoper...
You say...
The script tells us we are on kernel 3.8.0-5, which is normal, and offers to uninstall the latest installed kernel, 3.5.8-6?!?!?!...
That is not normal...
And why not normal? 3.5.8-6 is indeed lower than 3.8.0-5, isn't it? So it is perfectly normal that it offers to remove it. Once again, the "contents of grub" (which doesn't mean much) have nothing to do with any of this. I think you are imagining things (about how this script works) that are very far from reality.
Oh dear...
I made a mistake copying the kernel version numbers... it is 3.8.0-5 and 3.8.0-6...
See what is shown between the code tags...
I have corrected my post, and that changes things...
See you. Babdu89.
Offline
Hoper
Re: Cleaning up kernels
OK, the last screenshot seems to show that in REALLY twisted cases there may actually be a problem... So I am going to make a serious new attempt to understand your post.
(Give me a bit of time, because I won't be able to look into it this weekend...)
Offline
Babdu89
Re: Cleaning up kernels
Good evening...
@ Hoper...
OK, the last screenshot seems to show that
in REALLY twisted cases there may actually be a problem...
Let me rephrase more simply...
A machine multi-booting Linux OSes (at least two: Linux1 and Linux2)...
The machine boots into Linux2... (so Linux2's Grub menu)
On Linux1...
The system update offers a kernel version change...
We take the opportunity to use your script...
And we end up in the situation I describe...
I don't find this a really twisted case... It must happen on every machine multi-booting Linux OSes...
Take your time to study it, even test it at home if you can... On my side it is just an observation I'm reporting...
Which takes nothing away from the quality of the script you created...
It is just that in this particular case, it forces unplanned manipulations of Grub on the OS (Linux2) that boots the machine, in order to use your script correctly on Linux1...
Keep up the good work...
See you. Babdu89.
Last edited by Babdu89 (15/02/2013 at 21:17)
Offline
Hoper
Re: Cleaning up kernels
A machine multi-booting Linux OSes (at least two: Linux1 and Linux2)...
Several Linuxes on the same machine: personally I already find that pretty twisted. If it is to try out distributions and so on, it seems more logical to use virtual machines, but fine.
The machine boots into Linux2... (so Linux2's Grub menu)
And here we get into the truly weird. Grub's role is precisely to manage booting the OS of your choice. Why install several different grubs!? For me the boot manager, be it grub, lilo, gag, or whatever you like, goes on the MBR, and there is only one of them, not two.
It must happen on every machine multi-booting Linux OSes...
No, there I may of course be wrong, but I really don't think so.
It remains that, as said above, I will try to understand what is going on...
Offline
Babdu89
Re: Cleaning up kernels
Good evening...
@ Hoper...
I run the tests while keeping a text file of what I do... and unfortunately I don't reread it before copy-pasting to post...
Please excuse me, and above all keep calling me out, because I'm incorrigible...
When testing, one should pay more attention to what one does and says... Mea culpa...
The machine boots into Linux2... (so Linux2's Grub menu)
No, I meant to write...
The machine boots into Linux2... (from Linux1's Grub menu)
That changes things quite a bit again...
You say...
It remains that, as said above, I will try to understand what is going on...
Yes, because here is a second test, done on another test hdd with a Linux multi-boot...
Multi-boot Linux hdd... The machine boots into Linux1. The OS to be cleaned with the script is Linux2, launched from Linux1's Grub menu...
Linux2's Grub menu before the system update... Latest kernel: 3.2.0-35...
bernard@bernard-System-Product-Name:~$ grep menuentry /boot/grub/grub.cfg
menuentry 'Ubuntu, with Linux 3.2.0-35-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-35-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-34-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-34-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-33-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-33-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-32-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-32-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-31-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-31-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-30-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, with Linux 3.2.0-30-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
System updates offered on Linux2, with a kernel version change from 3.2.0-35 to 3.2.0-38...
I apply the updates and reboot the machine as requested, from Linux1's Grub menu, since that is how the machine boots...
Kernel 3.2.0-38 is indeed installed on Linux2...
bernard@bernard-System-Product-Name:~$ grep menuentry /boot/grub/grub.cfg
menuentry 'Ubuntu, avec Linux 3.2.0-38-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-38-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-35-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-35-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-34-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-34-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-33-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-33-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-32-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-32-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-31-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-31-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-30-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-30-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
I run the script to do some cleanup...
So, what to do?...
A user who doesn't know will tell you that your script doesn't work...
Since we now know that the Grub menu of the OS (Linux1), which currently launches Linux2 on kernel 3.2.0-35, must be updated
so that Linux2 is launched on kernel 3.2.0-38...
That is what I am going to do...
Reboot the machine into Linux1...
Here is Linux2's entry in Linux1's Grub menu...
menuentry "Ubuntu, with Linux 3.2.0-35-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-35-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-34-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-34-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-33-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-33-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-32-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-32-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-31-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-31-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-30-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, with Linux 3.2.0-30-generic (recovery mode) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
The latest kernel present for Linux2 is 3.2.0-35... No kernel 3.2.0-38...
I update Linux1's Grub...
Kernel 3.2.0-38 now appears in the Linux2 entries of Linux1's Grub menu; I will be able to launch Linux2 on kernel 3.2.0-38...
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-34-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-34-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-33-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-33-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-32-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-32-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-31-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-31-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-30-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-30-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
Now I can clean up Linux2's surplus kernels...
Launching Kclean from Linux2... Ahhh!!! That's better...
I confirm the operation...
Done...
Here are the entries in Linux2's Grub menu now... The last two kernels...
bernard@bernard-System-Product-Name:~$ grep menuentry /boot/grub/grub.cfg
menuentry 'Ubuntu, avec Linux 3.2.0-38-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-38-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-35-generic' --class ubuntu --class gnu-linux --class gnu --class os {
menuentry 'Ubuntu, avec Linux 3.2.0-35-generic (mode de dépannage)' --class ubuntu --class gnu-linux --class gnu --class os {
To be thorough, all that remains is to reboot the machine into Linux1 and update its Grub, so that only two kernels are left in the Linux2 entries of Linux1's Grub menu...
Linux2's entries in Linux1's Grub menu... Before the Grub update...
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-34-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-34-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-33-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-33-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-32-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-32-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-31-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-31-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-30-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-30-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
Linux2's entries in Linux1's Grub menu... After the Grub update...
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-38-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
menuentry "Ubuntu, avec Linux 3.2.0-35-generic (mode de dépannage) (on /dev/sdb9)" --class gnu-linux --class gnu --class os {
There, it's done; all that remains is to launch Linux2 on kernel 3.2.0-38...
I have more twisted setups like this on 2 usb test hdds, and on my machine in the country, 4 internal hdds with the same situations...
Users who don't like the command line, like me, but who enjoy testing everything that comes their way: the forum is full of them, I assure you...
So a big thank you to contributors like you, and many others, who make your "work" available for folks like me...
And if in return we can help when possible, we shouldn't hold back...
But, I humbly admit, without talking too much nonsense... I will be more careful next time...
Sorry again for the length of the post, but I don't think this can be explained in two or three lines...
You say...
Several Linuxes on the same machine: personally I already find that pretty twisted.
If it is to try out distributions and so on, it seems more logical to use virtual machines, but fine.
Not sure you would see everything that goes wrong in a VM... Installed for real, whatever you test either works or it doesn't... Well, it takes time and passion...
By the way, what can be done to fix the problem?...
See you. Babdu89.
Last edited by Babdu89 (16/02/2013 at 00:28)
Offline
Hoper
Re: Cleaning up kernels
You didn't answer one of my remarks.
How come you have several grubs!?
For the rest, we won't sort this out via the forum. I suggest we discuss it by phone (I can give you my number, or the other way round, whichever suits you). Obviously not a call this evening, eh.
Offline
|
I am new to the GAE Blobstore... I am trying to display the URL of an image uploaded via GAE, but I am having difficulties; any help is appreciated.
1) The code below displays the key in hex format, and I am not sure why it does that.
2) Furthermore, how do I get/create a URL to the image from that hex value key?
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.ext import db
#import os
import urllib
class UserPhoto(db.Model):
user = db.StringProperty()
user1 = db.EmailProperty()
blob_key = blobstore.BlobReferenceProperty()# blobstore.BlobKey #
class MainPage(webapp.RequestHandler):
def get(self):
user = users.get_current_user()
upload_url = blobstore.create_upload_url('/upload')
existing_data = "<br>"
if user:
#user_photo = UserPhoto(user=users.get_current_user().email( )
data = UserPhoto.all()
results = data.filter('user1 =',user.email())
rmvStr = len("<__main__.UserPhoto object at ")
for blob in results:
existing_data += "Blob item key # : "+ str(blob)[rmvStr:len(str(blob))-1] +" <br>"
#existing_data += "Blob item key # : "+ str(blob) +" <br>"
#self.response.out.write( "value of blob is: " + str(blob))
self.response.out.write(
'Hello %s <a href="%s">Sign out</a><br>Is administrator: %s' %
(user.nickname(), users.create_logout_url("/"), users.is_current_user_admin())
+'<form action="%s" method="POST" enctype="multipart/form-data">' % upload_url+
"""Upload File: <input type="file" name="file"><br> <input type="submit"
name="submit" value="Submit"> </form>
<br>"""+existing_data
)
else:
self.redirect(users.create_login_url(self.request.uri))
class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
blob_info = upload_files[0]
user = users.get_current_user()
if user:
data = UserPhoto()
data.user1 = user.email()
data.blob_key = blob_info.key()
data.put()
#self.redirect('/serve/%s' % blob_info.key())
self.redirect('/')
class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, resource):
resource = str(urllib.unquote(resource))
blob_info = blobstore.BlobInfo.get(resource)
self.send_blob(blob_info)
application = webapp.WSGIApplication([('/', MainPage),
('/upload', UploadHandler),
('/serve/([^/]+)?', ServeHandler)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
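An editorial note on the question above, offered as a hedged sketch rather than a definitive answer: `str(blob)` on a `db.Model` instance yields the default object repr, `<__main__.UserPhoto object at 0x...>`, which is where the hex value comes from (it is the object's memory address, not the blob key). The key itself is the `blob_key` property, and since `ServeHandler` is already mapped to `/serve/([^/]+)?`, a link can be built from it. The key string here is hypothetical:

```python
# Works on both Python 2 (as in the handlers above) and Python 3.
try:
    from urllib import quote          # Python 2
except ImportError:
    from urllib.parse import quote    # Python 3

# Hypothetical value: what str(blob.blob_key) would return in the loop,
# i.e. use blob.blob_key instead of str(blob) when iterating results.
blob_key = "AMIfv95xK"
url = "/serve/%s" % quote(str(blob_key))
print(url)
```

In the `MainPage` loop this would replace the string-slicing workaround, e.g. building `existing_data` from `blob.blob_key` per result.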
|
Below we’ll create a python plugin that generates C code from UML state machine diagrams. Doing the same for other languages should be trivial.
The first thing you need is Unai Estébanez Sevilla’s nice finite state machine code generator. The version that we are using here has been abstracted in order to be able to produce code in various languages.
Let’s have a quick look at the code.
First we import the dia python module and the exporter base functionality:
import dia
import uml_stm_export
Then we create our C exporter class that inherits from the generic exporter:
class CDiagramRenderer(uml_stm_export.SimpleSTM):
Next we define what the beginning of our generated code file should look like. It could include general infrastructure independent of the state machine diagram at hand. In our case, we want to encapsulate the generated state machine code within a function:
CODE_PREAMBLE="void config_stm(STM_t* stm) {"
We also define the postamble that closes the function. After that come generic functions implementing the class constructor and the functions responsible for invoking the dia object parser: init(self) and begin_render(self, data, filename).
Now we define our output generator, end_render(self). We first traverse dia's objects to find the state machine's initial state:
for transition in self.transitions:
if(transition.source == "INITIAL_STATE"):
The initial state gets special treatment: a dedicated function call is generated for it:
f.write(" add_initial_state( stm, %s, %s );\n" %
(initial_state.name, initial_state.doaction))
Next we traverse all states and output code that will create them, along with functions to be called within that state to decide on where to transition next:
for key in self.states.keys():
f.write(" add_state( stm, %s, %s );\n"
% (state.name, state.doaction))
And finally we output all the transitions between states:
for transition in self.transitions:
f.write(" add_transition( stm, %s, %s, %s );\n" %
(transition.source, transition.trigger, transition.target))
and that’s nearly it. At the end of our generator we make sure to register it with dia:
dia.register_export("State Machine Cstma Dump", "c", CDiagramRenderer())
Done! Simple, isn’t it?
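The emission steps above can be sketched end to end, independently of dia itself. The tuple layout below is illustrative (it stands in for whatever SimpleSTM collects from the diagram, not its actual attribute names):

```python
# Sketch: generate config_stm() from plain tuples standing in for the
# initial state, states, and transitions parsed out of the diagram.
def emit_stm(initial, states, transitions):
    lines = ["void config_stm(STM_t* stm) {"]            # the preamble
    lines.append("  add_initial_state( stm, %s, %s );" % initial)
    for name, doaction in states:
        lines.append("  add_state( stm, %s, %s );" % (name, doaction))
    for source, trigger, target in transitions:
        lines.append("  add_transition( stm, %s, %s, %s );"
                     % (source, trigger, target))
    lines.append("}")                                     # the postamble
    return "\n".join(lines)

code = emit_stm(("IDLE", "do_idle"),
                [("IDLE", "do_idle"), ("RUN", "do_run")],
                [("IDLE", "EV_START", "RUN")])
print(code)
```

The real plugin writes these lines to the export file inside end_render(self) instead of returning a string.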
Finally please permit me to thank all the people that created such a powerful tool free for us to use:
Unai Estébanez Sevilla for the original STM generator
Steffen Macke and Hans Breuer, Dia's current busy maintainers
Alexander Larsson, Dia’s original author
all the other contributors to Dia and free software
Panter, for inviting me to their fabulous work week in Greece, where most of the hacking on the generator was done, and Combitool, who supported this work by needing a state machine generator in their current project.
PS: Unai’s original text generator is now also “just” a “simple” addon
|
I'm trying to mark a future done by timeout with this code:
import asyncio
@asyncio.coroutine
def greet():
while True:
print('Hello World')
yield from asyncio.sleep(1)
@asyncio.coroutine
def main():
future = asyncio.async(greet())
loop.call_later(3, lambda: future.set_result(True))
yield from future
print('Ready')
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
The loop.call_later "timer" sets the future's result after 3 seconds. It works, but I also get an exception:
Hello World
Hello World
Hello World
Ready
Exception in callback <bound method Task._wakeup of Task(<greet>)<result=True>>(Future<result=None>,)
handle: Handle(<bound method Task._wakeup of Task(<greet>)<result=True>>, (Future<result=None>,))
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\asyncio\events.py", line 39, in _run
self._callback(*self._args)
File "C:\Python33\lib\site-packages\asyncio\tasks.py", line 337, in _wakeup
self._step(value, None)
File "C:\Python33\lib\site-packages\asyncio\tasks.py", line 267, in _step
'_step(): already done: {!r}, {!r}, {!r}'.format(self, value, exc)
AssertionError: _step(): already done: Task(<greet>)<result=True>, None, None
What can this AssertionError mean? Am I doing something wrong by setting the future done via loop.call_later?
|
The code below best illustrates my problem:
The output to the console (NB it takes ~8 minutes to run even the first test) shows the 512x512x512x16-bit array allocations consuming no more than expected (256MByte for each one), and looking at "top" the process generally remains sub-600MByte as expected.
However, while the vectorized version of the function is being called, the process expands to an enormous size (over 7GByte!). Even the most obvious explanation I can think of to account for this - that vectorize is converting the inputs and outputs to float64 internally - could only account for a couple of gigabytes, even though the vectorized function returns an int16 and the returned array is certainly int16. Is there some way to avoid this happening? Am I using/understanding vectorize's otypes argument wrongly?
import numpy as np
import subprocess

def logmem():
    subprocess.call('cat /proc/meminfo | grep MemFree', shell=True)

def fn(x):
    return np.int16(x*x)

def test_plain(v):
    print "Explicit looping:"
    logmem()
    r = np.zeros(v.shape, dtype=np.int16)
    for z in xrange(v.shape[0]):
        for y in xrange(v.shape[1]):
            for x in xrange(v.shape[2]):
                r[z, y, x] = fn(x)
    print type(r[0, 0, 0])
    logmem()
    return r

vecfn = np.vectorize(fn, otypes=[np.int16])

def test_vectorize(v):
    print "Vectorize:"
    logmem()
    r = vecfn(v)
    print type(r[0, 0, 0])
    logmem()
    return r

logmem()
s = (512, 512, 512)
v = np.ones(s, dtype=np.int16)
logmem()
test_plain(v)
test_vectorize(v)
v = None
logmem()
I'm using whichever versions of Python/numpy are current on an amd64 Debian Squeeze system (Python 2.6.6, numpy 1.4.1).
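For scale, here is the vectorize-free version I would compare against (a sketch in Python 3 syntax, with the shape shrunk for illustration): plain ufunc arithmetic keeps the dtype at int16 and allocates only the output array, with no per-element Python calls.

```python
import numpy as np

# Direct elementwise arithmetic: one int16 output allocation, no np.vectorize.
v = np.ones((64, 64, 64), dtype=np.int16)
r = v * v            # int16 * int16 stays int16
print(r.dtype, r.nbytes)
```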
|
#1651 On 31/05/2012 at 19:24
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
For the interface freezing? Try putting the sleep before the EXEC command as well (your computer is too powerful ...)
good catch, it seems to be OK with a sleep 0.10 before the load.
Offline
#1652 On 01/06/2012 at 22:49
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, it's not for lack of searching, Ansuz, but I'd like some explanation of how to use CONFIG@@, because I haven't found anything on how to use it.
I understood that you have to put the --auto-config='$HOME/fichier.cfg' option in the go_, but after that CONFIG@@SAVE gives me an empty file... and what exactly is CONFIG@@SET for?
edit: How do you create the initial file? Is it created automatically? Don't you need an extra package for that?
Last edited by yakusa77 (02/06/2012 at 10:37)
Offline
#1653 On 02/06/2012 at 11:31
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
@yakusa77
Look at the config.cfg file in the examples folder, it has all the possible sections.
So you have to create your config file, with your variables. After that, it's automatic ...
If you still can't figure it out, ask Hizoka for details; he's the g2s boss, not me
PS: I'm away this weekend, back Wednesday.
Offline
#1654 On 02/06/2012 at 11:59
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
OK, that makes more sense...
Offline
#1655 On 02/06/2012 at 13:12
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, this isn't going to be simple to set up...
For starters, I created the file with capital letters matching the names of my widgets. On saving, the capitals had disappeared...
Then, after renaming the widgets to remove all the capitals, I can see that the state apparently does get saved, but when the interface starts the widgets stay in their initial state, i.e. false in most cases... In short, I must not be very good at this...
Offline
#1656 On 02/06/2012 at 14:16
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
re,
You have to declare a section in capitals, and in each section you put the variables matching your widgets (like in config.cfg, basically ...)
So for a combo
[COMBO]
_combo1 = 1
Where 1 is the row to select.
Only for window do you have to put the widget name in the section (see the config.cfg file)
Offline
#1657 On 02/06/2012 at 14:28
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Yes, that's exactly what I did for the sections, I based them on your file.
Last edited by yakusa77 (02/06/2012 at 20:22)
Offline
#1658 On 02/06/2012 at 14:36
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
sorry, I'm off now, back Wednesday ...
Hizoka, help
Offline
#1659 On 02/06/2012 at 19:34
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, I've made progress! I hadn't understood that the sections absolutely had to be the ones listed in the default.cfg file. The toggles work, but the filechoosers don't: the values aren't updated and aren't loaded into the interface. It lightens the code a lot, but I'm forced to thoroughly rework the checks and conditions.
Offline
#1660 On 03/06/2012 at 00:55
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
indeed it requires some very big changes, but it's not bad.
Personally, filechoosers work fine for me...
Offline
#1661 On 03/06/2012 at 09:08
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
yes, can you tell me more... which of your apps do you use it in? for me it's the filechooserbuttons that don't work.
can these variables be used in tests, and if so, how do you reference them? EDIT: it's OK, I found how to use them, by prefixing with $G2S
it's true that it's handy for values that don't change during the session! but for toggles that grey out or un-grey another widget, for example, it doesn't work the way I expected, because the "variables" aren't refreshed ...
On the other hand I'm stuck with my filechoosers, I don't get it
thanks for your help
Last edited by yakusa77 (03/06/2012 at 10:14)
Offline
#1662 On 03/06/2012 at 18:47
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
it's true that it's handy for values that don't change during the session! but for toggles that grey out or un-grey another widget, for example, it doesn't work the way I expected, because the "variables" aren't refreshed ...
Say you have 2 toggles:
- _toggle_1
- _toggle_2
the config file:
[TOGGLE]
_toggle_1 = True
_toggle_2 = False
At the start of the script:
${G2S_toggle_1} is True
${G2S_toggle_2} is False
To change your variable's value:
function _toggle_1 { G2S_toggle_1=${@}; }
function _toggle_2 { G2S_toggle_2=${@}; }
That way your G2S* variable is kept up to date
Hence the big rework needed on projects that already exist.
But for new projects you just use this type of variable directly.
And for my filechooser:
cfg file:
[FILECHOOSER]
_liste_projet = /home/hizoka/Scripts_et_logiciels/scripts
and my gtkfilechooserbutton does show the right folder.
But I ran into problems with mkv extractor gui and the filechooser... I've set it aside until I can dig into it...
what does your filechooser do? folder, file? selection, save? ...
Offline
#1663 On 05/06/2012 at 08:37
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, ansuz, the config crashes quite a bit...
My program works fine, the whole ppa part (adding, removing, editing lines) is OK, and my checkboxes are right.
But when I do a:
function option_close { echo "CONFIG@@SAVE@@@@${HOME}/.config/lpsm/global.cfg"; }
it does save my values:
configsave_________________________[[ CONFIG SAVE ]] TOGGLE[[ CONFIG SAVE ]] WINDOW:principale[[ CONFIG SAVE ]] COMBO[[ CONFIG SAVE ]] FILECHOOSER[[ CONFIG SAVE ]] TREEVIEW[[ CONFIG SAVE ]] TEXTVIEW[[ CONFIG SAVE ]] MISC[[ CONFIG SAVED ]]
but after that, nothing works anymore...
Traceback (most recent call last):
  File "./lpsm.py", line 1704, in rappel_toggled
    getattr(self.th.IMPORT, name_tree) (
AttributeError: 'MyThread' object has no attribute 'IMPORT'
Traceback (most recent call last):
  File "./lpsm.py", line 729, in on_clicked
    getattr(self.th.IMPORT, widget.get_name()) ('clicked')
AttributeError: 'MyThread' object has no attribute 'IMPORT'
and I think that's what is crashing my mkv extractor gui too.
Is there a command to center a window?
I found how to give it x and y, but how do I tell it to go to the center?
I have a textview with coloring that has no value in the config file.
When there is no value to load, line wrapping is not automatic, whereas if there is a variable to load, there's no problem.
Is that normal?!
Last edited by Hizoka (06/06/2012 at 03:57)
Offline
#1664 On 06/06/2012 at 11:59
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
@hizo
Thx for helping Yakuza.
The Git dev branch is updated, I modified the config save a bit.
If it still breaks, try putting a sleep after CONFIG@@SAVE (that's the trouble with having an old PC: my tests pass fine, but on your Formula 1 machine it chokes .... You should all chip in to buy me a recent computer, there'd be fewer problems :D )
Is there a command to center a window?
Well, apart from the window-position property, which sets the initial position, no, there's nothing; you have to do it by hand ...
I have a textview with coloring that has no value in the config file... Is that normal?!
Yes, that's normal: I only force the wrap_mode if there is a config for that textview; otherwise I leave the coder free to choose their wrap_mode, which seems normal to me?!
So you have to do it yourself:
SET@_textview.set_wrap_mode(gtk.WRAP_WORD)
because the "variables" aren't refreshed ...
Yep, I don't do it; if you need it, do as Hizoka suggests ...
Offline
#1665 On 06/06/2012 at 18:31
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Thx for helping Yakuza.
That's only natural, no problem!
The Git dev branch is updated, I modified the config save a bit.
Looks OK after one quick test, I'll keep testing.
Well, apart from the window-position property, which sets the initial position, no, there's nothing; you have to do it by hand ...
Darn, that's a pain...
Yes, that's normal: I only force the wrap_mode if there is a config for that textview; otherwise I leave the coder free to choose their wrap_mode, which seems normal to me?!
I thought you had made it automatic for all textviews, that's why; I didn't know whether it was a bug or not.
Offline
#1666 On 06/06/2012 at 19:36
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
I've just seen the SUBMENU command
can you explain it a bit please?
it's for adding a submenu, right?
so it gives: menubar > menuitem > new menu
is that right?
and knowing that my menubar is called: _menubar1
that my menuitem is called _liste_des_projets
and that I want to add, for example, a submenu called "super test 1" that calls its function prout.
is it possible to choose the type of submenu? image, checkbox...
there you go, thank you.
EDIT: would it be possible to have a command that loads the values of a group of widgets?
something like:
echo 'CONFIG@@LOAD@TEXTVIEW'
that would be really handy, because it would avoid going through HIZO commands every time, with all the processing that follows...
Last edited by Hizoka (06/06/2012 at 23:22)
Offline
#1667 On 07/06/2012 at 18:25
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
I know it's not really urgent, but it would be very handy for me if you set up this LOAD system.
It would save me doing a whole lot of work that will be obsolete in a few days...
Sorry to rush you, but it would save me quite a bit of time...
EDIT: I confirm that CONFIG@@SET cannot save a value containing @@: http://forum.ubuntu-fr.org/viewtopic.ph … 1#p9445361
echo "CONFIG@@SET@@MISC@@ppa_save@@true|ppa:hizo/logiciels|444554 - False|ppa:hizo/kobal|plof"
goes through fine
echo "CONFIG@@SET@@MISC@@ppa_save@@true|ppa:hizo/logiciels|444554@@False|ppa:hizo/kobal|plof"
Traceback (most recent call last):
  File "./lpsm.py", line 3728, in CONFIGSET
    section, var, value = sortie.split('@@')[2:]
ValueError: too many values to unpack
does not save the value.
My goal is to have a variable that serves as a backup of a tree's value.
EDIT 2: ITER does not seem to work...
I load a cfg file and want to use its values right away, so I do an ITER, but it does not go through...
# Block the GUI
echo "SET@window_realized = False"; sleep 0.1
# Load the config file
echo "EXEC@@ParseConfig('${cfg}').load_config(self.gui)"
# Unblock the GUI
echo "SET@window_realized = True"; sleep 0.1
echo "ITER@@projet_suite"
function projet_suite
{
    echo "###################################
G2S_control_source : $G2S_control_source
G2S_nom_licence : $G2S_nom_licence
G2S_changelog_text : $G2S_changelog_text"
}
it doesn't know the values...
so I'm having trouble figuring this out...
EDIT 3: I confirm that my idea for saving a combo's text value works: http://forum.ubuntu-fr.org/viewtopic.ph … 1#p9396161
In the config file, under MISC: val_combo1 =
In the script:
function combo1
{
    combo1=${@}
    echo "CONFIG@@SET@@MISC@@val_combo1@@${@}"
    # Save now or later, depending on what you want
}
Last edited by Hizoka (07/06/2012 at 20:15)
Offline
#1668 On 07/06/2012 at 20:04
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
what does your filechooser do? folder, file? selection, save? ...
Sorry for not replying... actually I use several of them and none works with the config file... selecting folders or files.
and of course thanks for taking the time to reply; I think for now I won't use this feature, since my program runs fairly well as it is.
Offline
#1669 On 07/06/2012 at 20:09
#1670 On 12/06/2012 at 17:15
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Here's a new little bug
I attached the on_clicked callback to the activate signal of a menuitem.
It works fine; the problem is that when I make the window appear at startup, it runs the function.
I tested with and without the window's show, and it does indeed come from there.
Is it possible to filter that via g2s?
Offline
#1671 On 21/06/2012 at 13:15
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hi,
The Git Dev branch is updated.
So, the HELP command was added; it prints the info in the terminal
echo HELP@@G2SCOMMANDE
echo HELP@@G2Scallback
echo HELP@@CONFIG@@SAVE
Added the --lock-cb option.
It blocks the callbacks; unblock them from the script with:
echo SET@lock_cb=False
and to block them again
echo SET@lock_cb=True
That's it ...
Offline
#1672 On 21/06/2012 at 21:11
yakusa77
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Is that to prevent the commands from being run at startup? If so, that's perfect, because that was bugging me on a few things too...
Offline
#1673 On 21/06/2012 at 23:15
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Is that to prevent the commands from being run at startup? If so, that's perfect, because that was bugging me on a few things too...
Yes, that's it. It was already in the old version: the callbacks are only triggered once the interface is displayed, but some widgets cause trouble, so I added this option (which has had its opposite, --unlock-cb, for a long time). (you can thank hizo ...)
It's always better to leave the window hidden in glade and do a show in the script once all the options are loaded (useful with the config; I should update the docs ...).
Last edited by AnsuzPeorth (21/06/2012 at 23:16)
Offline
#1674 On 26/06/2012 at 02:13
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, I have a problem with CONFIG.
When I load a file in full, it gives me:
=> [[ PY ]] => CONFIG@@GET@@@@@@/home/hizoka/.config/lpsm/global.cfg
[[ CONFIG GOT ]]
=> [[ PY ]] => :: FIFO write :: GET@G2S_dput_u="False";G2S_auto_deb="False";G2Sscript_save="";G2S_auto_script="False";G2S_dput_s="False";G2S_projet="3";G2S_script="";G2S_fr="True";G2S_auto_del="False";G2Sppa_tree="False|ppa:|";G2S_dput_f="False";G2Snom_projet="";G2S_en="False";G2Sppa_save="False|ppa:|";G2S_auto_up="False";G2S_dput_o="False";G2S_liste_projet="/home/hizoka/Scripts_et_logiciels/scripts";G2Spackage_val="zenitor";G2S_auto_check="False";
=> [[ PY ]] => DEBUG => in boucle bash : G2S_dput_u="False";G2S_auto_deb="False";G2Sscript_save="";G2S_auto_script="False";G2S_dput_s="False";G2S_projet="3";G2S_script="";G2S_fr="True";G2S_auto_del="False";G2Sppa_tree="False|ppa:|";G2S_dput_f="False";G2Snom_projet="";G2S_en="False";G2Sppa_save="False|ppa:|";G2S_auto_up="False";G2S_dput_o="False";G2S_liste_projet="/home/hizoka/Scripts_et_logiciels/scripts";G2Spackage_val="zenitor";G2S_auto_check="False";
so that's all good.
but when I want to load variables using the whitelist:
=> [[ PY ]] => CONFIG@@GET@@@@_dput_s@@/home/hizoka/.config/lpsm/global.cfg
[[ CONFIG GOT ]]
=> [[ PY ]] => :: FIFO write :: GET@
no variable comes back.
and in the case of a section
=> [[ PY ]] => CONFIG@@GET@@@@TOGGLE@@/home/hizoka/.config/lpsm/global.cfg
[[ CONFIG GOT ]]
=> [[ PY ]] => :: FIFO write :: GET@
the result is the same.
Just in case, I tried saving them, but nothing happens.
=> [[ PY ]] => CONFIG@@SAVE@@@@TOGGLE@@/home/hizoka/.config/lpsm/global.cfg
[[ CONFIG SAVE ]] TOGGLE
[[ CONFIG SAVED ]]
=> [[ PY ]] => DEBUG => in boucle bash :
and the variable in the config file does not change.
But again: with a full load, and likewise a full save, no problem!
Offline
#1675 On 28/06/2012 at 13:18
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hi,
I'll take a look tonight, all being well!
Offline
|
I will explain my issue using an example:
A=[[1,2,10],[1,2,10],[3,4,5]]
B=[[1,2,30],[6,7,9]]
From these lists of lists, I would like to create a third one:
C=A+B
So I get:
C= [[1, 2, 10], [1, 2, 10], [3, 4, 5], [1, 2, 30], [6, 7, 9]]
Notice that there are three lists inside C, the [1, 2, 10], [1, 2, 10], [1, 2, 30] lists, which, if described in terms of [x,y,z], have the same x,y but different z.
So I would like to have this new list:
Averaged= [(1, 2, 16.666), (6, 7, 9), (3, 4, 5)]
where we find only one occurrence of the same x,y from the lists
[1, 2, 10], [1, 2, 10], [1, 2, 30]
and the average of their corresponding z values, (10+10+30)/3=16.666
I tried using for loops at the beginning but ended up trying to do this using defaultdict.
I ended up with this, which keeps each (x,y) once but sums rather than averages the corresponding z values:
from collections import defaultdict

Averaged = []
A = [[1,2,10],[1,2,10],[3,4,5]]
B = [[1,2,30],[6,7,9]]
C = A + B
print "C=", C

ToBeAveraged = defaultdict(int)
for (x, y, z) in C:
    ToBeAveraged[(x, y)] += z

Averaged = [k + (v,) for k, v in ToBeAveraged.iteritems()]
print 'Averaged=', Averaged
Is it possible to do this with defaultdict? Any ideas?
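One direction I'm considering is collecting the z values in a defaultdict(list) and averaging each group afterwards; a sketch of that idea (in Python 3 syntax):

```python
from collections import defaultdict

A = [[1, 2, 10], [1, 2, 10], [3, 4, 5]]
B = [[1, 2, 30], [6, 7, 9]]

# Collect every z under its (x, y) key, then average each group at the end.
groups = defaultdict(list)
for x, y, z in A + B:
    groups[(x, y)].append(z)

averaged = [(x, y, sum(zs) / float(len(zs))) for (x, y), zs in groups.items()]
print(averaged)
```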
|
Anything you can do from the command line you can also do from the JSON API, which means that the same unlock command could be sent from within code just as easily. To my knowledge there is no pre-built utility capable of this, but the API is simple enough that I can't imagine it being terribly difficult to build such a tool.
Edit: It was much easier than expected to do this in Python. Assuming you have Python's JSON-RPC module installed, just use this code:
from jsonrpc import ServiceProxy
from getpass import getpass
access = ServiceProxy("http://127.0.0.1:8332")
pwd = getpass("Enter wallet passphrase: ")
access.walletpassphrase(pwd, 60)
Similarly, you could call access.walletlock() to lock the wallet on demand and walletpassphrasechange(old, new) to change the passphrase without it ever touching the command line.
Edit 2: I also submitted an issue to the devs on github on your behalf.
Edit 3: A pull request containing my python scripts has been accepted. Downloading the bitcoin source from github now includes scripts for this purpose in contrib/wallettools
Edit 4: A new bug report was filed to request that the builtin command behaves properly.
|
This is a loop I use to interpret key events in a python game.
# Event loop
for event in pygame.event.get():
    if event.type == QUIT:
        pygame.quit()
        sys.exit()
    if event.type == pygame.KEYDOWN:
        if event.key == pygame.K_a:
            my_speed = -10
        if event.key == pygame.K_d:
            my_speed = 10
    if event.type == pygame.KEYUP:
        if event.key == pygame.K_a:
            my_speed = 0
        if event.key == pygame.K_d:
            my_speed = 0
The 'A' key represents up, while the 'D' key represents down. I use this loop within a larger drawing loop that moves the sprite using this:
Paddle1.rect.y += my_speed;
I'm just making a simple pong game (as my first real code/non-GameMaker game), but there's a problem when switching between moving upwards and downwards. Essentially, if I hold the up (or down) button and then press down (or up), now holding both buttons, the direction will change, which is a good thing. But if I then release the first button, the sprite will stop. It won't continue in the direction of my second input.
This kind of key pressing is actually common with WASD users, when changing directions quickly. Few people remember to let go of the first button before pressing the second. But my program doesn't accommodate the habit.
I think I understand the reason, which is that when I let go of my first key, the KEYUP event still triggers, setting the speed to 0. I need to make sure that if a key is released, it only sets the speed to 0 if another key isn't being pressed. But the interpreter will only go through one event at a time, I think, so I can't check if a key has been pressed if it's only interpreting the commands for a released key.
This is my dilemma. I want to set up the key controls so that a player doesn't have to press one button at a time to switch between moving upwards and downwards, making it smoother. How can I do that?
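One approach I've sketched out, as plain Python with no pygame (the 'a'/'d' strings stand in for pygame.K_a/pygame.K_d events): keep a list of the currently held direction keys and let the most recently pressed one win.

```python
# "Last key pressed wins": track held keys in order; speed follows the newest.
SPEEDS = {'a': -10, 'd': 10}

def make_handler():
    held = []  # direction keys currently held, oldest first
    def handle(event_type, key):
        if event_type == 'down' and key in SPEEDS:
            held.append(key)
        elif event_type == 'up' and key in held:
            held.remove(key)
        return SPEEDS[held[-1]] if held else 0
    return handle

handle = make_handler()
print(handle('down', 'a'))  # -10: moving up
print(handle('down', 'd'))  # 10: both held, newest wins
print(handle('up', 'a'))    # 10: keeps moving down
print(handle('up', 'd'))    # 0: nothing held
```

In the real event loop, KEYDOWN/KEYUP handlers would call handle() and assign the return value to my_speed.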
|
JavaScript
hiyatran — 2011-08-24T22:56:44-04:00 — #1
I would like to display the elements in my array but it is NOT working. Here's my code:
<HTML>
<HEAD>
<TITLE>Test Input</TITLE>
<script type="text/javascript">
function addtext() {
var openURL=new Array("http://google.com","http://yahoo.com","http://www.msn.com","http://www.bing.com");
document.writeln('<table>');
for (i=0;i<=openURL.length-1;i++){
document.writeln('<tr><td>openURL[i]</td></tr>');
}
document.writeln('</table>');
}
</script>
</HEAD>
<body onload="addtext()">
</BODY>
</HTML>
Here's the ouput:
openURL[i]
openURL[i]
openURL[i]
openURL[i]
It should display:
http://google.com
http://yahoo.com
http://msn.com
http://bing.com
Any comments or suggestions are greatly appreciated.
thanks
aussiejohn — 2011-08-25T00:28:03-04:00 — #2
The code that displays your variable is actually just outputting a string.
document.writeln('<tr><td>openURL[i]</td></tr>');
To use variables and strings you'll have to concatenate them together. for example:
var someVar = "this is the first part " + anotherVariable + " this is the second part";
In this instance you would do it as follows:
document.writeln('<tr><td>' + openURL[i] + '</td></tr>');
aussiejohn — 2011-08-25T00:38:01-04:00 — #3
Also, just to dig in to your code a bit further (hope you don't mind)
You don't need to use "new Array()" to instantiate arrays, square brackets will do the job
var anArray = ["item 1", "item 2", "item 3"];
"document.write" and its variants are usually poor for performance; if you're just testing stuff it doesn't matter too much, but it's not the best habit to get into.
When you're injecting content; where possible, it's better to inject 1 big chunk than several smaller ones.
You could use innerHTML for example to add content to the page
Let's say you have a <div id="test"></div> in your body section somewhere, you could then do something along the following lines:
function addtext() {
    var openURL = ["http://google.com", "http://yahoo.com", "http://www.msn.com", "http://www.bing.com"];
    var htmlStr = '<table>';
    for (i = 0; i <= openURL.length - 1; i++) {
        htmlStr += '<tr><td>' + openURL[i] + '</td></tr>';
    }
    htmlStr += '</table>';
    document.getElementById("test").innerHTML = htmlStr;
}
hiyatran — 2011-08-25T03:48:41-04:00 — #4
How would I put my array into the window.open() function
document.writeln('<tr><td> <a href = "" onclick="window.open(\\'http://google.com\\'); return false;">'+openURL[i]+'</td></tr></a>');
So instead of window.open(\'http://google.com\');
I tried window.open(\'+openURL[i]+\');
but it does NOT work
aussiejohn — 2011-08-25T07:18:26-04:00 — #5
When you're passing in openURL[i] you need to break out of the string for the variable, but still emit a pair of escaped single quotes around it, so the generated onclick ends up as window.open('http://google.com'). At the moment you're effectively putting the literal string "openURL[i]" in window.open.
e.g.
document.writeln('<tr><td> <a href="" onclick="window.open(\'' + openURL[i] + '\'); return false;">' + openURL[i] + '</a></td></tr>');
Should do the trick
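As a quick self-check of the quoting, you can build the generated markup as a plain string and log it (a sketch; a DOM approach with createElement/addEventListener would avoid the escaping entirely):

```javascript
// Build the rows and inspect the onclick attribute each one gets.
var openURL = ["http://google.com", "http://yahoo.com"];
var rows = '';
for (var i = 0; i < openURL.length; i++) {
  rows += '<tr><td><a href="" onclick="window.open(\'' + openURL[i] +
          '\'); return false;">' + openURL[i] + '</a></td></tr>';
}
console.log(rows);
```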
|
Possible Duplicate:
Creating graph with date and time in axis labels with matplotlib
I don't know how to change the date format when plotting with matplotlib. My data has the full date in my dictionary, but I only want to plot the hours, minutes, and seconds.
from datetime import datetime
import matplotlib.pyplot as plt
dico = {'A01': [(u'11/10/12-08:00:01', 2.0), (u'11/10/12-08:10:00', 10.0), \
(u'11/10/12-08:20:01', 5.0), (u'11/10/12-08:30:01', 15.0), \
(u'11/10/12-08:40:00', 7.0), (u'11/10/12-08:50:01', 45.0)],
'A02': [(u'11/10/12-08:00:01', 10.0), (u'11/10/12-08:10:00', 12.0), \
(u'11/10/12-08:20:01', 15.0), (u'11/10/12-08:30:01', 10.0), \
(u'11/10/12-08:40:00', 17.0), (u'11/10/12-08:50:01', 14.0)]}
x = []
y = []
for key in sorted(dico.iterkeys()):
    points = [(datetime.strptime(i[0], "%d/%m/%y-%H:%M:%S"), i[1])
              for i in dico[key]]
    points.sort()
    x, y = zip(*points)
    plt.plot(x, y, label=key)
# plotting
plt.gcf().autofmt_xdate()
plt.legend(loc='upper right')
plt.xlabel('Dates')
plt.ylabel("titre")
plt.title("Modbus")
plt.show()
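For example, I tried forcing a time-only axis format with a DateFormatter; a reduced sketch (Agg backend so no display is needed, a few hard-coded points instead of the dictionary, '%H:%M:%S' being the format I'm after):

```python
import matplotlib
matplotlib.use('Agg')          # render without a display
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
from datetime import datetime
import io

xs = [datetime(2012, 10, 11, 8, m, 1) for m in (0, 10, 20)]
ys = [2.0, 10.0, 5.0]

fig, ax = plt.subplots()
ax.plot(xs, ys, label='A01')
# Show only hours:minutes:seconds on the x axis.
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M:%S'))
fig.autofmt_xdate()

buf = io.BytesIO()
fig.savefig(buf, format='png')
```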
|
So... I'm working on trying to move from basic Python to some GUI programming, using PyQt4. I'm looking at a couple different books and tutorials, and they each seem to have a slightly different way of kicking off the class definition.
One tutorial starts off the classes like so:
class Example(QtGui.QDialog):
    def __init__(self):
        super(Example, self).__init__()
Another book does it like this:
class Example(QtGui.QDialog):
    def __init__(self, parent=None):
        super(Example, self).__init__(parent)
And yet another does it this way:
class Example(QtGui.QDialog):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
I'm still trying to wrap my mind around classes and OOP and super() and all... am I correct in thinking that the last line of the third example accomplishes more or less the same thing as the calls using super() in the previous ones, by explicitly calling the base class directly? For relatively simple examples such as these, i.e. single inheritance, is there any real benefit or reason to use one way vs. the other? Finally... the second example passes parent as an argument to super() while the first does not... any guesses/explanations as to why/when/where that would be appropriate?
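My current mental model, as a Qt-free sketch (plain Python classes standing in for QDialog), is that in single inheritance the super() call and the direct base-class call end up doing the same thing:

```python
# Plain-Python stand-ins for the Qt classes, to compare the two call styles.
class Base(object):
    def __init__(self, parent=None):
        self.parent = parent

class WithSuper(Base):
    def __init__(self, parent=None):
        super(WithSuper, self).__init__(parent)   # looked up via the MRO

class DirectCall(Base):
    def __init__(self, parent=None):
        Base.__init__(self, parent)               # explicit base-class call

print(WithSuper('p').parent, DirectCall('p').parent)
```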
|
I have written code to copy text using the Actions class of Selenium WebDriver. All I have been able to do is drag the cursor around the text and copy it.
Code snippet :
Actions a = action.clickAndHold(element)
.moveToElement(element1)
.release()
.keyDown(Keys.CONTROL)
.sendKeys("c")
.keyUp(Keys.CONTROL);
a.perform();
Now how do I print this text to the console using Java?
|
Django recommends that if I am going to use only one server (Apache) to serve both dynamic and static files, then I should serve the static files using django.contrib.staticfiles.
So in my settings.py I have loaded django.contrib.staticfiles to my INSTALLED_APPS and django.core.context_processors.static to my TEMPLATE_CONTEXT_PROCESSORS.
I noticed in the admin templates that it links to static files like this (from index.html):
{% load i18n admin_static %}
{% block extrastyle %}{{ block.super }}<link rel="stylesheet" type="text/css" href="{% static "admin/css/dashboard.css" %}" />{% endblock %}
But looking at the template tag admin_static, it's simply a wrapper for static:
from django.conf import settings
from django.template import Library

register = Library()

if 'django.contrib.staticfiles' in settings.INSTALLED_APPS:
    from django.contrib.staticfiles.templatetags.staticfiles import static
else:
    from django.templatetags.static import static

static = register.simple_tag(static)
So I concluded that because every admin static file is served with an admin/... prefix, the full path (for my case) should be
/usr/lib64/python2.7/site-packages/django/contrib/admin/static
So I set that path in my STATICFILES_DIRS inside settings.py, but Apache still won't serve any static files (after restarting the server). Where did I make a mistake in my logic?
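For reference, my understanding of the conventional setup (a sketch; the paths are placeholders, the setting names are the standard Django ones): admin files are found automatically by the app-directories finder, STATICFILES_DIRS is only for extra non-app directories, and Apache serves whatever collectstatic copies into STATIC_ROOT.

```python
# settings.py (sketch) -- placeholder paths
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/example/static/'   # target of `manage.py collectstatic`
# No admin path needed here: the staticfiles app finds each app's static/
# directory itself; STATICFILES_DIRS is only for extra, non-app directories.
STATICFILES_DIRS = ()
```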
|
CSS
810311 — 2013-07-15T07:27:13-04:00 — #1
Hello Good People,
Please, take a look at this site http://yogastudio.atspace.com/
Issue: the following images get nudged down in Opera but look ok in Firefox and Chrome
<img id="mon" src="images/mon_img.png" alt="" width="113" height="113"/>
<img id="tue" src="images/tue_img.png" alt="" width="113" height="113"/>
<img id="wed" src="images/wed_img.png" alt="" width="113" height="113"/>
<img id="thu" src="images/thu_img.png" alt="" width="113" height="113"/>
<img id="fri" src="images/fri_img.png" alt="" width="113" height="113"/>
<img id="sat" src="images/sat_img.png" alt="" width="113" height="113"/>
<img id="sun" src="images/sun_img.png" alt="" width="113" height="113"/>
Any advice is appreciated
ronpat — 2013-07-15T12:15:44-04:00 — #2
It breaks pretty wildly in Firefox, too, depending on the user's font size.
Relative positioning is not the best choice for positioning most objects. Between relative positioning and em units for positioning, the layout breaks depending on the user's font size, platform, and browser.
I suggest you reconsider your layout strategy and put each row in its own box, which in turn, are within a "calendar" container.... or something like that.
810311 — 2013-07-15T16:38:01-04:00 — #3
Thanks ronpat. Should I use any positioning at all for the scenario you suggest, or just use padding/margin to move the boxes around within the container? Also, which length unit do you suggest?
ronpat — 2013-07-15T17:26:58-04:00 — #4
If each day-of-the-week were in a separate table, the day-of-the-week image could be absolutely positioned to the top left corner of the table (you cannot p:a within table-cells, of course); thereby, using px to position that image. It should be possible to align the columns vertically by giving each table table-layout:fixed and assigning widths to the columns. It'll take some experimenting, but I believe that will work reasonably well in your fixed width layout.
Something like this:
table {
    position: relative;
}
.day-image {
    position: absolute;
    top: -20px;
    left: -20px;
}

<table>
    <img class="day-image" />
    <tr>
        <td></td>
        <td></td>
        <td></td>
    </tr>
</table>
The table would be repeated for each day-of-the-week with an image.
(untested)
810311 — 2013-07-16T12:13:53-04:00 — #5
I guess I am doing something wrong - still breaks in Firefox when I increase font size.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<title>Студия Sattva</title>
<meta http-equiv="content-type" content="text/html; charset=windows-1251" />
<link href="css/style_1.css" rel="stylesheet" type="text/css" />
</head>
<body>
<h1>Расписание занятий</h1>
<div id="calendar">
<table class="schedule">
<img class="mon-image" src="images/mon_img.png" alt="" width="113" height="113"/>
<thead class="border">
<tr>
<th scope="col" class="col1"></th>
<th scope="col" class="col2">Время</th>
<th scope="col" class="col3">Занятие</th>
<th scope="col" class="col4">Преподаватель</th>
</tr>
</thead>
<tbody>
<tr class="height">
<th scope="row"></th>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<th scope="row">Пн</th>
<td class="strip">7:30 – 8:30</td>
<td class="strip">Цигун</td>
<td class="strip">Светлана по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>9:00 – 10:30</td>
<td>Хатха-йога</td>
<td>Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">11:00 – 12:00</td>
<td class="strip">Йога для беременных</td>
<td class="strip">Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>12:30 – 16:30</td>
<td>Время для индивидуальных занятий йогой</td>
<td>по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">17:00 – 18:00</td>
<td class="strip">Йога для беременных</td>
<td class="strip">Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>18:00 – 19:00</td>
<td>Оздоровительная йога по улучшению зрения</td>
<td>Ольга </td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">19:15 – 20:45</td>
<td class="strip">Хатха-йога</td>
<td class="strip">Ольга</td>
</tr>
<tr class="height">
<th scope="row"></th>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<table class="schedule">
<img class="tue-image" src="images/tue_img.png" alt="" width="113" height="113"/>
<thead >
<tr id="border" >
<th scope="col" class="col1"></th>
<th scope="col" class="col2"></th>
<th scope="col" class="col3"></th>
<th scope="col" class="col4"></th>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">Вт</th>
<td class="strip">7:30 – 8:30</td>
<td class="strip">Цигун</td>
<td class="strip">Светлана по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>9:00 – 10:30</td>
<td>Хатха-йога</td>
<td>Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">11:00 – 12:00</td>
<td class="strip">Йога для беременных</td>
<td class="strip">Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>12:30 – 16:30</td>
<td>Время для индивидуальных занятий йогой</td>
<td>по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">17:00 – 18:00</td>
<td class="strip">Йога для беременных</td>
<td class="strip">Ольга по записи</td>
</tr>
<tr>
<th scope="row"></th>
<td>18:00 – 19:00</td>
<td>Оздоровительная йога по улучшению зрения</td>
<td>Ольга </td>
</tr>
<tr>
<th scope="row"></th>
<td class="strip">19:15 – 20:45</td>
<td class="strip">Хатха-йога</td>
<td class="strip">Ольга</td>
</tr>
</tbody>
</table>
</div><!--end calendar-->
</body>
</html>
/*start table*/
#calendar {
padding: 1em;
border: 1px dashed green;
}
table.schedule {
position: relative;
table-layout: fixed;
border-collapse:collapse;
font-family: Arial, Helvetica, sans-serif;
}
table.schedule thead tr {
color: #ef7eb2;
background: #f1fefe;
}
table.schedule thead tr th {
text-align: left;
padding: 7px 0px 7px 0px;
}
.col1 {
width: 70px;
}
.col2 {
width: 120px;
}
.col3 {
width: 450px;
}
.col4 {
width: 350px;
}
.height {
height: 17px;
}
.mon-image {
position: absolute;
top: 102px;
left: -6px;
z-index: 2;
}
.tue-image {
position: absolute;
top: 287px;
left: -6px;
z-index: 2;
}
.strip {
background: #f1fefe;
}
.border {
border-bottom: 2px solid #ef7eb2;
}
#border {
border-top: 2px solid #ef7eb2;
background-color: #fff;
}
ronpat — 2013-07-16T12:58:01-04:00 — #6
Nope, you're fine. My suggestion was a big fail. Remind me to never post without testing first :/
Hopefully, a better (tested) idea will follow...
810311 — 2013-07-16T14:59:38-04:00 — #7
No worries. We all know we all learn/learned by trial and error. Thanks for your time.
ronpat — 2013-07-16T15:07:58-04:00 — #8
Not to give up! I've come up with a solution that works; however, I'm trying to understand {position:relative; top:-8em;} before submitting it.
810311 — 2013-07-16T15:16:00-04:00 — #9
no worries. thanks.
felgall — 2013-07-16T15:26:57-04:00 — #10
You can't have an image outside the <tr> tag but still inside the table, as only table-related tags are allowed directly inside the table tag. It needs to be inside a <tr> tag inside the <thead> in order for the HTML to be valid.
Once your HTML is valid it should be easier to ensure that the positioning works consistently.
810311 — 2013-07-16T16:08:08-04:00 — #11
did you mean something like this, felgall:
<thead class="border">
<tr>
<img class="mon-image" src="images/mon_img.png" alt="" width="113" height="113"/>
<th scope="col" class="col1"></th>
<th scope="col" class="col2">Время</th>
<th scope="col" class="col3">Занятие</th>
<th scope="col" class="col4">Преподаватель</th>
</tr>
</thead>
.....or like this
<thead class="border">
<tr><img class="mon-image" src="images/mon_img.png" alt="" width="113" height="113"/></tr>
<tr>
<th scope="col" class="col1"></th>
<th scope="col" class="col2">Время</th>
<th scope="col" class="col3">Занятие</th>
<th scope="col" class="col4">Преподаватель</th>
</tr>
</thead>
ronpat — 2013-07-16T16:23:49-04:00 — #12
This is a tested example solution to your design request... with caveats.
CSS
.border div {
position:relative;
}
#mon {
position:absolute;
top:-13px;
left:-41px;
z-index:2;
}
#tue {
position:absolute;
top:-22px;
left:-36px;
z-index:2;
}
#wed {
position:absolute;
top:-21px;
left:-33px;
z-index:2;
}
#thu {
position:absolute;
top:-20px;
left:-33px;
z-index:2;
}
#fri {
position:absolute;
top:-25px;
left:-36px;
z-index:2;
}
#sat {
position:absolute;
top:-22px;
left:-36px;
z-index:2;
}
#sun {
position:absolute;
top:-25px;
left:-36px;
z-index:2;
}
HTML
<tr class="border">
<th scope="row"><div><img id="tue" src="images/tue_img.png" alt="" width="113" height="113"/></div></th>
<td></td>
<td></td>
<td></td>
</tr>
*** The top underline does not work properly. That row is coded differently and I have not yet figured it out. ***
The plethora of {position:relative;top:negative-somethings} is poor code. It indicates a fundamental misunderstanding about positioning objects on a page. Likewise, the number of {margin-top:negative-somethings} tells the same tale.
HTML is normally coded to flow smoothly and only requires negative positioning for special purposes.
If you can apply the above example to your page, it will almost be fixed.
The images are not consistently placed within their 113x113 image, thus the slightly different offsets.
This has been tested in FF, Opera, Chrome.
Sorry about the goofy post earlier. My brain sometimes takes unexplainable holidays.
felgall — 2013-07-16T17:52:51-04:00 — #13
No.
A <tr> is only allowed to contain <th> and/or <td> tags. The <img> should be inside either a <th> or a <td> for it to be valid within a table. The example ronpat shows has the <img> tag inside a <th>, so the <table> will validate and hence the CSS can work properly.
810311 — 2013-07-18T16:35:54-04:00 — #14
The plethora of {position:relative;top:negative-somethings} is poor code. It indicates a fundamental misunderstanding about positioning objects on a page. Likewise, the number of {margin-top:negative-somethings} tells the same tale.
I corrected those - it works fine now when I increase fonts. I tested FF, Chrome, and Opera. Yes, I agree that was poor code - I guess sometimes my brain is not there. Sorry. Thanks for pointing this out for me, ronpat.
HTML is normally coded to flow smoothly and only requires negative positioning for special purposes.
Totally agree. Best way is to allow elements take their place naturally and then apply CSS. Thanks again for reminding me.
Here's the link for the corrected version http://yogastudio.atspace.com/
Please, let me know if I need any more corrections.
810311 — 2013-07-18T16:43:45-04:00 — #15
[quote="felgall,post:13,topic:32826"]
No.
A <tr> is only allowed to contain <th> and/or <td> tags. The <img> should be inside either a <th> or a <td> for it to be valid within a table. The example ronpat shows has the <img> tag inside a <th>, so the <table> will validate and hence the CSS can work properly.
[/quote]
I didn't know about those restrictions. Thanks for your advice, felgall. I've also noticed ronpat put an image inside the div
<th scope="row"><div><img id="tue" src="images/tue_img.png" alt="" width="113" height="113"/></div></th>
why use div inside th?
ronpat — 2013-07-18T17:37:11-04:00 — #16
It looks really good to me. Nicely done!
The validator says that the break tags are malformed:
<samp></br></samp> should be written <samp><br/></samp>
Related to the use of break tags... it is recommended that items be given classes, or some such targeting technique, and that margins or padding be used instead of break tags. Doing so gives you better control over the layout and eliminates unnecessary tags. Your call.
The other thing that you can do to tidy up your code a bit is to crop the transparent area out of the day-of-week images. They will end up as 51 x 52 images.
You will then be able to position all of them with just one css entry (instead of 7).
#mon,#tue,#wed,#thu,#fri,#sat,#sun {
width:51px;
height:52px;
position:absolute;
top:6px;
left:-5px;
z-index:2;
}
Of course, I would suggest replacing all of those ids with one classname.
You can download the resized images here if you wish:
https://www.dropbox.com/sh/e1teog7t6o13ktt/EBO8I71rJ8
EDIT:
Tables and table cells cannot be positioned. So, to provide an "anchor" that can be positioned, I put a <div> inside the <th> and set it to {position:relative}. The image can then be set to {position:absolute} with respect to that <div>.
Cheers!
francky — 2013-07-18T22:41:25-04:00 — #17
Hi 810311,
In addition maybe some refinements:
Table cells: the empty cells can be removed, using CSS for the height positioning instead. The <td>s can get more line-height for better readability.
Top borders: can be attached to the Tuesday/Sunday tables themselves.
Hidden text for the days: can stay hidden by absolutely positioning the day images above them. Tables with a small height must get extra margin-bottom to compensate.
Images: can get some anti-aliasing.
<title>: with the town in it, visitors don't have to guess which town the studio is in. With the word Йога in it, they don't have to guess what kind of studio it is. Good for Google.
All together the html for a table can be:
<table class="schedule borderTop">
<tbody>
<tr>
<th scope="row" class="col1 day" rowspan="9">Вт<img src="images/tue_img-nw.png" alt="" /></th>
<td class="strip col2">8:00 – 9:00</td>
<td class="strip col3">Энергетическая йога</td>
<td class="strip col4">Ольга по записи</td>
</tr>
<tr>
<td>9:30 – 10:45</td>
<td>Постнатальная (послеродовая) йога</td>
<td>Ольга по записи</td>
</tr>
<tr>
<td class="strip">11:00 – 12:00</td>
<td class="strip">Baby – йога</td>
<td class="strip">Ольга Баранова, по записи, набор в группу</td>
</tr>
...
<tr>
<td class="strip">19:15 – 20:45</td>
<td class="strip">Комплексная йога</td>
<td class="strip">Дима</td>
</tr>
</tbody>
</table>
With the added css:
.schedule {
margin-bottom: 20px;
}
.schedule td {
line-height: 140%;
}
.borderTop {
border-top: 2px solid #EF7EB2;
}
.day {
position: relative;
vertical-align: top;
}
.day img {
position: absolute;
left: 0;
margin-top: -1px;
}
.extraBottom {
margin-bottom: 50px;
}
.bar {
margin-bottom: 10px;
}
.imgfloat {
margin-top: 15px;
}
PS: I guess the class "day" can be combined with the class "col1" (if that is not disturbing other pages).
francky — 2013-07-18T23:17:44-04:00 — #18
.col2 {
width: 9em; /* was: 120px; */
}
... and the time column expands as the font scale grows.
810311 — 2013-07-19T17:30:02-04:00 — #19
Break tags are mostly used for single line breaks. Fixed.
#mon,#tue,#wed,#thu,#fri,#sat,#sun
Selector grouping. Yes, a class would work too. Thanks, ronpat.
I also used your images.
810311 — 2013-07-19T17:34:02-04:00 — #20
Hi Francky, I like your solution too, especially this part
<th scope="row" class="col1 day" rowspan="9">Вт<img src="images/tue_img-nw.png" alt="" /></th>
It allows reducing the CSS to a minimum. Good job! Nicely done!
|
Last week I finally got around to downgrading my laptop from karmic to jaunty. I did this for a couple of reasons. For my laptop, control of external displays regressed from working flawlessly to crashing every time it tried to detect an external display that it didn't boot with.
Secondly, Eclipse has some major problems on karmic due to some changes. The fault is with Eclipse, but it will be some time before any fixes work through to the Eclipse-based products I need to work with, so I downgraded.
Once I got witter to the point that it had multiple views, I immediately wanted a nice way to switch between those views. In the first instance I just used buttons, which have the advantage of going directly to the view you want, but at the cost of screen space to show the button, or alternatively needing to go via menus to get to the buttons.
Enter ‘gestures’. I wanted to be able to swipe left or right to switch views, much like on the multi-desktop of the N900. So I did some searching and eventually found a reference to gPodder, which is also written in Python and introduced swipe gestures.
So I dug around the source and found that essentially they capture the position of a ‘pressed’ event and the position of the ‘released’ event and calculate the difference. If it’s over a certain threshold left or right, they trigger the appropriate method.
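That press/release delta check is simple enough to sketch in plain Python. This is a minimal, hypothetical reconstruction of the idea, not gPodder's actual code; the threshold value and function name are my own:

```python
SWIPE_THRESHOLD = 50  # minimum horizontal travel in pixels (illustrative value)

def detect_swipe(press_x, release_x, threshold=SWIPE_THRESHOLD):
    """Classify a press/release pair as a left swipe, right swipe, or neither.

    press_x is the x coordinate of the 'pressed' event and release_x the
    x coordinate of the 'released' event; the sign of the difference gives
    the direction, and its magnitude must exceed the threshold.
    """
    delta = release_x - press_x
    if delta <= -threshold:
        return 'left'
    if delta >= threshold:
        return 'right'
    return None
```

A widget's button-press/button-release handlers would then just record the two x coordinates and call detect_swipe() on release.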
This seemed reasonable enough, but I couldn’t figure out what object was emitting those signals. As I looked into it I found something better.
The hildon pannableArea emits signals for horizontal scrolling and vertical scrolling. And it does so regardless of whether it will actually scroll.
What this means is that for witter, I use a pannableArea to do kinetic scrolling of the treeview which shows the tweets. There is no horizontal movement, but I can use the following:
pannedWindow.connect('horizontal-movement', self.gesture)
Then in the method gesture I get:
def gesture(self, widget, direction, startx, starty):
    if direction == 3:
        pass  # go one way
    elif direction == 2:
        pass  # go the other
Those numbers do have constants associated with them, but I haven’t figured out where I’m supposed to reference them from, so I’m just using the numbers.
The cool thing about this is that it is quite selective about what constitutes horizontal movement. Going diagonally left and up or down does NOT trigger this signal.
So it’s a pretty nice way to switch between views. Now I need to figure out how to do the cool dragging of views like the desktop, rather than just a straight flip of views.
Posted in maemo, project, SoftwareEngineering Tagged: gestures, hildon, N900, pannablearea, Python, swipe, witter
As a teaser to a future post I thought I’d post an early screenshot of Witter using a custom cell renderer. This is about the first point at which my cell renderer is actually capable of showing tweets at all.
It completely lacks any layout of information, or colouring/sizing of text. But I wanted to put it up to a) contrast with when I’m done, and b) show that it took me nearly 200 lines of code, just to get this far…
Posted in maemo, project, SoftwareEngineering Tagged: custom cellrenderer, gtk, N900, treeview, witter
|
You can't use the same implementation as the result object of os.stat() and others. However, Python 2.6 has a new factory function that creates a similar datatype called a named tuple. A named tuple is a tuple whose slots can also be addressed by name. According to the documentation, a named tuple should not require any more memory than a regular tuple, since instances don't have a per-instance dictionary. The factory function signature is:
collections.namedtuple(typename, field_names[, verbose])
The first argument specifies the name of the new type, the second argument is a string (space or comma separated) containing the field names, and finally, if verbose is true, the factory function will also print the generated class.
Example
Suppose you have a tuple containing a username and password. To access the username you get the item at position zero and the password is accessed at position one:
credential = ('joeuser', 'secret123')
print 'Username:', credential[0]
print 'Password:', credential[1]
There's nothing wrong with this code, but the tuple isn't self-documenting. You have to find and read the documentation about the positioning of the fields in the tuple. This is where named tuples come to the rescue. We can recode the previous example as follows:
import collections
# Create a new sub-tuple named Credential
Credential = collections.namedtuple('Credential', 'username, password')
credential = Credential(username='joeuser', password='secret123')
print 'Username:', credential.username
print 'Password:', credential.password
If you are interested in what the code looks like for the newly created Credential type, you can add verbose=True to the argument list when creating the type. In this particular case we get the following output:
import collections
Credential = collections.namedtuple('Credential', 'username, password', verbose=True)
class Credential(tuple):
'Credential(username, password)'
__slots__ = ()
_fields = ('username', 'password')
def __new__(_cls, username, password):
return _tuple.__new__(_cls, (username, password))
@classmethod
def _make(cls, iterable, new=tuple.__new__, len=len):
'Make a new Credential object from a sequence or iterable'
result = new(cls, iterable)
if len(result) != 2:
raise TypeError('Expected 2 arguments, got %d' % len(result))
return result
def __repr__(self):
return 'Credential(username=%r, password=%r)' % self
def _asdict(t):
'Return a new dict which maps field names to their values'
return {'username': t[0], 'password': t[1]}
def _replace(_self, **kwds):
'Return a new Credential object replacing specified fields with new values'
result = _self._make(map(kwds.pop, ('username', 'password'), _self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
return tuple(self)
username = _property(_itemgetter(0))
password = _property(_itemgetter(1))
The named tuple doesn't only provide access to fields by name; it also contains helper functions such as _make(), which helps create a Credential instance from a sequence or iterable. For example:
cred_tuple = ('joeuser', 'secret123')
credential = Credential._make(cred_tuple)
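Besides _make(), the generated class above also shows _asdict() and _replace(), which are handy when you want a dict view of the fields or a modified copy (tuples being immutable). A quick sketch:

```python
import collections

Credential = collections.namedtuple('Credential', 'username, password')
cred = Credential(username='joeuser', password='secret123')

# _asdict() maps field names to their values
fields = cred._asdict()

# _replace() returns a *new* Credential with the given fields changed;
# the original tuple is left untouched
rotated = cred._replace(password='secret456')
```

Note that _replace() goes through _make(), so the result is a full Credential instance, not a plain tuple.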
The Python library documentation for namedtuple has more information and code examples, so I suggest that you take a peek.
|
I have a scipy.sparse.dok_matrix (dimensions m x n) and want to assign a flat numpy array of length m to one of its columns.
for col in xrange(n):
dense_array = ...
dok_matrix[:,col] = dense_array
However, this code raises an exception in dok_matrix.__setitem__ when it tries to delete a non-existing key (del self[(i,j)]).
So, for now I am doing this the inelegant way:
for col in xrange(n):
dense_array = ...
for row in dense_array.nonzero():
dok_matrix[row, col] = dense_array[row]
This feels very inefficient. So, what is the most efficient way of doing this?
Thanks!
|
#1676 On 28/06/2012 at 20:51
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
re,
OK, for the whitelists you have to specify the section and the variable... I'll think about how to do it better... But I think it will be hard to do it any differently: you really have to indicate the section and the variable to keep. It's not like the blacklist, which is simpler; there you just continue if the variable or section is in it.
Offline
#1677 On 29/06/2012 at 08:44
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Can you give me a concrete example, please?
And indeed, it would be really nice to be able to specify just what you want.
Offline
#1678 On 29/06/2012 at 10:38
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Can you give me a concrete example, please?
echo CONFIG@@GET@@@@PANED,_vpaned1
And indeed, it would be really nice to be able to specify just what you want.
Yep, but that won't be possible. For the whitelist I don't think there is any other solution, since you have to be able to whitelist either sections or variables, so...
Offline
#1679 On 29/06/2012 at 16:15
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
So what happens here?
Does it load only the vpaned variable of the PANED section?
And if I have several paneds to load, do I just add them?
echo CONFIG@@GET@@@@PANED,_vpaned1,_vpaned2,TOGGLE,_tog1,_tog2
Offline
#1680 On 29/06/2012 at 16:18
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
And if I have several paneds to load, do I just add them?
Yep, that's it.
Offline
#1681 On 29/06/2012 at 16:23
#1682 On 29/06/2012 at 17:57
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
And for the save whitelist, is it the same?
Yep, it's the same principle for all the whitelists.
Offline
#1683 On 30/06/2012 at 23:33
benoitfra
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Good evening, or good morning if my message is read tomorrow.
I have a question about glade2script: can you change the text size of a TEXTVIEW? I searched but couldn't find anything.
Also, I don't understand how treeviews work. I saw that you have to add rows and columns in the go..sh file, but beyond that I'm struggling with how to use them.
Does anyone have an ultra-simple example of adding/removing a row?
Thanks in advance
Offline
#1684 On 01/07/2012 at 10:38
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hi benoitfra! For textviews:
have you looked at the TEXT@@CREATETAG command?
EDIT:
Pango Attribute Type Constants
pango.ATTR_SIZE => Specifies a font size in thousandths of a point.
=> But you'll have to check with AnsuzPeorth on how to use it, I don't really remember it anymore...
For treeviews:
My go file contains:
-t "@@ppa_tree@@CHECK%%Choice|PPA name%%editable"
which gives me 2 columns: a checkbox | and a kind of entry box, an editable text.
I don't load any rows via the go file, to keep things as simple and clear as possible.
So I end up with a table containing only the column names and their types.
Then you just add 2 buttons in Glade:
- an "ajouter_une_ligne" button with an on_clicked callback on the clicked action.
- a "supprimer_une_ligne" button with an on_clicked callback on the clicked action.
All that's left is to use the script to tie the buttons to their functions. To add a row, there are several options:
1) You simply append a row at the end of the table:
function ajouter_une_ligne { echo "TREE@@END@@ppa_tree@@False|Texte bidon editable"; }
=> Here I add a row that will be unchecked (False) and whose 2nd column will show the text "Texte bidon editable", which can then be edited by hand (cf. the go_)
2) You insert a row at a specific position:
function ajouter_une_ligne { echo "TREE@@INSERT@@ppa_tree@@2@@False|Texte bidon editable"; }
=> Here I add a row identical to case 1, but at row 2 (if there are already enough rows for it to be placed there...)
To delete a row, there are several options:
1) You know the row's content:
function supprimer_une_ligne { echo "TREE@@FINDDEL@@ppa_tree@@1@@Texte bidon editable"; }
=> This searches for the row containing "Texte bidon editable" in column number 1 (counting starts at 0), and once the row is found, deletes it.
2) You know the row number:
function supprimer_une_ligne { echo "TREE@@CELL@@ppa_tree@@2@@"; }
=> Here I tell it to replace row 2 with the text "", which means: delete the row.
As a reminder:
- The site with the explanations: https://code.google.com/p/glade2script/wiki/Commandes
- Use the examples provided in the g2s tar.gz file (for the example you're asking about: ExTreeModif)
- There is now the command: echo "HELP@@G2SCOMMANDE" (e.g. echo "HELP@@TREE@@END")
But don't hesitate to post here too.
Last edited by Hizoka (01/07/2012 at 10:50)
Offline
#1685 On 01/07/2012 at 10:57
benoitfra
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Thanks for this clear answer, I'll run some tests with the treeviews.
EDIT: Thanks Hizoka, I finally managed to create a treeview, but I get an error that doesn't seem to affect the window.
On window launch:
Traceback (most recent call last):
File "./glade2script.py", line 744, in on_treeview
arg = self.th.retourne_selection(nom)
AttributeError: 'Gui' object has no attribute 'th'
When I add a row:
Traceback (most recent call last):
File "./glade2script.py", line 2572, in TREEEND
gobject.idle_add(treeview.scroll_to_cell, num_row)
NameError: global name 'gobject' is not defined
But the row is added fine.
Last edited by benoitfra (01/07/2012 at 12:46)
Offline
#1686 On 01/07/2012 at 13:09
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hi,
For your first error, set up your Glade file so that the window is not shown at startup (in the Glade options), and put a SET@window1.show() at the beginning of your script (with a small sleep before it if necessary, depending on your PC).
For your second error, that's strange: it says gobject cannot be found??!! Do you have py-gobject installed?
To check, open a console, type python to get the Python prompt, then type import gobject.
A few explanations about the text size in the textview. You first have to create a tag, then assign it to the whole textview.
Creating the tag, red italic, named redItalic, for the textview named textview:
echo 'TEXT@@CREATETAG@@textview@@redItalic@@style=pango.STYLE_ITALIC,foreground=red'
Then assign it to the textview:
echo TEXT@@TAG@@textview@@redItalic
For the pango variables: http://developer.gnome.org/pango/stable … escription
For the tag properties: http://www.pygtk.org/docs/pygtk/class-gtktexttag.html
For the HELP command, you have to go through the FIFO. You can send commands directly into the fifo, as g2s would, or send commands to g2s, all by echoing directly into the fifo. This is useful for the HELP command, but also for trying out any command (simpler than having to add functions to the bash script and call them).
echo 'echo HELP@@TREE@@END' > /tmp/FIFO*
One last note: use the GIT version, dev branch; the HELP command, for example, only exists in that dev version.
http://code.google.com/p/glade2script/w … dPage?tm=2
Last edited by AnsuzPeorth (01/07/2012 at 13:12)
Offline
#1687 On 01/07/2012 at 14:05
Offline
#1688 On 02/07/2012 at 09:44
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
I have a bug.
From time to time, and I haven't managed to figure out why, when, or how, I end up with this:
ppa_tree = True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|
ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX@@False|ppa:|
instead of:
ppa_tree = True|ppa:hizo/logiciels|XXXXXXXX@@False|ppa:hizo/test|XXXXXXXX
I've only seen this problem with my ppa... (not with the other widgets)
Offline
#1689 On 02/07/2012 at 11:19
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
@benoitfra
To be safe, it's better to launch the "startup" commands in the background. If you have a powerful PC, a small sleep is sometimes required.
#!/bin/bash
function truc() {
}
function machin() {
}
...
...
...
function start() {
# here, everything you need to do at startup
sleep 0.5
echo SET@window.show()
echo ....
....
....
}
start &
# the end loop must be freed as quickly as possible; that's where the communication between bash and g2s happens. For that, always launch your commands in the background.
end loop ....
....
@Hizoka
Etrange comme bug !
J'utilise la commande HIZO pour récup le treeview. Si tu lance la commande HIZO (via terminal), tu as ce genre de soucis ? Car logiquement, cette commande récupère juste les lignes du tree ! Là, on voit qu'il y a répétition, why !!! Tu es sur de n'avoir que 2 lignes dans ton tree ?
Hors ligne
#1690 On 02/07/2012 at 11:49
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
The problem is that I can't manage to reproduce this bug...
Otherwise, no issue most of the time.
But yes, I am sure there are only 2 rows...
#1691 On 05/07/2012 at 10:41
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
OK, here is a really annoying bug...
At boot I load the cfg_global file:
[TOGGLE]
_dput_f = False
_dput_o = False
_dput_u = False
_dput_s = False
_auto_check = False
_auto_deb = False
_auto_script = False
_auto_up = False
_auto_del = False
_fr = True
_en = False
[COMBO]
_projet = 0
[FILECHOOSER]
_liste_projet =
[TREEVIEW]
ppa_tree =
[TEXTVIEW]
_script =
[MISC]
ppa_save =
script_save =
nom_projet =
package_val =
Then I load my specific cfg file:
[ENTRY]
_autre =
_changelog_vlogiciel =
_changelog_urgence_text =
_control_source =
_control_maintainer =
_control_mail =
_control_homepage =
_control_uploaders =
_control_depends =
_control_recommends =
_control_suggests =
_control_enhances =
_control_breaks =
_control_predepends =
_control_conflicts =
_control_description =
[TOGGLE]
_quantal = False
_precise = True
_oneiric = False
_natty = False
_lucid = False
[COMBO]
_control_architecture = 0
_control_essential = 0
_control_priority = 2
_control_section = 4
_changelog_urgence = 0
_nom_licence = 7
_sources_select = 0
_deb_select = 0
_package = 0
[SPIN]
_changelog_version = 1
[TEXTVIEW]
_changelog_text =
[MISC]
changelog_vlogiciel_save =
changelog_version_save =
changelog_text_save =
changelog_urgence_val =
control_section_val =
control_priority_val =
control_architecture_val =
control_essential_val =
nom_licence_val =
I save my specific cfg file with
echo "CONFIG@@SAVE@@@@@@${cfg}"
That works: it saves correctly.
But if I save again the same way, it mixes the 2 config files:
[ENTRY]
_autre =
_changelog_vlogiciel =
_changelog_urgence_text =
_control_source =
_control_maintainer =
_control_mail =
_control_homepage =
_control_uploaders =
_control_depends =
_control_recommends =
_control_suggests =
_control_enhances =
_control_breaks =
_control_predepends =
_control_conflicts =
_control_description =
[TOGGLE]
_quantal = False
_precise = True
_oneiric = False
_natty = False
_lucid = False
_dput_f = False
_dput_o = False
_dput_u = False
_dput_s = False
_auto_check = False
_auto_deb = False
_auto_script = False
_auto_up = False
_auto_del = False
_fr = True
_en = False
[COMBO]
_control_architecture = 0
_control_essential = 0
_control_priority = 2
_control_section = 4
_changelog_urgence = 0
_nom_licence = 7
_sources_select = 0
_deb_select = 0
_package = 0
_projet = 0
[SPIN]
_changelog_version = 1.0
[TEXTVIEW]
_changelog_text =
_control_description_suite =
_script =
[MISC]
changelog_vlogiciel_save =
changelog_version_save =
changelog_text_save =
changelog_urgence_val =
control_section_val =
control_priority_val =
control_architecture_val =
control_essential_val =
nom_licence_val =
ppa_save =
script_save =
nom_projet =
package_val =
[FILECHOOSER]
_liste_projet =
[TREEVIEW]
ppa_tree =
It does the same thing if I save with:
echo "CONFIG@@SAVE@@@@TOGGLE,_quantal@@${cfg}"
Any idea?
PS: the values have been stripped out for clarity.
#1692 On 05/07/2012 at 11:37
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hi,
That's strange, because if you load a new file, it removes the variables of the old file....
Especially since the first save works correctly, so I don't see why. Are you doing something else in between?
#1693 On 05/07/2012 at 11:54
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Git dev updated, I changed a few things
#1694 On 05/07/2012 at 12:35
#1695 On 05/07/2012 at 12:59
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Hmm, actually I realize this is not very usable...
I load the global config file, which contains variables I need regularly.
But when I load a cfg file containing other variables, the global variables are no longer known...
Isn't there a way to do this other than reloading the global file, following it with an iter, using the variable, then reloading the specific file...?
Because as it is, in the end it's useless...
Are you managing to follow me?
#1696 On 05/07/2012 at 15:41
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Errr, the behaviour you want is how it worked at the beginning; you wanted it different, like it is now... I had to rewrite everything, I am not going to start over
I will think about a solution, you never know
#1697 On 05/07/2012 at 15:59
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
I didn't ask to change the behaviour; on the contrary, the way it worked suited me fine,
because I had to redo quite a few changes precisely to adapt to the new system...
Couldn't we have a system where it looks at the config file and only saves the variables that this file contains?
And we keep the lists.
#1698 On 06/07/2012 at 10:44
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
I didn't ask to change the behaviour; on the contrary, the way it worked suited me fine smile
because I had to redo quite a few changes precisely to adapt to the new system...
Well, at the beginning I didn't load anything into the environment on load or save; you are the one who wanted loading into the environment... Anyway, the current way is more logical: you load or save a file, it is considered the default, and it is loaded into the environment.
Couldn't we have a system where it looks at the config file and only saves the variables that this file contains?
And couldn't you do something simpler: identical config files, the default one and the others that you load as needed, rather than having one default chunk and the rest in the other files?
#1699 On 06/07/2012 at 18:27
Hizoka
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
Well, the point of the thing is to have options that are global to the software (the config and options), and to have one config file per project that therefore contains all of that project's info...
If I mix everything into one, all the global options become project-specific...
I am thinking about it, but I don't really see anything, apart from running sed on every config file that isn't open in order to apply the global changes...
Pretty stupid to do that...
Another stupid idea would be to loop over all the available cfg files, do a get on the global variables and do a save on those same values...
That's still a lot of pointless work...
Isn't there a way for it to keep the various variables in memory?
So that it only does an update of the variables (adding new ones and modifying existing ones)?
If you have an idea....
Thanks in any case...
#1700 On 08/07/2012 at 12:08
AnsuzPeorth
Re: [glade2script-GTK2] Graphical interface for bash or other scripts.
If I mix everything into one, all the global options become project-specific...
I don't see why that's a problem, but hey, it's not my software, so I don't know all the ins and outs...
Anyway, I am leaving for about ten days; when I get back I will see what to do... But I think I will code you a specific module, because what you want is really special, and making it the default behaviour won't be easy, so we might as well make a module....
|
I need to detect a post_remove signal, so I have written :
from django.db.models.signals import m2m_changed

def handler1(sender, instance, action, reverse, model, pk_set, **kwargs):
    if action == 'post_remove':
        test1()  # deliberately undefined: raises NameError so I can tell the handler fired :)

m2m_changed.connect(handler1, sender=Course.subscribed.through)
If I change 'post_remove' to 'post_add' it works fine. Is this a Django bug with post_remove?
I use this model, and I switch between two values of 'subscribed' (so one is added and one is removed):
class Course(models.Model):
    name = models.CharField(max_length=30)
    subscribed = models.ManyToManyField(User, related_name='course_list', blank=True, null=True, limit_choices_to={'userprofile__status': 'student'})
I have seen a post about a Django bug; maybe it hasn't been fixed... (or it's just me ^^)
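One likely explanation (an assumption, not something confirmed in the question): when a ManyToManyField is reassigned wholesale, for example by a ModelForm save, Django of that era implemented the assignment as clear() followed by add(), so the signal fires with pre_clear/post_clear and pre_add/post_add but never post_remove. The dispatch order is easy to see in a pure-Python sketch, no Django required:

```python
# Sketch of how a wholesale m2m assignment emits signal actions: Django
# implements `course.subscribed = [...]` as clear() + add(), so the
# per-object remove actions never occur.  Names here are illustrative.
def assign(relation, new_pks, emit):
    emit("pre_clear")
    relation.clear()
    emit("post_clear")
    emit("pre_add")
    relation.update(new_pks)
    emit("post_add")

actions = []
subscribed = {1}            # stand-in for the current related pk set
assign(subscribed, {2}, actions.append)
print(actions)  # ['pre_clear', 'post_clear', 'pre_add', 'post_add']
```

If this is what is happening, calling course.subscribed.remove(user) explicitly instead of reassigning the whole field should make post_remove fire.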
|
I am trying to find some examples but no luck. Does anyone know of some examples on the net? I would like to know what it returns when it can't find the substring, and how to specify the search from start to end, which I guess is going to be 0 and -1.
You can use str.index, which raises an exception instead of returning -1:
>>> 'sdfasdf'.index('cc')
Traceback (most recent call last):
File "<pyshell#144>", line 1, in <module>
'sdfasdf'.index('cc')
ValueError: substring not found
>>> 'sdfasdf'.index('df')
1
I'm not sure what you're looking for. Do you mean:
>>> x = "Hello World"
>>> x.find('World')
6
>>> x.find('Aloha')
-1
From here:
str.find(sub[, start[, end]])
So, some examples (note that using str as a variable name shadows the built-in, which works here but is bad practice):
>>> str = "abcdefioshgoihgs sijsiojs "
>>> str.find('a')
0
>>> str.find('g')
10
>>> str.find('s',11)
15
>>> str.find('s',15)
15
>>> str.find('s',16)
17
>>> str.find('s',11,14)
-1
Honestly, this is the sort of situation where I just open up Python on the command line and start messing around:
>>> x = "Dana Larose is playing with find()"
>>> x.find("Dana")
0
>>> x.find("ana")
1
>>> x.find("La")
5
>>> x.find("La", 6)
-1
Python's interpreter makes this sort of experimentation easy. (Same goes for other languages with a similar interpreter)
Return the lowest index in the string where substring sub is found, such that sub is contained in the range [start, end]. Optional arguments start and end are interpreted as in slice notation. Return -1 if sub is not found.
From the docs.
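Pulling the answers together: the optional arguments default to start=0 and end=len(s), and -1 is the not-found sentinel. When only a yes/no answer is needed, the in operator is the more idiomatic test:

```python
s = "banana"
assert s.find("na") == 2         # lowest index of the match
assert s.find("na", 3) == 4      # search begins at index 3
assert s.find("na", 3, 5) == -1  # "na" does not fit inside s[3:5] == "an"
assert "na" in s                 # membership test when the index is not needed
print("all checks passed")
```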
If you want to search for the last occurrence of a substring, you can use rfind.
Example:
s = "Hello"
print(s.rfind('l'))
output: 3
*no import needed
stringEx.rfind(substr, beg=0, end=len(stringEx))
Try this (SEARCH_WORD, a bytes value, and file_dmp_path are assumed to be defined elsewhere):
import os

with open(file_dmp_path, 'rb') as file:
    fsize = bsize = os.path.getsize(file_dmp_path)
    word_len = len(SEARCH_WORD)
    while True:
        p = file.read(bsize).find(SEARCH_WORD)
        if p > -1:
            pos_dec = file.tell() - (bsize - p)
            file.seek(pos_dec + word_len)
            bsize = fsize - file.tell()
        if file.tell() < fsize:
            seek = file.tell() - word_len + 1
            file.seek(seek)
        else:
            break
|
I am trying to do something that appeared to be simple... I am trying to scrape company names of the Reuters list from this link:
However, I just can't access the company names! Really, after playing around with a lot of XPath queries, I still have problems accessing the table. I am trying to grab names such as "3M Company" and "Abbott Laboratories".
Here are snippets of code I have used:
import lxml.html

scrape = []
companies = []
urlbase = 'http://reuters.com/finance/markets/index?symbol=us!spx&sortBy=&sortDir=&pn='
for i in range(1, 18):  # range(1:18) is a syntax error
    url = urlbase + str(i)
    content = lxml.html.parse(url)
    item = content.xpath('XPATH HERE')
    ticker = [thing.text for thing in item]
Here are the xpaths i have been playing with:
'//*[@id="topContent"]/div/div[2]/div[1]/table/tr[2]/td[1]/a'
'//*[@id="topContent"]/div/div[2]/div[1]/table/tbody/tr[2]/td[1]/a'
'/html/body/div[3]/div[3]/div/div[2]/div/table/tbody/tr[3]/td/a'
'/html/body/div[3]/div[3]/div/div[2]/div/table/tr[3]/td/a'
I have tried accessing that one particular table with '//table[@class="dataTable sortable"]', but have not had any luck.
Can anyone help? I feel like this is something that someone who knows what they are doing will be able to fix rather quickly. Thanks!
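A frequent cause of exactly this symptom (an assumption here, since the page's markup isn't shown): browser developer tools display a <tbody> element that the served HTML may not contain, so inspector-copied XPaths containing /tbody/ match nothing in lxml. Dropping tbody and anchoring on the table's tag or class is more robust. The sketch below uses the stdlib parser so it is self-contained; the tiny table stands in for the Reuters page, and with lxml the equivalent query would be doc.xpath('//table[contains(@class, "dataTable")]//tr/td[1]/a/text()'):

```python
import xml.etree.ElementTree as ET

# Stand-in for the real page: a table with no <tbody>, as servers often send it.
html = """
<table class="dataTable sortable">
  <tr><th>Company</th></tr>
  <tr><td><a href="/3m">3M Company</a></td></tr>
  <tr><td><a href="/abt">Abbott Laboratories</a></td></tr>
</table>
"""
table = ET.fromstring(html)
# Tag-relative search: works whether or not a <tbody> wrapper is present.
names = [a.text for a in table.findall('.//tr/td/a')]
print(names)  # ['3M Company', 'Abbott Laboratories']
```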
|
Did a bit of running around today to get Django sending email via Gmail. It’s simple once you figure it out.
If you’re running 0.96, upgrade to the latest development version or apply the patch from ticket #2897. 0.96 does not support TLS, which Gmail requires. Then add the appropriate values to settings.py:
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'youremail@gmail.com'
EMAIL_HOST_PASSWORD = 'yourpassword'
EMAIL_PORT = 587
You can use the shell to test it:
>>> from django.core.mail import send_mail
>>> send_mail('Test', 'This is a test', 'youremail@gmail.com', ['youremail@somewhere.com'])
Edit: Bryan commented that send_mail is deprecated. Use EmailMessage instead:
>>> from django.core.mail import EmailMessage
>>> email = EmailMessage('Hello', 'World', to=['youremail@somewhere.com'])
>>> email.send()
|
I am using Python to generate some data and have some code like this:
import random

num = 0
for i in range(6):
    for j in range(6):
        num = random.randint(0, 7)
        # some code here
Instead of producing fresh random numbers each time, it just makes one set of random numbers and then repeats that sequence for the next nine sets (e.g. [1,2,5,1,0,0], [1,2,5,1,0,0], ...). When I run this code again later in the program, it gives me a new set of 6 random numbers, but then repeats it for the next nine sets.
What can I do to prevent this from happening?
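The usual culprit for this symptom (an assumption, since the seeding code isn't shown in the question) is calling random.seed() with the same value before every set, which resets the generator and replays the identical sequence. Seeding once at program start, or not at all, avoids it:

```python
import random

# Re-seeding with the same value replays the exact same sequence:
random.seed(42)
first = [random.randint(0, 7) for _ in range(6)]
random.seed(42)                     # generator reset: the bug in miniature
second = [random.randint(0, 7) for _ in range(6)]
assert first == second

# Seed once and keep drawing: consecutive sets are independent draws.
random.seed(42)
a = [random.randint(0, 7) for _ in range(6)]
b = [random.randint(0, 7) for _ in range(6)]
print(a, b)  # two different-looking sets (equal only with probability 8**-6)
```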
|
laurent
Codecs and proprietary packages: PLF repositories
Hello,
Allow me to copy here the announcement that keyes made on his blog (because I am lazy, but also because he explains all of this very well)
It's done: the PLF repository for Ubuntu is finally available! The purpose of this repository is to host the contentious but very useful packages that cannot go into the official repositories because they are not quite legal or suffer from copyright or patent problems!
Few packages are available for now, but it's a start, and we are only waiting for you to contribute! For this launch you can install the w32codecs (binary codecs from the Windows world that allow you to play most video formats such as DivX, MPEG, Real, ...), Sun's JRE and SDK (Java), and libdvdcss, which makes it possible to play DVDs on Linux!
To enable the repositories, it's easy: just add these lines to your /etc/apt/sources.list file:
deb http://antesis.freecontrib.org/mirrors/ubuntu/plf/ breezy free non-free
deb-src http://antesis.freecontrib.org/mirrors/ubuntu/plf/ breezy free non-free
You can update your package list with sudo apt-get update and install the available packages by typing, for example, sudo apt-get install libdvdcss2 w32codecs sun-j2re1.5!
Report any problems you run into on our mailing list, and don't hesitate to subscribe if you want to contribute! Also worth a look: our wiki page.
keyes' blog: http://placelibre.ath.cx/keyes/index.php
Laurent, a little Belgian exploring the ice floe
dawar
Re: Codecs and proprietary packages: PLF repositories
Darn, no reply to this post? And yet, as good news goes, this is good news!!! A little bump
If there is no solution, it's because there is no problem (Shadoks motto)
AlexandreP
Re: Codecs and proprietary packages: PLF repositories
It's done: the PLF repository for Ubuntu is finally available! The purpose of this repository is to host the contentious but very useful packages that cannot go into the official repositories because they are not quite legal or suffer from copyright or patent problems!
It might be worth pointing out that "not quite legal or suffering from copyright or patent problems" refers to technologies patented in the United States: their use is illegal there because of this (which is also why these packages cannot be part of the official repositories), but since those patents do not apply in France and in many other countries around the world, their use is legal in those countries.
"The capacity to learn is a gift; the ability to learn is a talent; the willingness to learn is a choice." -Frank Herbert
93.8% of people are able to make up statistics without providing any studies to back them up.
Cakeman
Re: Codecs and proprietary packages: PLF repositories
Ah, I admit that's an important clarification.
Can we keep these repositories enabled permanently, or is it better to enable them only when needed?
laurent
Re: Codecs and proprietary packages: PLF repositories
You can leave them permanently; they only contain the codecs and such, and they are compiled for Breezy.
ShaLouZa
Re: Codecs and proprietary packages: PLF repositories
Only slight problem: they are down all the time.
"First they ignore you, then they laugh at you, then they fight you, then you win." Gandhi
laurent
Re: Codecs and proprietary packages: PLF repositories
er... not for me (this 21/10 at 9:55):
Réception de : 10 http://antesis.freecontrib.org breezy/free Packages [546B]
Réception de : 11 http://antesis.freecontrib.org breezy/non-free Packages [1119B]
Réception de : 12 http://antesis.freecontrib.org breezy/free Sources [316B]
Réception de : 13 http://antesis.freecontrib.org breezy/non-free Sources [288B]
61,7ko réceptionnés en 1s (39,4ko/s)
+ download and update of libdvdcss2
kwakosaure
Re: Codecs and proprietary packages: PLF repositories
Excellent news!
I will finally be able to play my DVDs on Breezy.
Ubuntu users the world over are going to use these repositories, so it's no surprise it's a bit slow.
ShaLouZa
Re: Codecs and proprietary packages: PLF repositories
er... not for me (this 21/10 at 9:55)
Well, for me it is, still, always, with these repositories:
deb http://antesis.freecontrib.org/mirrors/ubuntu/plf/ breezy free non-free
deb-src http://antesis.freecontrib.org/mirrors/ubuntu/plf/ breezy free non-free
The repository could not be contacted blah blah blah ...
Yes, I also think it's a load problem, kwakosaure. At least I hope so ....
saVTRonic
Re: Codecs and proprietary packages: PLF repositories
Excellent, a repository that will quickly become as indispensable as it already is for the Mandriva distribution, thanks
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
Finally... It really is annoying:
You buy a computer and a perfectly good DVD drive, you install (K)Ubuntu, or SUSE, MDK... and then you still have to go through the pain of installing codecs offered grudgingly, as if you were a thief, just to play duly bought and taxed retail DVDs! Or music.
It really is maddening!
If it's forbidden in the USA (which I couldn't care less about), how do they manage to play DVDs on a Linux computer without being treated as criminals?
8.04 nvidia gnome beryl
LR
Re: Codecs and proprietary packages: PLF repositories
I take it that was a rant
I hope it's aimed at the DVD industry and not at Ubuntu
What you could perhaps do is install the contentious bits, make a copy of your DVD, and take it back to the shop saying it doesn't work on your computer. If everybody does that, maybe it will get things moving.
Well, we can always dream
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
Oh, I have nothing against Ubuntu of course, otherwise I wouldn't use it, I'm not a masochist!
But if an active forum like this one didn't exist, how would we manage? Because if it's just for word processing and surfing...
dawar
Re: Codecs and proprietary packages: PLF repositories
If it's forbidden in the USA (which I couldn't care less about), how do they manage to play DVDs on a Linux computer without being treated as criminals?
What's more, it may soon be forbidden in Europe and therefore in France: http://eucd.info/
And nobody cares, glued to reality TV... What a lousy world...
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
As for copying a DVD on Linux... put your tracksuit on, because you'll be sweating before you find something to rip your DVD properly...
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
Well, never mind, the fun is precisely that it's a bit rough around the edges; if everything worked too well we couldn't tinker with our machines any more, and I think that would bother quite a few people, no?
LR
Re: Codecs and proprietary packages: PLF repositories
Yeah, it's true I haven't yet managed to copy a DVD on my Ubuntu
But back to your original point: the problem is that you want to use proprietary things that Ubuntu doesn't have the right to redistribute. So you necessarily have to install them yourself.
Sure, it's annoying, but the only solution to that is a paid distribution (Mandriva?) which has bought the right to redistribute those things.
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
Well, Mandriva, I've tried it:
You pay to install it (with the famous proprietary software)
You pay to maintain it (the Club)
You don't pay to uninstall it...
dawar
Re: Codecs and proprietary packages: PLF repositories
Well, Mandriva, I've tried it:
You pay to install it (with the famous proprietary software)
You pay to maintain it (the Club)
You don't pay to uninstall it...
Don't talk nonsense!
I used Mandrake for years, back when it wasn't yet called Mandriva. The ISOs are free, the updates are free, and indeed I didn't pay to install Ubuntu over my Mandrake 10.1...
Mandriva is free software only; if you buy a boxed set you get proprietary software as a "bonus", but Ubuntu would be perfectly entitled to take Mandriva's installer, which is free like everything designed by that company.
And PLF actually comes from the Mandrake world, and redistributes what is in the PowerPacks, and even more, for free.
Last edited by dawar (21/10/2005 at 15:10)
LR
Re: Codecs and proprietary packages: PLF repositories
You don't pay to uninstall it...
That's something at least
But well... you can't have everything, don't dream. If you don't want to pay anything you have two choices: theft or free software.
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
By the way, a big thank-you to laurent for his post, that's a great one!
jolemagnifique
Re: Codecs and proprietary packages: PLF repositories
Dawar, you have to explain this to me:
Where are, for example, the Nvidia drivers in the free version of Mandriva 2006?
dawar
Re: Codecs and proprietary packages: PLF repositories
They were distributed on Thac's RPMs, among others.
Besides, I don't understand how Ubuntu can distribute the Nvidia drivers and Mandriva can't... Same for Flash, for that matter.
<troll>Could Mandriva be more free than Ubuntu?</troll>
Last edited by dawar (21/10/2005 at 15:41)
etiennez
Re: Codecs and proprietary packages: PLF repositories
Mandriva could; they don't do it for the free, 100% libre version because part of their business model is based on distributing proprietary software (a little extra comfort for the people who pay).
Excerpt from the copyright file distributed with the NVIDIA packages:
A: Not every Linux distribution uses rpm, and NVIDIA wanted a single
solution that would work across all Linux distributions. As indicated
in the NVIDIA Software License, Linux distributions are welcome to
repackage and redistribute the NVIDIA Linux driver in whatever package
format they wish.
Furthermore, an email from NVIDIA:
Greetings, Randall! Comments below:
On 30 Jul 2003, Randall Donald wrote:
> To whom it may concern,
>
> My name is Randall Donald and I am the maintainer for the Debian
> downloader packages nvidia-glx-src and nvidia-kernel-src.
> As stated in your license and the README file
> ( "As indicated in the NVIDIA Software License, Linux distributions
> are welcome to repackage and redistribute the NVIDIA Linux driver in
> whatever package format they wish." )
> I wish to include packages containing the Linux driver files in the Debian archive.
> I'd like to know if it is legally permitted to distribute binary kernel modules
> compiled from the NVIDIA kernel module source and Debian kernel headers.
This is fine; thanks for asking.
> I am also wondering if the "No Separation of Components" clause
> ( No Separation of Components. The SOFTWARE is licensed as a
> single product. Its component parts may not be separated for use
> on more than one computer, nor otherwise used separately from the
> other parts.) applies to splitting the glx driver and kernel module source into
> multiple binary packages.
This is also fine. I believe this section of the license was
intended to prevent users from doing things like using our Windows
control panel with a competitor's display driver (that's not actually
possible, but you get the idea...). In the case of separating the
driver into a glx package and a kernel package (like we used to
do ourselves), this is simply a packaging issue; of course users
will use the packages together when they install.
Please feel free to redistribute the NVIDIA graphics driver.
Thank you for doing this for the NVIDIA+Debian community!
- Andy
I'm a good boy.
keyes
Re: Codecs and proprietary packages: PLF repositories
And you are all welcome to contribute (creating packages, but above all testing!).
Right now RealPlayer, Avidemux and divx4linux are ready but need to be tested!
It all happens here: http://wiki.ubuntu-fr.org/doc/plf
|
I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:
Say we have a simple array like this:
a = array([1, 0, 0, 0])
I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:
a[1:] = a[0:3]
This would get the following result:
a = array([1, 1, 1, 1])
Or something like this:
a[1:] = 2*a[:3]
# a = [1,2,4,8]
To illustrate further, I want the following kind of behaviour:
for i in range(len(a)):
    if i == 0 or i+1 == len(a): continue
    a[i+1] = a[i]
Except I want the speed of numpy.
The default behavior of numpy is to take a copy of the slice, so what I actually get is this:
a = array([1, 1, 0, 0])
I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side.
Am I dreaming or is this magic possible?
Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions. I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes.
The algorithm is this:
while not converged:
    for i in range(len(u[:,0])):
        for j in range(len(u[0,:])):
            # skip over boundary entries, i,j == 0 or len(u)
            u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])
Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it in loops you would copy the array, then update one array from the copied array. However, Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, so there is no need for a copy; the loop essentially 'knows', since the array is re-evaluated after each single element update. That is to say, every time we look up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous loop iteration will be there.
I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:
u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now, numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in Python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Right? Then every time numpy loops, that slice will 'update', or really just replicate whatever happened in the update. To do this I need the slices on both sides of the assignment to be pointers.
Anyway, if there is some really clever person out there, that would be awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
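For what it's worth, a standard way out (not mentioned in the post, so treat it as an outside suggestion) is red-black Gauss-Seidel: colour the grid like a checkerboard, update all points of one colour, then the other. Within one colour no interior point reads another point of the same colour, so the copying behaviour of numpy slicing is harmless, yet each half-sweep sees the values freshly written by the previous one:

```python
import numpy as np

def redblack_sweep(u):
    """One Gauss-Seidel sweep over interior points, in red-black order."""
    ii, jj = np.indices(u.shape)
    interior = np.zeros(u.shape, dtype=bool)
    interior[1:-1, 1:-1] = True
    for parity in (0, 1):            # red points first, then black
        mask = interior & ((ii + jj) % 2 == parity)
        # np.roll aligns each point with its four neighbours; the wrap-around
        # values only land on boundary points, which the mask excludes.
        neighbours = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = 0.25 * neighbours[mask]
    return u

# Toy run: fixed boundary of 1.0 on the top edge, zero elsewhere.
u = np.zeros((6, 6))
u[0, :] = 1.0
for _ in range(200):
    redblack_sweep(u)
```

After convergence every interior point equals the average of its four neighbours, which is exactly the discrete harmonic property the update is iterating towards.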
|
In my trial test case, I want to run scripts from my source tree. Trial changes the working directory, so simple relative paths don't work. In practice, Trial's temporary directory is inside the source tree, but assuming that to be the case seems suboptimal. I.e., I could do:
def source_file(p):
    return os.path.join('..', p)
Is there a better way?
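One common answer (a general Python technique, not Trial-specific, so consider it a sketch): anchor paths to the test module's own location via __file__, which does not change when Trial changes the working directory. The one-level-up layout mirrors the question's '..' example and is an assumption about the tree:

```python
import os

# Directory containing this module: stable regardless of the CWD Trial sets.
HERE = os.path.dirname(os.path.abspath(__file__))

def source_file(p):
    # Assumes the scripts live one directory above the test module, as in
    # the question's relative-path example.
    return os.path.normpath(os.path.join(HERE, os.pardir, p))

print(source_file('setup.py'))  # an absolute path, wherever the tests run from
```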
|
pontiac76
[solved] Google OK but no internet with Ubuntu 12.04
Hello,
Yesterday I posted a message about my inability to reach the internet, except for Google, with my brand-new installation of Ubuntu 12.04 LTS on a second hard drive of my desktop PC.
Seeing no answer, I reread the forum rules and tried to follow their method, but I still haven't found a solution, and I thank in advance the kind soul who will be able to help me.
So I performed this installation with a verified ISO image and, as a last resort, I started over with the official version bought online. My connection doesn't work with the live CD either.
I went through the forum and the documentation and, after a few manipulations, I got a connection back... which I lost again on reboot, and I no longer remember what I must have done by chance that worked.
My PC runs on an Intel Pentium 4 with 1 GB of RAM; it works fine for browsing the net under Windows XP. When I display the connection information in Network Manager, which tells me my wired link is active, all the necessary addresses are listed.
While browsing the forum, someone suggested a few commands for a self-diagnosis, and I submit the results below:
jean-marc@jeanmarc-desktop:~$ sudo lshw -C network
[sudo] password for jean-marc:
*-network
description: Ethernet interface
produit: 190 Ethernet Adapter
fabriquant: Silicon Integrated Systems [SiS]
identifiant matériel: 4
information bus: pci@0000:00:04.0
nom logique: eth0
version: 00
numéro de série: 00:13:d3:c6:0f:e8
taille: 100Mbit/s
capacité: 100Mbit/s
bits: 32 bits
horloge: 33MHz
fonctionnalités: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=sis190 driverversion=1.4 duplex=full ip=192.168.1.31 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
ressources: irq:19 mémoire:febfbc00-febfbc7f portE/S:cc00(taille=128)
jean-marc@jeanmarc-desktop:~$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:13:d3:c6:0f:e8
inet adr:192.168.1.31 Bcast:192.168.1.255 Masque:255.255.255.0
adr inet6: fe80::213:d3ff:fec6:fe8/64 Scope:Lien
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Packets reçus:330 erreurs:223 :4 overruns:0 frame:223
TX packets:426 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 lg file transmission:1000
Octets reçus:55441 (55.4 KB) Octets transmis:56392 (56.3 KB)
Interruption:19 Adresse de base:0xdead
lo Link encap:Boucle locale
inet adr:127.0.0.1 Masque:255.0.0.0
adr inet6: ::1/128 Scope:Hôte
UP LOOPBACK RUNNING MTU:16436 Metric:1
Packets reçus:130 erreurs:0 :0 overruns:0 frame:0
TX packets:130 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 lg file transmission:0
Octets reçus:9798 (9.7 KB) Octets transmis:9798 (9.7 KB)
jean-marc@jeanmarc-desktop:~$ cat /etc/network/interfaces
auto lo
iface lo inet loopback
jean-marc@jeanmarc-desktop:~$ nm-tool
NetworkManager Tool
State: connected (global)
- Device: eth0 [SFR_BOX] ------------------------------------------------------
Type: Wired
Driver: sis190
State: connected
Default: yes
HW Address: 00:13:D3:C6:0F:E8
Capabilities:
Carrier Detect: yes
Speed: 100 Mb/s
Wired Properties
Carrier: on
IPv4 Settings:
Address: 192.168.1.31
Prefix: 24 (255.255.255.0)
Gateway: 192.168.1.1
DNS: 192.168.1.1
I did notice that there are errors in the packet transmission shown by ifconfig, but I don't know how to interpret or correct them.
Thanks in advance to the kind terminal wizards.
Last edited by pontiac76 (14/08/2012, 17:05)
Offline
Korak
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello,
Open a terminal and post the output of the command:
ping www.google.be -c 5
Then of the command:
ping www.allocine.fr -c 5
OS: Ubuntu 14.04 64-bit + Windows 8.1 64-bit dual-boot (UEFI BIOS, Secure Boot enabled, GPT partition table)
HP Pavilion g7-2335sb laptop: Processor: AMD A4-4300M APU Graphics card: AMD Radeon HD 7420G Memory: 6 GB RAM
I am a Parrain-Linux
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Thank you very much for looking into my problem; here is what the console says:
jean-marc@jeanmarc-desktop:~$ ping google.be -c 5
PING google.be (173.194.34.56) 56(84) bytes of data.
64 bytes from par03s03-in-f24.1e100.net (173.194.34.56): icmp_req=1 ttl=56 time=1.76 ms
64 bytes from par03s03-in-f24.1e100.net (173.194.34.56): icmp_req=2 ttl=56 time=2.38 ms
64 bytes from par03s03-in-f24.1e100.net (173.194.34.56): icmp_req=3 ttl=56 time=2.32 ms
64 bytes from par03s03-in-f24.1e100.net (173.194.34.56): icmp_req=4 ttl=56 time=2.19 ms
64 bytes from par03s03-in-f24.1e100.net (173.194.34.56): icmp_req=5 ttl=56 time=2.34 ms
--- google.be ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.767/2.201/2.384/0.234 ms
jean-marc@jeanmarc-desktop:~$ ping www.allocine.fr -c 5
PING a1758.w7.akamai.net (77.67.11.106) 56(84) bytes of data.
64 bytes from hosted-by.illuminati.es (77.67.11.106): icmp_req=1 ttl=56 time=1.91 ms
64 bytes from hosted-by.illuminati.es (77.67.11.106): icmp_req=2 ttl=56 time=1.87 ms
64 bytes from hosted-by.illuminati.es (77.67.11.106): icmp_req=3 ttl=56 time=2.27 ms
64 bytes from hosted-by.illuminati.es (77.67.11.106): icmp_req=4 ttl=56 time=2.05 ms
64 bytes from hosted-by.illuminati.es (77.67.11.106): icmp_req=5 ttl=56 time=2.37 ms
--- a1758.w7.akamai.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.870/2.098/2.376/0.198 ms
jean-marc@jeanmarc-desktop:~$
With thanks
Offline
Korak
Re: [solved] Google OK but no internet with Ubuntu 12.04
OK, you can reach both sites.
Open Firefox (or another browser) and enter the address:
Tell us whether the site displays correctly.
Same thing with the address:
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
I connect with Firefox, since without a connection I couldn't install any other browsers.
The www.google.be page displays, and even all the links shown in the menu bar at the top. If I do a search in the Google bar, I do get results, but then I cannot go to any of the links.
The www.allocine.fr page does not display, and the tab's loading icon spins forever... so there you have it...
Last edited by pontiac76 (10/08/2012, 14:00)
Offline
Korak
Re: [solved] Google OK but no internet with Ubuntu 12.04
You wouldn't have changed your box's settings by any chance?
Such as the firewall or parental controls?
Are you using a proxy?
Last edited by Korak (10/08/2012, 14:02)
Offline
cracoucasse
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello pontiac76, unfortunately I have had exactly the same problem as you for several months and cannot find any solution. I was also asked whether I had touched the firewall or parental controls (which is not the case), and then... no more replies. This afternoon I am going to try installing a new browser (Opera) to see whether that changes anything. I am rather skeptical, since on top of this problem I also cannot run updates or get the Software Center to work. I'll keep you posted...
Offline
Korak
Re: [solved] Google OK but no internet with Ubuntu 12.04
I was also asked whether I had touched the firewall or parental controls (which is not the case), and then... no more replies.
I'm the one who asked you that question.
And I still don't have a solution, which is why I didn't reply further.
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello again,
I just set up a static IP for my PC, as I had seen advised on the forum. My box is a Neufbox from SFR.
Why does the same PC, with the same network card and the same static IP configuration, work with Windows XP and not with Ubuntu, even though Ubuntu reports the connection as active?
I don't know what a proxy is.
In the meantime I'm going to fully reboot my box, and I'll keep you posted.
I have a USB key set up with eboost to boost Windows' memory. In Ubuntu I eject it; could that play a role?
Thanks for all the trouble taken
Jean-Marc
Offline
Hors ligne
Korak
Re : [résolu]Google Ok mais pas internet avec Ubuntu 12.04
J'ai juste installé une ip fixe pour mon pc, comme je l'avais vu en conseil dans le forum.
L'IP fixe m'a déjà posé problème sous Ubuntu.
Depuis, je n'utilise plus que le DHCP.
Fais la même chose et tiens-nous au courant.
OS: Ubuntu 14.04 64 bits + Windows 8.1 64 bits en dualboot (BIOS UEFI, Secure Boot activé et table de partitions GPT)
PC portable HP Pavilion g7-2335sb: Processeur: AMD A4-4300M APU Carte graphique: AMD Radeon HD 7420G Mémoire vive: 6 Go RAM
Je suis Parrain-Linux
Hors ligne
pontiac76
Re : [résolu]Google Ok mais pas internet avec Ubuntu 12.04
Bon merci,
Je vais remettre le DHCP sur la box après l'avoir remise à zéro. Comme je pars en week-end ce soir, je ne vais pas redonner les résultats avant la semaine prochaine.
Bon week-end à toi Korak
Hors ligne
cracoucasse
Re : [résolu]Google Ok mais pas internet avec Ubuntu 12.04
pas de changement après avoir installé Opéra. En ce qui concerne ma connexion, qu'elle soit configurée en dhcp (automatique) ou en ip fixe, rien ne change, même après un hard reboot de la box ou l'utilisation d'autres dns...
Hors ligne
pontiac76
Re : [résolu]Google Ok mais pas internet avec Ubuntu 12.04
Effectivement cracoucasse, je n'y crois pas trop non plus...ça va devenir un vrai dilemme, mais il faut croire dans nos belles étoiles, beaucoup plus expérimentées que nous sous Linux, même si on gère pas mal de soucis sous windows.
Moi, j'y crois, cela a fonctionné sauf que je ne me rappelle plus la manip magique !
Bon week-end à mes suiveurs(veuses)...
Jean-Marc
Hors ligne
Francine34
Re: [solved] Google OK but no internet with Ubuntu 12.04
I have the same problem. I have a Freebox, and in my Free account I disabled the WoL proxy option; for my internet connection I checked "no proxy", but that didn't help. I can access my mail and browse Google Maps as much as I want, but nothing else. No other site, no updates, no Software Center...
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello,
After thinking about possible solutions all weekend, I chose to restore my SFR box to factory settings, unplugging all the Ethernet cables and doing a full reset. Restart and connection check... everything works under Windows, and under Ubuntu nothing but Google.
I also disabled the proxy server option in Firefox and unchecked all the filtering offered by my box. I have no parental controls.
I'm not getting anywhere and I don't understand. I was already looking forward to using a new operating system, but it's starting to do my head in...
So for now all I have is soul-searching, and I'm out of ideas to solve my problem, which is apparently shared. I'm going to keep digging around the web and the Ubuntu forum.
If I find something, I won't fail to contribute it to all my "peers".
Regards to all
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Here I am finally reinvigorated: I found my problem, well, almost...
I found this on the forum:
when the internet has been used from XP, after a reboot into Ubuntu there is no more internet access, even though the connections look active. I then have to shut the computer down, cut the power, wait for the LEDs to go out, and restart into Ubuntu, and everything is back in order.
Without being vital, this annoyance deserves to be solved; it only appeared recently, I didn't notice exactly when, and I can't link it to any particular action... In short, I'm swimming, or rather floundering!
I did the same manipulation: shut the PC down, cut the power, restart under Ubuntu... and it works!
I ran the Ubuntu updates. I rebooted into XP and then back into Ubuntu: nothing again. I redo the previous manipulation and everything works again, a miracle.
What does the PC keep in memory that prevents reconnection, and that it loses if you cut the power completely???
I haven't found the origin of this annoyance, but the main thing is being able to run the Ubuntu updates, surf, the dream, basically!
Well, thanks to all of you who, through your questions, research and answers, allowed me to get the machine running again
Offline
ajourd83
Re: [solved] Google OK but no internet with Ubuntu 12.04
hello
on the same PC, internet access works with XP
I have several Linux installs on a second disk
under 11.10, internet access works
as soon as I move up to 12.04, never any internet access with that version
message: network disconnected.
I of course keep access with XP and my 11.10 Linux installs
if anyone has an idea, thanks in advance
alain, 63 years old
on Linux since 2000
Offline
Animaju
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello
this had helped me http://forum.ubuntu-fr.org/viewtopic.ph … 1#p9649361
in particular
Hello,
Installing the linux-firmware-nonfree package should be enough to solve the problem.
for that you need to connect over a wired link
hoping this helps you too
regards
Offline
cracoucasse
Re: [solved] Google OK but no internet with Ubuntu 12.04
Here I am finally reinvigorated: I found my problem, well, almost...
I found this on the forum:
Sophrolo wrote:
when the internet has been used from XP, after a reboot into Ubuntu there is no more internet access, even though the connections look active. I then have to shut the computer down, cut the power, wait for the LEDs to go out, and restart into Ubuntu, and everything is back in order.
Without being vital, this annoyance deserves to be solved; it only appeared recently, I didn't notice exactly when, and I can't link it to any particular action... In short, I'm swimming, or rather floundering!
I did the same manipulation: shut the PC down, cut the power, restart under Ubuntu... and it works!
I ran the Ubuntu updates. I rebooted into XP and then back into Ubuntu: nothing again. I redo the previous manipulation and everything works again, a miracle.
What does the PC keep in memory that prevents reconnection, and that it loses if you cut the power completely???
I haven't found the origin of this annoyance, but the main thing is being able to run the Ubuntu updates, surf, the dream, basically!
Well, thanks to all of you who, through your questions, research and answers, allowed me to get the machine running again
Unbelievable but true, it works!!! A thousand thanks pontiac76 for this inexplicable... but so effective tip. Relieved not to have to waste my time under W7 anymore. Thanks again and good evening
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
You're welcome, but I remain curious to know where this oddity comes from!
Does the network card have a memory?
Maybe a knowledgeable Ubuntu user will have an explanation, and this detail can be added to the documentation...
In the meantime, happy surfing with GNU/Linux
Offline
LRDP
Re: [solved] Google OK but no internet with Ubuntu 12.04
For extra safety:
Alt+F2: in the run dialog, launch gconf-editor, click apps, then gnome-power-manager, then general: check in the right-hand pane whether the "network sleep" line is ticked; if not, tick it. This encourages GNOME to keep the network active after a shutdown or reboot
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello,
I installed gconf-editor, but I cannot find "gnome-power-manager" under the suggested path.
Otherwise, I think the solution seems close. Thanks
Offline
kamware
Re: [solved] Google OK but no internet with Ubuntu 12.04
Good evening everyone,
I have exactly the same problem!
Network connection OK
Internet KO except Google
Ping KO
Update and upgrade KO
When I boot Ubuntu 12.04 in generic mode there is no more problem, at least one time out of 2 or 3; as soon as I boot in PAE mode, the same problem again.
I don't understand anything anymore
Do you have a solution for me please?
PS: I have performed all the manipulations above but nothing works
Last edited by kamware (23/08/2012, 19:02)
:-)
Offline
wilson125
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello
I don't think we can really add [solved] to this issue ^^
Indeed, I have just installed the latest version of Ubuntu on my dad's desktop PC, praising the wonders of this intuitive operating system... except I have the same issue as described above, without ever going into Windows first (which, by the way, is no longer available, since I replaced it with Ubuntu)
Does anyone have this problem without dual-booting?
Better yet, does anyone have a lead on a solution?
Thanks
Offline
pontiac76
Re: [solved] Google OK but no internet with Ubuntu 12.04
Hello,
I added [solved] because the "magic" manipulation worked for me: shut down, cut power to the tower, and restart with Ubuntu.
Now, it's true this must be a headache for the experts, everyone going on intuition without really finding the bug. Everyone is bending over backwards, and that is a spirit of solidarity I salute; I think one day a solution will be found.
Good luck with the search
Offline
ramsz — 2013-03-03T07:09:13-05:00 — #1
Hi there, I've got the following problem with some code I've come across.
Code:
<script language="javascript">
function checkAge()
{
/* the minimum age you want to allow in */
var min_age = 16;
/* change "age_form" to whatever your form has for a name="..." */
var year = parseInt(document.forms["age_form"]["year"].value);
var month = parseInt(document.forms["age_form"]["month"].value) - 1;
var day = parseInt(document.forms["age_form"]["day"].value);
var theirDate = new Date((year + min_age), month, day);
var today = new Date;
if ( (today.getTime() - theirDate.getTime()) < 0) {
alert("You're too young!");
return false;
}
else {
return true;
}
}
</script>
The code above works: when the form is submitted, it checks for a minimum age of 16, as you can see.
The problem I've got is that it doesn't look at the day / month given.
By default the lists are set to a value of 00.
I would like to add some additional JS code to check whether the day and month have been given.
If not, an alert box with a message should pop up.
How would that be achieved? Any advice is more than welcome since I'm a JS noob. Thanks!
pullo — 2013-03-03T08:31:59-05:00 — #2
Hi Ramsz,
Welcome to the forums
Something like:
if (year == 0 || month == 0 || day == 0){
alert("Please enter something sensible");
}
should do the trick.
ramsz — 2013-03-03T13:36:39-05:00 — #3
YAY! Thanks also for the quick reply, you rule. And you're the reason I've chosen to stay around on this website and not any other.
Final solution:
<script language="javascript">
function checkAge()
{
/* the minimum age you want to allow in */
var min_age = 16;
/* change "age_form" to whatever your form has for a name="..." */
var year = parseInt(document.forms["age_form"]["year"].value);
var month = parseInt(document.forms["age_form"]["month"].value);
var day = parseInt(document.forms["age_form"]["day"].value);
var theirDate = new Date((year + min_age), month, day);
var today = new Date;
if (month == 0){
alert("..and your month is?");
return false;
}
if (day == 0){
alert("Your birthday is still missing!");
return false;
}
if ( (today.getTime() - theirDate.getTime()) < 0) {
alert("You're too young to visit us.");
return false;
}
else {
return true;
}
}
</script>
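One caveat with zero-based checks like the ones above: they confirm that *something* was picked, but not that the combination is a real calendar date (Feb 30, April 31, and so on). A hedged sketch of a stricter check follows; the `isValidDate` helper is hypothetical, not part of the form code above. It exploits the fact that JavaScript's `Date` constructor silently rolls invalid dates forward.

```javascript
// Hypothetical helper: validate that a year/month/day combination is a real
// calendar date by round-tripping it through Date. JavaScript months run
// 0-11, so we subtract 1 from the human-readable month here.
function isValidDate(year, month, day) {
  var d = new Date(year, month - 1, day);
  // If the inputs overflowed (e.g. Feb 30 becomes Mar 2), the round-tripped
  // values won't match the originals.
  return d.getFullYear() === year &&
         d.getMonth() === month - 1 &&
         d.getDate() === day;
}

isValidDate(1996, 2, 29); // true  (leap year)
isValidDate(1997, 2, 29); // false (Feb 1997 had 28 days)
```

A check like this could be called from `checkAge()` before the age comparison, alongside the existing zero checks.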
ramsz — 2013-03-20T07:31:25-04:00 — #4
Hi there, I'm back again, with a problem. The page works fine in all modern browsers, but my client wants it to work in ancient IE7.
How could this be achieved? Currently it doesn't check: you press Check and it just moves on to the website without checking the age.
My final code, for the entire index page, can be seen here: http://pastebin.com/ZW9eV4T7
Thanks!
paul_wilkins — 2013-03-24T23:33:01-04:00 — #5
The problem is happening because you are attempting to access the SELECT element incorrectly. It has no value property. Modern web browsers set the value property, but not IE7 or earlier.
To correctly access the select value, means getting the selectedIndex of the field, and using that number to access the options array, so that you can get the text of that option.
The long way of doing that is with:
var year = parseInt(document.forms["age_form"]["year"].options[document.forms["age_form"]["year"].selectedIndex].text)
A better way is to save the year field to a variable, so that you can more easily make use of it.
var yearField = document.forms["age_form"]["year"],
year = parseInt(yearField.options[yearField.selectedIndex].text),
monthField = document.forms["age_form"]["month"],
month = parseInt(monthField.options[monthField.selectedIndex].text),
dayField = document.forms["age_form"]["day"],
day = parseInt(dayField.options[dayField.selectedIndex].text);
And even better, is to reduce the amount of duplication, and use a function to get the selected value.
function getSelectedValue(form, fieldName) {
var field = document.forms[form][fieldName];
return field.options[field.selectedIndex].text;
}
var year = parseInt(getSelectedValue('age_form', 'year')),
month = parseInt(getSelectedValue('age_form', 'month')),
day = parseInt(getSelectedValue('age_form', 'day'));
paul_wilkins — 2013-03-24T23:51:46-04:00 — #6
Further on from here, I would fix some things up, by moving the script out of the head and to the end of the body instead.
Then, I would fix how the form is being accessed, because the name attribute should only be used for naming form fields. The form itself should be identified by an id attribute instead.
<form id="age_form" action="..." method="post">
To which I would then remove any inline events, and use scripting to perform those instead.
<input type="submit" name="_send_date_" value="Check"> <!-- the id="submit" and onClick="return checkAge()" attributes are removed -->
document.getElementById('age_form').onsubmit = checkAge;
Which will then allow me to access the form more correctly, by using an object that refers directly to the form itself, instead of just some text:
function getSelectedValue(form, fieldName) {
var field = form.elements[fieldName];
return field.options[field.selectedIndex].text;
}
How do we get that direct reference to the form? We can do that quite easily by making good use of the this keyword, which refers to the element that the scripting event was assigned to.
function checkAge() {
var form = this, // The this keyword refers directly to the form, when scripting is used to assign the event function
min_age = 16,
year = parseInt(getSelectedValue(form, 'year')),
month = parseInt(getSelectedValue(form, 'month')),
day = parseInt(getSelectedValue(form, 'day')),
...
But that's just matters of style, which help to make things more flexible for yourself when later maintenance occurs.
The important thing is that you understand why the initial code wasn't working, and how to fix things.
paul_wilkins — 2013-03-24T23:53:41-04:00 — #7
By the way, be very very careful with this code:
theirDate = new Date((year + min_age), month, day);
The month value ranges from 0 to 11, not from 1 to 12.
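A quick sketch to make that pitfall concrete:

```javascript
// Month is zero-based: 0 = January, 11 = December.
var feb = new Date(2013, 1, 14); // February 14th, not January 14th
feb.getMonth(); // 1

// Passing a human-readable month without subtracting 1 shifts the date:
// month 12 overflows past December into January of the following year.
var oops = new Date(2013, 12, 1);
oops.getFullYear(); // 2014
```

This is exactly why the first version of the form code subtracted 1 from the month, and why dropping that subtraction in the final version quietly shifts every computed date by one month.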
CoffeeScript Looping, Objects and Builds, Page 2
Looping with Comprehensions in CoffeeScript
Another interesting CoffeeScript feature is its particular approach to looping statements. In lieu of the traditional for statement you can use what the CoffeeScript documentation refers to as a comprehension. This syntax is incredibly succinct, allowing you to loop over an array using just one line of code, as demonstrated here:
positions = [
'38.894505, -77.025034',
'38.904483, -77.036048',
'38.897041, -77.023521',
'38.894505, -77.025034'
]
alert position for position in positions
The last line forms the looping statement, declaring that alert(position) should execute once for every element (stored in position) in the positions array. Once compiled, the resulting JavaScript looks like this:
(function() {
var position, positions, _i, _len;
positions = ['38.894505, -77.025034', '38.904483, -77.036048',
'38.897041, -77.023521', '38.894505, -77.025034'];
for (_i = 0, _len = positions.length; _i < _len; _i++) {
position = positions[_i];
alert(position);
}
}).call(this);
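For comparison, and this is plain hand-written JavaScript rather than anything the CoffeeScript compiler emits, the same loop can be expressed with Array's forEach method:

```javascript
var positions = [
  '38.894505, -77.025034',
  '38.904483, -77.036048',
  '38.897041, -77.023521',
  '38.894505, -77.025034'
];

// forEach invokes the callback once per element, much like the comprehension.
// alert() would be used in a browser; console.log works everywhere.
positions.forEach(function (position) {
  console.log(position);
});
```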
Creating Objects in CoffeeScript
CoffeeScript also streamlines object creation, using indentation in place of braces and commas:
locations =
embassy:
name: 'Former Soviet embassy'
latitude: 38.904483
longitude: -77.036048
museum:
name: 'International Spy Museum'
latitude: 38.897041
longitude: -77.023521
Compiling this snippet produces the following JavaScript object:
var locations;
locations = {
embassy: {
name: 'Former Soviet embassy',
latitude: 38.904483,
longitude: -77.036048
},
museum: {
name: 'International Spy Museum',
latitude: 38.897041,
longitude: -77.023521
}
};
Having Cake with Your Coffee
Beyond the enormous set of syntactical conveniences, CoffeeScript even comes bundled with its own build system, which you can use to automate a wide variety of JavaScript-related tasks. Called Cake (not to be confused with CakePHP), it lets you create build files which define tasks (conveniently written in CoffeeScript) for automating anything which strikes your fancy, such as executing JSLint, running JSUnit unit tests, or building documentation.
Where to From Here?
Although the project has attracted a great deal of interest, CoffeeScript is still in its infancy and learning resources are few and far between. Fortunately, the CoffeeScript documentation is incredibly well done. In particular I suggest reviewing the examples and resources section, which among other things contains links to several impressive projects implemented using CoffeeScript, among them the amazing tank game orona. Additionally, be sure to check out Jamis Buck's amazing CoffeeScript-driven mazes.
Are you currently doing anything with CoffeeScript? If so tell us about it in the comments!
About the Author
Jason Gilmore-- Contributing Editor, PHP--is the founder of EasyPHPWebsites.com, and author of the popular book, "Easy PHP Websites with the Zend Framework". Jason is a cofounder and speaker chair of CodeMash, a nonprofit organization tasked with hosting an annual namesake developer's conference, and was a member of the 2008 MySQL Conference speaker selection board.
Originally published on http://www.developer.com.
So I'm going to start writing every day. Hold me to that, please.
Today I'm writing about one little corner of Python, the property function. It's a builtin function, around since at least 2.2.
I used a question that involved property as one of the interview questions during a recent developer search, and I found about a 50-50 split between people who knew about it and those who didn't, without much correlation to how much Python experience the developer had. So I think it's one of those things that you only really use if you know about it - it's by no means essential, and you can go your whole Python career not knowing about it, but, well, as toolboxes go, this is a pretty nifty screwdriver.
I'm going to show basic usage, and then a couple ways to abuse it.
Basic Usage
class Person(object):
def __init__(self, first_name, last_name):
# pretty straightforward initialization, just set a couple
# attributes
self.first_name = first_name
self.last_name = last_name
def get_full_name(self):
return "%s %s" % (self.first_name, self.last_name)
full_name = property(get_full_name)
And how it works in an interactive session:
>>> me = Person("Adam", "Gomaa")
>>> me.get_full_name()
'Adam Gomaa'
>>> me.full_name
'Adam Gomaa'
Note the lack of parens on the last input line; despite the lack of an explicit call, get_full_name apparently got called anyway!
This is what the property builtin does: it allows you to set getters and setters under some name on instances of the class. I only did a getter in this case, but setters are also possible as the optional second argument:
class Person(object):
# ...
def get_full_name(self):
return "%s %s" % (self.first_name, self.last_name)
def set_full_name(self, full_name):
self.first_name, self.last_name = full_name.split()
full_name = property(get_full_name, set_full_name)
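To see both halves in action, here is a runnable sketch; the class repeats the Person definition from above so the snippet is self-contained:

```python
class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def get_full_name(self):
        return "%s %s" % (self.first_name, self.last_name)

    def set_full_name(self, full_name):
        # Naive split: assumes the name is exactly two space-separated words.
        self.first_name, self.last_name = full_name.split()

    full_name = property(get_full_name, set_full_name)

me = Person("Adam", "Gomaa")
me.full_name = "John Doe"  # calls set_full_name under the hood
me.first_name              # 'John'
me.last_name               # 'Doe'
```

Assigning to the property routes through the setter, so the two underlying attributes stay in sync with the computed one.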
Why?
Now why would you want to do this? In many cases, it's because something you used to have as an actual instance attribute (say, .url) became a computed value, based on other instance attributes. Now, you could update all your code to call .get_url() instead, and write a .set_url() if that's possible... or you could just turn .url into a property.
Be sure, though, to set some ground rules for yourself. Remember, you're simulating an attribute lookup, which, in Python, usually means looking up a value in a dictionary based on a short string key - one of the fastest, most-optimized pieces of Python. So, don't make your property getter overly complex. In general, I'd say that it should have one and only one code path - conditionals are acceptable, but you're better off without them if you really want this to be a leakless abstraction. And stay far, far away from anything that has side effects in property getters - you'll drive yourself and other programmers crazy.
(Don't worry, I'll break all these rules by the time I finish this article.)
Decorator Tricks
The signature of property is:
property(fget=None, fset=None, fdel=None, doc=None)
When you only need a getter - which is the most common use case - then the relevant part becomes:
property(getter)
And so your code, inside your class is going to look something like:
class MyObject(object):
def get_something(self):
return whatever
something = property(get_something)
In fact, at that point, you don't really care about get_something. We could even do something like:
def something(self):
return whatever
something = property(something)
Which, as long as you're using Python 2.4 or above, can be shorthanded to:
@property
def something(self):
return whatever
At that point, accessing .something will call this getter function, without the parens. Be sure not to use .something(), or you'll be calling whatever is returned (helpfully named 'whatever' in the example above), and if it's not callable, you'll get an exception.
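A minimal sketch of that pitfall; MyObject and the value 42 are made up for illustration:

```python
class MyObject(object):
    @property
    def something(self):
        # The getter just returns a plain int.
        return 42

obj = MyObject()
obj.something        # 42 -- no parens needed

try:
    obj.something()  # attempts to call the returned int 42
except TypeError:
    # 'int' object is not callable
    pass
```

The extra parens don't re-invoke the getter; they try to call whatever the getter returned, which here is not callable.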
Defining Getter, Setter, and Deleter in One
Let's take an example adapted from one of the comments at ActiveState:
def Property(func):
return property(**func())
class Person(object):
@Property
def name():
doc = "The person's name"
def fget(self):
return "%s %s" % (self.first_name, self.last_name)
def fset(self, name):
self.first_name, self.last_name = name.split()
def fdel(self):
del self.first_name
del self.last_name
return locals()
The advantage to this strategy is that it allows you to define all the arguments to property without filling up the class's namespace with _get_foo, _set_foo, and so on.
The disadvantage is that you already have 3 levels of indentation in your 'top-level' getter function code. I personally avoid it for this reason. As noted before, though, most of the time you can get away with not having a setter at all.
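For reference, here is a self-contained, runnable version of that pattern, combining the two-line Property helper with the Person class from the adapted ActiveState comment:

```python
def Property(func):
    # func is called with no arguments; it returns its locals(), which become
    # property's fget/fset/fdel/doc keyword arguments.
    return property(**func())

class Person(object):
    @Property
    def name():
        doc = "The person's name"
        def fget(self):
            return "%s %s" % (self.first_name, self.last_name)
        def fset(self, name):
            self.first_name, self.last_name = name.split()
        def fdel(self):
            del self.first_name
            del self.last_name
        return locals()

p = Person()
p.name = "Adam Gomaa"  # fset splits into first_name / last_name
p.name                 # 'Adam Gomaa'
del p.name             # fdel removes both underlying attributes
```

Note that the inner function takes no self; it runs once at class-definition time purely to build the dictionary of accessor functions.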
Example: Django URLs
For a long time, in the Django world, .get_absolute_url() was the way you got URLs for objects. django.core.urlresolvers.reverse has gained some ground - and in terms of DRYness, is a better answer - but let me tell you, reverse("appname-modelname", args=[object.slug]) is not so fun, particularly when you change that URL pattern name and have to update a few dozen {% url %} tags. (sed and grep help a lot with that, but many developers, myself included on most days, don't have enough sed-fu to use it without the man page open).
.get_absolute_url() has its own set of problems. For starters, it ties the model layer to the URL layer, kind of, sort of, maybe. But reverse() is ugly because you have to remember what the arguments are, what order they're in, and how you're supposed to get the damned things anyway. If you're doing a RESTful, hierarchy-oriented URL scheme, this can get ugly fast:
# to get '/books/a-midsummer-nights-dream/reviews/overall-nice-odd-grammar/update/':
review_update_url = reverse("books-review-update", args=[self.book.slug, self.slug])
And, there's no way of knowing that you need the book slug, as opposed to the book id, unless you go look up the URLconf. And that'll interrupt your flow, or you can just guess, and then you'll interrupt your flow anyway to debug the NoReverseMatch that you just got.
Enter property. Who's really the authority on Review's URLs? Well, technically, the URLconf. But in some ethereal sense, shouldn't Review be arbitrating its own URLs? That's what .get_absolute_url() did, and it worked pretty well; a considered, balanced 'denormalization' for the sake of convenience at the expense of DRY.
Unfortunately, get_absolute_url() has its own problem (besides the theoretical ones) - namely, there's only one of it. For each object, typically, you'll have several URLs:
/books/ - an index/list/search page
/books/new/ - A new submission form
/books/(regexp)/ - a view/update page
/books/(regexp)/delete/ - A delete/confirm delete page
/books/(regexp)/reviews/ - *another* index/list/search page
...
At first, property only makes this less painful:
from django.core.urlresolvers import reverse
class Book(models.Model):
    # for compatibility, I'm leaving .get_absolute_url()
    def get_absolute_url(self):
        return reverse("books-view", args=[self.slug])
    absolute_url = property(get_absolute_url)

    @property
    def reviews_url(self):
        return reverse("books-reviews", args=[self.slug])

    @property
    def delete_url(self):
        return reverse("books-delete", args=[self.slug])
But if you're paying attention, you'll see all these seem to follow the same pattern:
@property
def SOMETHING_url(self):
    return reverse("books-SOMETHING", args=[self.slug])
If you can see where I'm going, now's probably the time to start running away.
Now, duplicated code means "think about an abstraction." I'm about to break the first rule I set, about not making properties overly complex. In reality, a quick little method could do what I'm about to show you in a much easier way:
def object_url(self, url_type):
    if url_type == "reviews":
        return reverse(...
    elif url_type == "delete":
        # .. and so on
But come on now, that wouldn't be very much fun, would it?
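For completeness, here is one way that plain-method approach could be fleshed out. This is a sketch, not the article's code: the table-driven lookup replaces the if/elif chain, and reverse() is a stand-in stub, since the real django.core.urlresolvers.reverse needs a URLconf to run.

```python
# Hypothetical sketch of the object_url() method the article alludes to.
# URL pattern names are carried over from the article's examples.
URL_PATTERNS = {
    "absolute": "books-view",
    "reviews": "books-reviews",
    "delete": "books-delete",
}

def reverse(name, args):
    # Stand-in for django.core.urlresolvers.reverse, for illustration only.
    return "/%s/%s/" % (name, "/".join(args))

class Book(object):
    def __init__(self, slug):
        self.slug = slug

    def object_url(self, url_type):
        # Table lookup instead of if/elif; raises KeyError for
        # unknown URL types.
        return reverse(URL_PATTERNS[url_type], args=[self.slug])

book = Book("a-midsummer-nights-dream")
print(book.object_url("reviews"))
```

The dict keeps the name mapping in one place, which is the same duplication-removal the article is circling around.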
Overly Complex Properties
So instead, let's think about how we would want to define a named set of urls for a single object. We could do it with multiple FOO_url properties:
book.absolute_url
book.reviews_url
book.delete_url
Or we could make a .url property with dictionary access:
book.urls['absolute']
book.urls['reviews']
book.urls['delete']
Or heck, even with attribute access
book.urls.absolute
book.urls.reviews
book.urls.delete
At that point, your view code for redirects becomes utterly wonderful:
return HttpResponseRedirect(book.urls.reviews)
and so on. I just love that, because it's so astonishingly close to what I'm trying to say: send them to the reviews page for this book.
I'm going to show the attribute access code, but you can pretty much substitute for dictionary syntax by replacing __getattr__ with __getitem__ in this code:
def attrproperty(getter_function):
    class _Object(object):
        def __init__(self, obj):
            self.obj = obj
        def __getattr__(self, attr):
            return getter_function(self.obj, attr)
    return property(_Object)
Of course, that's still rather indented, but at least it's a library function rather than something you're putting into your model code. Usage looks something like this:
class Book(models.Model):
    @attrproperty
    def urls(self, name):
        if name == "absolute":
            urlpattern_name = "books-view"
        elif name == "reviews":
            urlpattern_name = "books-reviews"
        elif name == "delete":
            urlpattern_name = "books-delete"
        return reverse(urlpattern_name, args=[self.slug])
Thus allowing book.urls.whatever. Dictionary syntax is actually a little nicer, since you can more easily stick a variable in there (books.urls[action], for example) but I like the look of attribute access.
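Since dictionary syntax only requires swapping __getattr__ for __getitem__, the variant looks like this sketch (itemproperty is my name for it, not the article's, and the URL string is a stand-in for the reverse() call):

```python
def itemproperty(getter_function):
    # Same shape as attrproperty, but dictionary-style access:
    # obj.urls['foo'] instead of obj.urls.foo.
    class _Object(object):
        def __init__(self, obj):
            self.obj = obj
        def __getitem__(self, key):
            return getter_function(self.obj, key)
    return property(_Object)

class Book(object):
    slug = "some-book"

    @itemproperty
    def urls(self, name):
        # Stand-in for the reverse() lookup in the article's example.
        return "/books/%s/%s/" % (self.slug, name)

book = Book()
print(book.urls['reviews'])  # /books/some-book/reviews/
```

The variable-friendly access the article mentions then works directly: book.urls[action].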
Anyway, it's pretty obvious that this is a gross abuse of property. But I'm not satisfied yet.
Caching: Side-Effect Properties
Let's go back to our .FOO_url properties from before. One thing about reverse() calls is that if you have to traverse intermediary models, building urls can be, well, expensive. If Review #142 has to look up the slug of Book #21, it probably has to load that object from the database (unless your PK is also what you're using in the URL, but we can't count on that). That can make rendering an HTML page, with dozens or hundreds of links and .FOO_url accesses, kind of expensive.
But, of course, you're coding so those URLs don't change, right? So why recompute it each time? Just compute it once for each object, and throw it into a cache.
from functools import wraps
from django.core.cache import cache
def cached_property(func):
    @wraps(func)
    def _closure(self):
        cache_key = "%s.%s.%s(%s)" % (self.__class__.__module__,
                                      self.__class__.__name__,
                                      func.__name__,
                                      self.pk)
        val = cache.get(cache_key)
        if val is None:
            val = func(self)
            cache.set(cache_key, val)
        return val
    return property(_closure)
Throw that around your .FOO_url properties:
@cached_property
def reviews_url(self):
    return reverse("books-reviews", args=[self.slug])

@cached_property
def delete_url(self):
    return reverse("books-delete", args=[self.slug])
and now they'll only make DB calls the first time they're called for each object. That's certainly acceptable.
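As an aside, modern Python (3.8+) ships a per-instance, in-process variant of this idea in the standard library: functools.cached_property. Unlike the Django-cache version above it doesn't share results across processes or survive restarts, but it avoids the cache-server round trip entirely. A minimal sketch, with the URL string standing in for the reverse() call:

```python
from functools import cached_property

class Book:
    slug = "some-book"

    @cached_property
    def reviews_url(self):
        # Computed on first access, then stored on the instance;
        # stand-in for the reverse() call above.
        return "/books/%s/reviews/" % self.slug

book = Book()
first = book.reviews_url   # computed
second = book.reviews_url  # served from the instance's __dict__
```

The stdlib version stores the value in the instance's __dict__, so subsequent accesses bypass the property machinery altogether.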
You could combine this with @attrproperty to get cached obj.urls.foo access, but that will be left as an exercise for the reader.
Finally...
That's about all I have for today. While researching this post I looked into the Python descriptor protocol, which I might make the subject of a future post.
|
I'm new to Django so pardon if this is a simple question but I've had a hard time phrasing it. I've looked for an answer quite a while already.
Suppose I'm building a very simple gradebook app.
models.py (with code omitted)
class Course(models.Model):
    ...

class Student(models.Model):
    students = models.ManyToManyField(Course)
    ...

class Assignment(models.Model):
    course = models.ForeignKey(Course)
    student = models.ForeignKey(Student)
    ...
Is there a way to set the models.py up so that if I add an Assignment to a Course, it automatically associates the Assignment to all the students enrolled in the course?
|
Configuring and Managing WebLogic JDBC
In WebLogic Server, you can configure database connectivity by configuring JDBC data sources and multi data sources and then targeting or deploying the JDBC resources to servers or clusters in your WebLogic domain.
Each data source that you configure contains a pool of database connections that are created when the data source instance is created—when it is deployed or targeted, or at server startup. Applications look up a data source on the JNDI tree or in the local application context (java:comp/env), depending on how you configure and deploy the object, and then request a database connection. When finished with the connection, the application calls connection.close(), which returns the connection to the connection pool in the data source.
Figure 2-1 shows a data source and a multi data source targeted to a WebLogic Server instance.
For more information about data sources in WebLogic Server, see Configuring JDBC Data Sources.
A multi data source is an abstraction around a group of data sources that provides load balancing or failover processing between the data sources associated with it. Multi data sources are bound to the JNDI tree or local application context just like data sources are. Applications look up a multi data source on the JNDI tree or in the local application context (java:comp/env) just as they do for data sources, and then request a database connection. The multi data source determines which data source to use to satisfy the request depending on the algorithm selected in the multi data source configuration: load balancing or failover. For more information about multi data sources, see Configuring JDBC Multi Data Sources.
A key to understanding WebLogic JDBC configuration and management is that who creates a JDBC resource, and how it is created, determines how the resource is deployed and modified. Both WebLogic Administrators and programmers can create JDBC resources:
Table 2-1 lists the JDBC module types and how they can be configured and modified.
WebLogic JDBC configuration is stored in XML documents that conform to the weblogic-jdbc.xsd schema (available at http://www.bea.com/ns/weblogic/90/weblogic-jdbc.xsd). You create and manage JDBC resources either as system modules, similar to the way they were managed prior to version 9.0, or as application modules. JDBC application modules are a WebLogic-specific extension of J2EE modules and can be configured either within a J2EE application or as stand-alone modules.
When you create a JDBC resource (data source or multi data source) using the Administration Console or using the WebLogic Scripting Tool (WLST), WebLogic Server creates a JDBC module in the config/jdbc subdirectory of the domain directory, and adds a reference to the module in the domain's config.xml file. The JDBC module conforms to the weblogic-jdbc.xsd schema (available at http://www.bea.com/ns/weblogic/90/weblogic-jdbc.xsd).
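As an illustration of what such a module file can look like, here is a sketch of a data source descriptor. This is not copied from the documentation: the element names follow the weblogic-jdbc.xsd structure as I understand it, and the driver class, URL, and names are placeholders; consult the schema for the authoritative layout.

```xml
<jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90">
  <name>examples-demo</name>
  <jdbc-driver-params>
    <url>jdbc:pointbase:server://localhost/demo</url>
    <driver-name>com.pointbase.jdbc.jdbcUniversalDriver</driver-name>
    <properties>
      <property>
        <name>user</name>
        <value>examples</value>
      </property>
    </properties>
  </jdbc-driver-params>
  <jdbc-data-source-params>
    <jndi-name>examples-dataSource-demoPool</jndi-name>
  </jdbc-data-source-params>
  <jdbc-connection-pool-params>
    <initial-capacity>1</initial-capacity>
    <max-capacity>10</max-capacity>
  </jdbc-connection-pool-params>
</jdbc-data-source>
```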
JDBC resources that you configure this way are considered system modules. System modules are owned by an Administrator, who can delete, modify, or add similar resources at any time. System modules are globally available for targeting to servers and clusters configured in the domain, and therefore are available to all applications deployed on the same targets and to client applications. System modules are also accessible through JMX as JDBCSystemResourceMBeans.
Data source system modules are included in the domain's config.xml file as a JDBCSystemResource element, which includes the name of the JDBC module file and the list of target servers and clusters on which the module is deployed. Figure 2-2 shows an example of a data source listing in a config.xml file and the module that it maps to.
Similarly, multi data source system modules are included in the domain's config.xml file as a jdbc-system-resource element. The multi data source module includes a data-source-list parameter that maps to the data source modules used by the multi data source. The individual data source modules are also included in the config.xml. Figure 2-3 shows the relationship between elements in the config.xml file and the system modules in the config/jdbc directory.
In this illustration, the config.xml file lists three JDBC modules—one multi data source and the two data sources used by the multi data source, which are also listed within the multi data source module. Your application can look up any of these modules on the JNDI tree and request a database connection. If you look up the multi data source, the multi data source determines which of the other data sources to use to supply the database connection, depending on the data sources in the data-source-list parameter, the order in which the data sources are listed, and the algorithm specified in the algorithm-type parameter. For more information about multi data sources, see Configuring JDBC Multi Data Sources.
JDBC resources can also be managed as application modules, similar to standard J2EE modules. A JDBC application module is simply an XML file that conforms to the weblogic-jdbc.xsd schema and represents a data source or a multi data source.
JDBC modules can be included as part of an Enterprise Application as a packaged module. Packaged modules are bundled with an EAR or exploded EAR directory, and are referenced in all appropriate deployment descriptors, such as the weblogic-application.xml and ejb-jar.xml deployment descriptors. The JDBC module is deployed along with the enterprise application, and can be configured to be available only to the enclosing application or to all applications. Using packaged modules ensures that an application always has access to required resources and simplifies the process of moving the application into new environments. With packaged JDBC modules, you can migrate your application and the required JDBC configuration from environment to environment, such as from a testing environment to a production environment, without opening an EAR file and without extensive manual JDBC reconfiguration.
In contrast to system resource modules, JDBC modules that are packaged with an application are owned by the developer who created and packaged the module, rather than the Administrator who deploys the module. This means that the Administrator has more limited control over packaged modules. When deploying a resource module, an Administrator can change resource properties that were specified in the module, but the Administrator cannot add or delete modules. (As with other J2EE modules, deployment configuration changes for a resource module are stored in a deployment plan for the module, leaving the original module untouched.)
By definition, packaged JDBC modules are included in an enterprise application, and therefore are deployed when you deploy the enterprise application. For more information about deploying applications with packaged JDBC modules, see Deploying Applications to WebLogic Server.
A JDBC application module can also be deployed as a stand-alone resource using the weblogic.Deployer utility or the Administration Console, in which case the resource is typically available to the server or cluster targeted during the deployment process. JDBC resources deployed in this manner are called stand-alone modules and can be reconfigured using the Administration Console or a JSR-88 compliant tool, but are unavailable through JMX or WLST.
Stand-alone JDBC modules promote sharing and portability of JDBC resources. You can create a data source configuration and distribute it to other developers. Stand-alone JDBC modules can also be used to move JDBC configuration between domains, such as between the development domain and the staging domain.
For more information about JDBC application modules, see Configuring JDBC Application Modules for Deployment.
For information about deploying stand-alone JDBC modules, see "Deploying JDBC and JMS Application Modules."
All WebLogic JDBC module files must end with the -jdbc.xml suffix, such as examples-demo-jdbc.xml. WebLogic Server checks the file name when you deploy the module. If the file does not end in -jdbc.xml, the deployment will fail and the server will not boot.
When you use production redeployment (versioning) to deploy a version of an application that includes a packaged JDBC module, WebLogic Server identifies the data source defined in the JDBC module with a name in the following format:
If transactions in a retiring version of an application time out and the version of the application is then undeployed, you may have to manually resolve any pending or incomplete transactions on the data source in the retired version of the application. After a data source is undeployed (in this case, with the retired version of the application), the WebLogic Server transaction manager cannot recover pending or incomplete transactions.
In support of the new modular deployment model for JDBC resources in WebLogic Server 9.0, BEA now provides a schema for WebLogic JDBC objects: weblogic-jdbc.xsd. When you create JDBC resource modules (descriptors), the modules must conform to the schema. IDEs and other tools can validate JDBC resource modules based on the schema.
The schema is available at http://www.bea.com/ns/weblogic/90/weblogic-jdbc.xsd.
When you create JDBC resources using the Administration Console or WLST, WebLogic Server creates MBeans (Managed Beans) for each of the resources. You can then access these MBeans using JMX or the WebLogic Scripting Tool (WLST). See Developing Custom Management Utilities with JMX and WebLogic Scripting Tool for more information.
Figure 2-4 shows the hierarchy of the MBeans for JDBC objects in a WebLogic domain.
The JDBCSystemResourceMBean is a container for the JavaBeans created from a data source module. However, all JMX access for a JDBC data source is through the JDBCSystemResourceMBean. You cannot directly access the individual JavaBeans created from the data source module.
In this release, WebLogic Server JDBC supports JSR-77, which defines the J2EE Management Model. The J2EE Management Model is used for monitoring the runtime state of a J2EE Web application server and its resources. You can access the J2EE Management Model to monitor resources, including the WebLogic JDBC system as a whole, JDBC drivers loaded into memory, and JDBC data sources.
JDBCServiceRuntimeMBean—Represents the JDBC subsystem and provides methods to access the list of JDBCDriverRuntimeMBeans and JDBCDataSourceRuntimeMBeans currently available in the system.
JDBCDriverRuntimeMBean—Represents a JDBC driver that the server loaded into memory.
JDBCDataSourceRuntimeMBean—Represents a JDBC data source deployed on a server or cluster.
For more information about using the J2EE management model with WebLogic Server, see Monitoring and Managing with the J2EE Management APIs.
#----------------------------------------------------------------------
# Create JDBC
# The prefix specifies the prefix on property names.
# Example: for property "mypool.Name=mypool", the prefix would be "mypool."
#----------------------------------------------------------------------
import sys
from java.lang import System
from java.util import Properties  # needed for Properties() below

print "@@@ Starting the script ..."

global props

url = sys.argv[1]
usr = sys.argv[2]
password = sys.argv[3]

connect(usr, password, url)
edit()
startEdit()

servermb = getMBean("Servers/examplesServer")
if servermb is None:
    print '@@@ No server MBean found'
else:
    def addJDBC(prefix):
        print("")
        print("*** Creating JDBC with property prefix " + prefix)
        # Create the Connection Pool. The system resource will have a
        # generated name of <PoolName>+"-jdbc"
        myResourceName = props.getProperty(prefix+"PoolName")
        print("Here is the Resource Name: " + myResourceName)
        jdbcSystemResource = wl.create(myResourceName, "JDBCSystemResource")
        myFile = jdbcSystemResource.getDescriptorFileName()
        print("HERE IS THE JDBC FILE NAME: " + myFile)
        jdbcResource = jdbcSystemResource.getJDBCResource()
        jdbcResource.setName(props.getProperty(prefix+"PoolName"))
        # Create the DataSource Params
        dpBean = jdbcResource.getJDBCDataSourceParams()
        myName = props.getProperty(prefix+"JNDIName")
        dpBean.setJNDINames([myName])
        # Create the Driver Params
        drBean = jdbcResource.getJDBCDriverParams()
        drBean.setPassword(props.getProperty(prefix+"Password"))
        drBean.setUrl(props.getProperty(prefix+"URLName"))
        drBean.setDriverName(props.getProperty(prefix+"DriverName"))
        propBean = drBean.getProperties()
        driverProps = Properties()
        driverProps.setProperty("user", props.getProperty(prefix+"UserName"))
        e = driverProps.propertyNames()
        while e.hasMoreElements():
            propName = e.nextElement()
            myBean = propBean.createProperty(propName)
            myBean.setValue(driverProps.getProperty(propName))
        # Create the ConnectionPool Params
        ppBean = jdbcResource.getJDBCConnectionPoolParams()
        ppBean.setInitialCapacity(int(props.getProperty(prefix+"InitialCapacity")))
        ppBean.setMaxCapacity(int(props.getProperty(prefix+"MaxCapacity")))
        ppBean.setCapacityIncrement(int(props.getProperty(prefix+"CapacityIncrement")))
        if not props.getProperty(prefix+"ShrinkPeriodMinutes") == None:
            ppBean.setShrinkFrequencySeconds(int(props.getProperty(prefix+"ShrinkPeriodMinutes")))
        if not props.getProperty(prefix+"TestTableName") == None:
            ppBean.setTestTableName(props.getProperty(prefix+"TestTableName"))
        if not props.getProperty(prefix+"LoginDelaySeconds") == None:
            ppBean.setLoginDelaySeconds(int(props.getProperty(prefix+"LoginDelaySeconds")))
        # Adding KeepXaConnTillTxComplete to help with in-doubt transactions.
        xaParams = jdbcResource.getJDBCXAParams()
        xaParams.setKeepXaConnTillTxComplete(1)
        # Add Target
        jdbcSystemResource.addTarget(wl.getMBean("/Servers/examplesServer"))
.
.
.
For more information, see Navigating and Editing MBeans in the WebLogic Scripting Tool.
You can target or deploy JDBC resources to a cluster to improve the availability of cluster-hosted applications. For information about JDBC objects in a clustered environment, see "JDBC Connections" in Using WebLogic Server Clusters.
Multi data sources are supported for use in clusters. However, note that multi data sources can only use data sources in the same JVM. Multi data sources cannot use data sources from other cluster members.
|
I was recently hunting down a slightly annoying usability bug in Khweeteur, a Twitter / identi.ca client: Khweeteur can notify the user when there are new status updates, however, it wasn't overlaying the notification window on the application window, like the email client does. I spent some time investigating the problem: the fix is easy, but non-obvious, so I'm recording it here.
A notification window overlays the window whose WM_CLASS property matches the specified desktop entry (and is correctly configured in /etc/hildon-desktop/notification-groups.conf). Khweeteur was doing the following:
import dbus

bus = dbus.SystemBus()
notify = bus.get_object('org.freedesktop.Notifications',
                        '/org/freedesktop/Notifications')
iface = dbus.Interface(notify, 'org.freedesktop.Notifications')

id = 0
msg = 'New tweets'
count = 1
amount = 1
id = iface.Notify(
    'khweeteur',
    id,
    'khweeteur',
    msg,
    msg,
    ['default', 'call'],
    {
        'category': 'khweeteur-new-tweets',
        'desktop-entry': 'khweeteur',
        'dbus-callback-default':
            'net.khertan.khweeteur /net/khertan/khweeteur net.khertan.khweeteur show_now',
        'count': count,
        'amount': count,
    },
    -1,
)
This means that the notification will overlay the window whose WM_CLASS property is khweeteur. The next step was to figure out whether Khweeteur's WM_CLASS property was indeed set to khweeteur:
$ xwininfo -root -all | grep Khweeteur
0x3e0000d "Khweeteur: Home": ("__init__.py" "__init__.py") 800x424+0+56 +0+56
^ Window id                  ^ WM_CLASS (class, instance)
$ xprop -id 0x3e0000d | grep WM_CLASS
WM_CLASS(STRING) = "__init__.py", "__init__.py"
Ouch! It appears that a program's WM_CLASS is set to the name of its "binary". In this case, /usr/bin/khweeteur was just a dispatcher that executes the right command depending on the arguments. When starting the frontend, it was running a Python interpreter. Adjusting the dispatcher to not exec fixed the problem:
$ xwininfo -root -all | grep Khweeteur
0x3e00014 "khweeteur": ("khweeteur" "Khweeteur") 400x192+0+0 +0+0
0x3e0000d "Khweeteur: Home": ("khweeteur" "Khweeteur") 800x424+0+56 +0+56
|
I'm dealing with HTTPS and I want to get the HTTP headers for live.com:
import urllib2

try:
    email = "HelloWorld1234560@hotmail.com"
    response = urllib2.urlopen("https://signup.live.com/checkavail.aspx?chkavail="+email+"&tk=1258056184535&ru=http%3a%2f%2fmail.live.com%2f%3frru%3dinbox&wa=wsignin1.0&rpsnv=11&ct=1258055283&rver=6.0.5285.0&wp=MBI&wreply=http:%2F%2Fmail.live.com%2Fdefault.aspx&lc=1036&id=64855&bk=1258055288&rollrs=12&lic=1")
    print 'response headers: "%s"' % response.info()
except IOError, e:
    if hasattr(e, 'code'):      # HTTPError
        print 'http error code: ', e.code
    elif hasattr(e, 'reason'):  # URLError
        print "can't connect, reason: ", e.reason
    else:
        raise
I don't want all the information from the headers, though; I just want the Set-Cookie information.
If you're asking what the script does: it checks whether an email address is available to use on Hotmail, by reading the value of the CheckAvail= variable.
Edit:
Thanks for the help. After fixing it to get only Set-Cookie, I have another problem: the cookie I get doesn't contain CheckAvail=. I get a lot of information, but no CheckAvail=; yet after opening the URL in a browser and viewing the source, I do see it! See the picture.
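For reference, one way to pull just the Set-Cookie header out of a raw header block like the one response.info() prints is to parse it with the stdlib email module; with urllib2 you would pass str(response.info()) to it. The header values below are made up for illustration.

```python
from email.parser import Parser

def extract_set_cookie(raw_headers):
    # Parse an RFC 822-style header block and return all
    # Set-Cookie values (there can be more than one).
    msg = Parser().parsestr(raw_headers, headersonly=True)
    return msg.get_all('Set-Cookie') or []

raw = (
    "Content-Type: text/html\r\n"
    "Set-Cookie: MSPRequ=lt=1258; path=/; HttpOnly\r\n"
    "\r\n"
)
print(extract_set_cookie(raw))  # ['MSPRequ=lt=1258; path=/; HttpOnly']
```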
|
I need to write a function which receives a long string and puts each letter, with its appearance frequency in the string, into a dictionary. I've written the function below, but the problem is it doesn't ignore whitespace, numbers, etc. I've been asked to use the test symbol in string.ascii_lowercase, but I've no idea how to do it. This is my code:
def calc_freq(txt):
    dic = {}
    for letter in range(len(txt)):
        if dic.has_key(txt[letter]) == True:
            dic[txt[letter]] += 1
        else:
            dic[txt[letter]] = 1
    return dic
thanks for any help.
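One way to use the symbol in string.ascii_lowercase test mentioned in the question is to lowercase the text and skip every character that isn't an ASCII letter. A sketch (the function keeps the question's name; the lowercasing of the input is my own choice):

```python
import string

def calc_freq(txt):
    # Count only ASCII letters, ignoring whitespace, digits,
    # punctuation, etc.
    dic = {}
    for symbol in txt.lower():
        if symbol in string.ascii_lowercase:
            dic[symbol] = dic.get(symbol, 0) + 1
    return dic

print(calc_freq("Hello, World 123"))
```

dict.get with a default also removes the need for the has_key check.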
|
You could spawn a thread to do the processing. It wouldn't really have much to do with Django; the view function would need to kick off the worker thread and that's it.
If you really want a separate process, you'll need the subprocess module. But do you really need to redirect standard I/O or allow external process control?
Example:
from threading import Thread
from MySlowThing import SlowProcessingFunction # or whatever you call it
# ...
Thread(target=SlowProcessingFunction, args=(), kwargs={}).start()
I haven't done a program where I didn't want to track the threads' progress, so I don't know if this works without storing the Thread object somewhere. If you need to do that, it's pretty simple:
allThreads = []
# ...
global allThreads
thread = Thread(target=SlowProcessingFunction, args=(), kwargs={})
thread.start()
allThreads.append(thread)
You can remove threads from the list when thread.is_alive() returns False:
def cull_threads():
    global allThreads
    allThreads = [thread for thread in allThreads if thread.is_alive()]
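Putting the pieces above together, a self-contained sketch; SlowProcessingFunction here is a stand-in that just sleeps, as the real one lives in the asker's code:

```python
import time
from threading import Thread

allThreads = []

def SlowProcessingFunction():
    # Stand-in for the real long-running work.
    time.sleep(0.1)

def spawn():
    thread = Thread(target=SlowProcessingFunction, args=(), kwargs={})
    thread.start()
    allThreads.append(thread)

def cull_threads():
    global allThreads
    allThreads = [thread for thread in allThreads if thread.is_alive()]

spawn()
print(len(allThreads))  # 1
time.sleep(0.3)         # let the worker finish
cull_threads()
print(len(allThreads))  # 0
```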
|
I have a file where each line is a name followed by several numbers (e.g. Name 10 20 30). I need to extract the numbers from each line, use them to calculate the average of those numbers, and reprint the names followed by the averages, line by line. How do I extract the numbers from the line and use them in calculations in Python?
If you have a file that looks like this:
you can extract the numbers using the str.isdigit() method:
info = file("info.txt").read()
info = info.split("\n")
average = 0
count = 0
for item in info:
    if item.isdigit():
        count = count + 1
        average = average + int(item)
print average/count
The result is 51
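To handle lines of the Name 10 20 30 form from the original question, a sketch that splits each line into a name and its numbers. The function name and sample data are mine; in real use you would pass it the open file, e.g. averages(open("info.txt")).

```python
def averages(lines):
    # Each line: a name followed by numbers, e.g. "Name 10 20 30".
    results = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        name = parts[0]
        numbers = [int(p) for p in parts[1:] if p.lstrip('-').isdigit()]
        if numbers:
            results.append((name, sum(numbers) / float(len(numbers))))
    return results

data = ["Alice 10 20 30", "Bob 5 15"]
for name, avg in averages(data):
    print("%s %g" % (name, avg))
# Alice 20
# Bob 10
```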
|
Copyright © 2004-2005 Henrik Brix Andersen
Revision History
Revision 0.2.1 2005-04-08 HBA
Added link to the pppd utility
Revision 0.2.0 2005-04-07 HBA
Major rewrite
Revision 0.1.4 2004-09-10 HBA
Updated email address
Revision 0.1.3 2004-05-07 HBA
Revision 0.1.2 2004-05-07 HBA
Revision 0.1.1 2004-05-04 HBA
Boosted USB connection speed to 460800 baud
Revision 0.1.0 2004-05-04 HBA
Initial revision
Abstract
This document describes the process of getting a Motorola A920 cell phone to work with GNU/Linux. So far this document covers how to synchronize the contacts, calendar and tasks through the USB cable using Multisync.
The Docbook XML source of this document is also available.
Table of Contents
I own a Motorola A920 cell phone. The phone comes with a USB cable and software to synchronize its contacts, calendar and tasks to a PC. Unfortunately the software supplied only works with Microsoft Windows, so you're on your own if you're using another operating system, say for instance GNU/Linux.
Fortunately I've managed to get the A920 to work with GNU/Linux as well. Read on for all the juicy details.
This document is copyrighted © 2004-2005 by Henrik Brix Andersen. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html.
No liability for the contents of this document can be accepted. Use the concepts, examples and information at your own risk. There may be errors and inaccuracies that could be damaging to your system. Proceed with caution; although this is highly unlikely, the author(s) do not take any responsibility.
All copyrights are held by their respective owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Naming of particular products or brands should not be seen as endorsements.
Feedback is most certainly welcome for this document. Send your additions, comments and criticisms to the following email address: <henrik@brixandersen.dk>.
The Motorola A920 is a 3G cellular phone with integrated PDA (or was it the other way around?). It is based on the Symbian operating system and includes lots of nice features such as an Assisted GPS (A-GPS) and an MP3 player. It is very similar in function and design to the Motorola A925, the Motorola A1000 and the Motorola A1010, and the instructions found in this document should work with any of those models.
The A920 needs to be able to resolve the address wsockhost.mrouter when connected to the PC. Otherwise it will drop the connection after approximately 90 seconds. To allow the A920 to perform the DNS lookup we need a DNS server running on the PC. The following instructions apply to the dnsmasq DNS server.
Configuration is simple. Add the contents of Figure 1 to /etc/hosts and start dnsmasqd.
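Judging from the verification output in Figure 2, the /etc/hosts entry of Figure 1 presumably maps wsockhost.mrouter to the PC side of the PPP link:

```
169.254.1.68    wsockhost.mrouter
```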
You should verify the DNS server configuration as shown in Figure 2. Of course, any utility for performing DNS lookups will do, I've only used the host command as example since most GNU/Linux distributions ship with it by default.
# host wsockhost.mrouter 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

wsockhost.mrouter has address 169.254.1.68
Figure 2. Verifying the DNS server configuration
The A920 comes with an USB cable for connecting it to a PC. This section describes how to set up the connection between the PC and the A920 using the USB cable.
To connect the A920 to your PC using the USB cable you need the kernel options listed below. You will also need the user-space pppd utility.
PPP (point-to-point protocol) support (CONFIG_PPP)
PPP support for async serial ports (CONFIG_PPP_ASYNC)
Support for Host-side USB (CONFIG_USB)
USB Modem (CDC ACM) support (CONFIG_USB_ACM)
To get pppd to establish a local connection to the A920 you need to add the contents of Figure 3 to /etc/ppp/peers/A920-USB-local.
/dev/ttyACM0 460800 crtscts local lock noauth passive nomagic ms-dns 169.254.1.68 169.254.1.68:169.254.1.1
Figure 3. /etc/ppp/peers/A920-USB-local
You should now be able to establish a local connection from the PC to the A920 as shown in Figure 4. Don't forget to initialize the Desktop Suite on the phone as well. The Desktop Suite on the A920 should be configured to establish link using USB.
# pppd call A920-USB-local nodetach
Using interface ppp0
Connect: ppp0 <--> /dev/ttyACM0
local IP address 169.254.1.68
remote IP address 169.254.1.1
Figure 4. Establishing a connection to the A920
Check the output to see if the connection was successful and verify that the A920 recognizes the connection to the PC. You should also verify the connection by pinging the A920 from the PC as shown in Figure 5.
# ping -c 3 a920
PING a920 (169.254.1.1) 56(84) bytes of data.
64 bytes from a920 (169.254.1.1): icmp_seq=1 ttl=69 time=8.73 ms
64 bytes from a920 (169.254.1.1): icmp_seq=2 ttl=69 time=7.56 ms
64 bytes from a920 (169.254.1.1): icmp_seq=3 ttl=69 time=8.75 ms
--- a920 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 7.564/8.348/8.750/0.559 ms
Figure 5. Verifying the connection using ping
Your A920 might need a software upgrade for the infrared port to work. If the infrared port settings are not listed in the control panel, contact your service provider for a software upgrade.
To connect the A920 to your PC using the infrared port you need the kernel options listed below. You will also need the user-space pppd utility.
PPP (point-to-point protocol) support (CONFIG_PPP)
PPP support for async serial ports (CONFIG_PPP_ASYNC)
IrDA (infrared) subsystem support (CONFIG_IRDA)
IrCOMM protocol (CONFIG_IRCOMM)
Cache last LSAP (CONFIG_IRDA_CACHE_LAST_LSAP)
Fast RRs (low latency) (CONFIG_IRDA_FAST_RR)
You will also need to enable the specific device driver for your PC's infrared port. The following example uses the NSC PC87108/PC87338 device driver as this is the one needed by my IBM ThinkPad X31.
NSC PC87108/PC87338 (CONFIG_NSC_FIR)
To have the kernel module recognize the hardware correctly I had to add the contents of Figure 6 to /etc/modules.conf.
alias irda0 nsc-ircc
options nsc-ircc dongle_id=0x9
Figure 6. Infrared port related entries in /etc/modules.conf
You will need to execute the command irattach irda0 to set up the infrared port. The irattach utility is provided by the Linux IrDA Project.
To get pppd to establish a local connection to the A920 you need to add the contents of Figure 7 to /etc/ppp/peers/A920-IrDA-local.
/dev/ircomm0 115200 crtscts local lock noauth passive nomagic ms-dns 169.254.1.68 169.254.1.68:169.254.1.1
Figure 7. /etc/ppp/peers/A920-IrDA-local
You should now be able to establish a connection from the PC to the A920 as shown in Figure 8. Don't forget to initialize the Desktop Suite on the phone as well. The Desktop Suite on the A920 should be configured to establish link using Infrared.
# pppd call A920-IrDA-local nodetach
Using interface ppp0
Connect: ppp0 <--> /dev/ircomm0
local IP address 169.254.1.68
remote IP address 169.254.1.1
Figure 8. Establishing a connection to the A920
Check the output to see if the connection was successful and verify that the A920 recognizes the connection to the PC. You should also verify the connection by pinging the A920 from the PC as shown in Figure 9.
# ping -c 3 a920
PING a920 (169.254.1.1) 56(84) bytes of data.
64 bytes from a920 (169.254.1.1): icmp_seq=1 ttl=69 time=201 ms
64 bytes from a920 (169.254.1.1): icmp_seq=2 ttl=69 time=342 ms
64 bytes from a920 (169.254.1.1): icmp_seq=3 ttl=69 time=138 ms
--- a920 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 138.444/227.499/342.110/85.087 ms
Figure 9. Verifying the connection using ping
I recommend setting the synchronization pair to synchronize on changes as shown in Figure 11.
You need to set up the Multisync SyncML plug-in to accept connections through http as shown in Figure 12.
The names of the SyncML databases can be seen as shown in Figure 13. These names are needed for configuring the SyncML client on the A920.
Make sure you enable the Interpret UTC as local time option as shown in Figure 14.
I recommend setting up Multisync to automatically synchronize as shown in Figure 15.
Configure the SyncML client on the A920 using the settings shown in Figure 16.
Server address: http://169.254.1.68:5079 Username: syncml Password: ************ Transport Protocol: HTTP Use transport login: no
Figure 16. A920 SyncML client configuration
Add the synchronization tasks using the same database names as specified in the Multisync SyncML plug-in, see Figure 13.
After establishing a connection between the A920 and the PC, as shown earlier in this document, you should be able to synchronize the A920 using Multisync.
The Motorola A920 works perfectly well with GNU/Linux. Given the instructions in this HOWTO it should be pretty straightforward to set up the kernel and Multisync for synchronization, as long as you have a bit of experience with GNU/Linux.
This section contains a list of hopefully helpful links to various documentation related to the A920 and/or GNU/Linux which I've collected during the writing of this document.
Miscellaneous
P3nfs: Symbian to UNIX/Linux communication program
|
Under the Hood #2:
Internal / External Links, the CSS3 Way
If, unlike me, you don't have a sixth sense that tells you when a link will be internal or external, or will open in a new window, it is increasingly common practice to add little images to links to show that they lead to external websites. The benefit is that one can queue up external links in background tabs, or avoid things one is not interested in.
For example, external links in Wikipedia:
There are a number of things I wish to alert my users to on this website through the link scheme.
Float over each of the example links for a demonstration.
An internal link, to another page on this site
The dotted line is used to signify a weaker hyperlink that does not break the boundary of this website.
A link to an external website
A normal hyperlink is used to represent the interlinked 'Web, going from one site to another.
The image is not shown until the mouse is placed over the link so as to reduce visual clutter and to not add stutter to the reading rhythm. The image juts out to the left, so as to not cause the text to spasm about and break the reading flow, nor is it placed to the right where it may cover up the next word, and thus also break the reading flow.
Some links to popular websites, and a redirect
Adding the favicon to external links for some sites will help the user recognise what sort of content they will be led to. In some cases this will better help them decide if the link is useful or a waste of time, and what sort of context is meant when the link text is not descriptive of what it is.
A link to an email address, and an application protocol
Links to other protocols may cause programs on the user's computer to launch, or may require them to copy & paste the link into a piece of software.
A direct link to a file, rather than a webpage
Links will behave differently when leading directly to a file. Users need to be made aware of this, especially if they want to avoid PDF links, or to proceed with due caution. A link to a file is a lot like an enclosure in an email; it should be distinctly marked with some icon to show its type. As a bonus, if you're using Firefox, that icon above will be the one from your computer for that filetype.
Beginning With Good Markup
Although all of this can be done with no markup other than the href, the CSS would be 10× larger. It could be done with a set of CSS classes, but this website has no classes, and ultimately these link effects should be zero-maintenance and automatic.
We can reduce the length of selectors needed by using a few HTML attributes that have been around for ages.
<a href="http://…" rel="external" />
The rel attribute defines the relationship between this page and the linked page.
In this case we are stating that the page linked to is external.
<a href="a.pdf" type="application/pdf" />
The same as when specifying a stylesheet or javascript file in the <head>, you can provide the mime-type of the content being linked to.
These are both clean and meaningful ways to mark up links in a way that robots can understand, and that doesn't rely upon class names, which tie you to your design and won't work interchangeably with syndicated content on other people's sites.
I could type these extra attributes manually as I write my articles, but I knew that I'd miss one or two here and there, and I'd prefer something a bit more automatic.
Automatic Markup With PHP
Here is some code that searches for links starting with "http" and adds rel="external" to the tag. Internal links are relative (e.g. href="?blog") and don't contain my domain name, but the code can be easily modified to look for links that don't start with your own domain name, if your CMS always writes full URLs - even to internal pages.
//add `rel="external"` to outside links:
$content = preg_replace_callback (
//this finds links that begin with a protocol, e.g. "http"
'/<a[^>]*href="(?:[a-z]+):[^"]+"[^>]*>/',
//this does the substitution, either adding a rel attribute, or appending "external" to an existing one
create_function ('$m',
'return (strpos($m[0],"rel=\"")!==false)'. //does 'rel="..."' already exist?
'?str_replace("rel=\"","rel=\"external ",$m[0])'. //insert "external" into `rel`
':str_replace("<a ","<a rel=\"external\" ",$m[0]);' //add `rel="external"`
), $content
);
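For readers more at home in Python, the same rewrite can be sketched with re.sub and a callback. This is a rough port of the idea, not the code this site runs:

```python
import re

def add_rel_external(html):
    """Add rel="external" to anchors whose href starts with a protocol,
    appending to an existing rel attribute when one is present."""
    def fix(match):
        tag = match.group(0)
        if 'rel="' in tag:
            # insert "external" at the front of the existing rel value
            return tag.replace('rel="', 'rel="external ', 1)
        return tag.replace('<a ', '<a rel="external" ', 1)
    # matches opening <a> tags whose href begins with a protocol, e.g. "http:"
    return re.sub(r'<a[^>]*href="[a-z]+:[^"]+"[^>]*>', fix, html)

print(add_rel_external('<a href="http://example.com">x</a>'))
# <a rel="external" href="http://example.com">x</a>
```

Relative links such as href="?blog" contain no protocol, so the pattern leaves them untouched, mirroring the PHP version.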
The second example here finds links that lead directly to a file, rather than a page:
//add 'type="mime/type"' to links in the content:
$content = preg_replace_callback (
//this regex finds links to the listed file types, and adds 'type="mime/type"'
'/<a([^>]*)href="([^"]+)\.(gif|jpg|png|pdf|zip|exe)"([^>]*)>/',
//this does the insertion, recreating the link, with the added attribute
create_function ('$m',
'return "<a type=\"".mimeType($m[3])."\"${m[1]}href=\"${m[2]}.${m[3]}\"${m[4]}>";'
), $content
);
//the "mimeType" function called above, which returns a mime-type from a file-extension
function mimeType ($extension) {
switch ($extension) {
case 'gif': return 'image/gif'; break;
case 'jpg': return 'image/jpeg'; break;
case 'png': return 'image/png'; break;
case 'pdf': return 'application/pdf'; break;
case 'zip': return 'application/zip'; break;
case 'exe': return 'application/octet-stream'; break;
}
}
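As an aside, if you were porting this to Python, the standard mimetypes module already carries the same extension-to-type table, so the switch would not need to be hard-coded:

```python
import mimetypes

# guess_type covers the extensions the switch above hard-codes
# (it returns a (type, encoding) tuple; the type is the first element)
for name in ("a.gif", "a.jpg", "a.png", "a.pdf", "a.zip"):
    print(name, "->", mimetypes.guess_type(name)[0])
```

Note that the type reported for .exe varies by platform, which is one reason the original falls back to the generic application/octet-stream.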
The CSS
Internal Links
For links that are not external… which is easy now that they are automatically marked up by the PHP.
(A description of the CSS3 selectors used can be found here)
a:not([rel~="external"]) {
text-decoration: none; border-bottom: dotted 1px;
}
A colour is not given on the border-bottom property so as to keep the existing link colour - even the user's chosen browser link colour, if the link colour has not been overridden anywhere.
Links to Files
We will cover these next, as the CSS for external links makes reference to these.
a[type] {padding: 0 5px 0 25px; text-decoration: none;
/* start with the default "unknown file-type" icon */
background: #dedede url("/design/icons/page_white.png") no-repeat 5px 50%;
/* rounded, borders. the bottom border is removed for if the link is internal */
-moz-border-radius: 4px; -webkit-border-radius: 4px; border-bottom: 0 !important;}
a[type]:hover {background-color: #eea;}
/* these icons © Mark James, <famfamfam.com/lab/icons/silk> */
a[href$=".gif"], a[href$=".jpg"], a[href$=".png"]
{background-image: url("/design/icons/page_white_picture.png");}
a[href$=".pdf"] {background-image: url("/design/icons/page_white_acrobat.png");}
a[href$=".zip"] {background-image: url("/design/icons/page_white_zip.png");}
a[href$=".exe"] {background-image: url("/design/icons/application_xp_terminal.png");}
/* Firefox users will get their own native icons from their OS.
I’m sure this can be done in Safari, but I don’t know how */
@-moz-document url-prefix() {
/* `@moz-document` isolates the following CSS for Firefox (gecko) only */
/* get the "unknown file-type" icon from the OS */
a[type] {background-image: url("moz-icon://.?size=16");}
/* and the other file type icons */
a[href$=".gif"] {background-image: url("moz-icon://.GIF?size=16");}
a[href$=".jpg"] {background-image: url("moz-icon://.JPG?size=16");}
a[href$=".png"] {background-image: url("moz-icon://.PNG?size=16");}
a[href$=".pdf"] {background-image: url("moz-icon://.PDF?size=16");}
a[href$=".zip"] {background-image: url("moz-icon://.ZIP?size=16");}
a[href$=".exe"] {background-image: url("moz-icon://.EXE?size=16");}
}
External Links
External links already have the underline as part of the defaults.
/* set the default external-link icon (this icon taken from Wikipedia) */
a[rel~="external"]:not([type]) {
background: url('/design/icons/external.png') no-repeat 0 50%;
}
/* hide the icon when not hovering on the link (whilst keeping the icon on standby)
`:not([type])` is needed to not break the file-links which already have an image */
a[rel~="external"]:not([type]):not(:hover) {
background-image: none;
}
/* when you hover over the link, jut the favicon over the left side */
a[rel~="external"]:not([type]):hover {
/* `background-color` is set to prevent text clashing with heavily transparent favicons, like Google’s */
margin-left: -18px; padding-left: 18px; background-color: #fcfcfc;
}
/* some favicons for common websites I link to.
the `:hover` is only required by Safari to prevent it from preloading these */
a[href*="apple."]:hover {background-image: url('http://apple.com/favicon.ico');}
a[href*="archive.org"]:hover {background-image: url('http://web.archive.org/favicon.ico');}
a[href*="deviantart."]:hover {background-image: url('http://i.deviantart.com/icons/favicon.png');}
a[href*="google."]:hover {background-image: url('http://google.com/favicon.ico');}
a[href*="osnews."]:hover {background-image: url('http://osnews.com/favicon.ico');}
a[href*="php.net"]:hover {background-image: url('http://static.php.net/www.php.net/favicon.ico');}
a[href*="slashdot."]:hover {background-image: url('http://slashdot.org/favicon.ico');}
a[href*="tinyurl."]:hover {background-image: url('http://tinyurl.com/favicon.ico');}
a[href*="wikipedia."]:hover {background-image: url('http://en.wikipedia.org/favicon.ico');}
a[href*="youtube."]:hover {background-image: url('http://s.ytimg.com/yt/favicon-vfl1123.ico');}
/* icons for other protocols */
a[href^="mailto:"]:hover {background-image: url('/design/icons/email.png');}
a[href^="itms:"]:hover {background-image: url('/design/icons/itms.png');}
Enjoy.
Limitations
Requires the :not CSS3 selector, available in Firefox, Safari & Opera 9.5.
As you can imagine, this does not work in Internet Explorer. But then as you know, I don't care.
|
Does anyone know of a function/idiom (in any language) that takes a set and returns two or more subsets, determined by one or more predicates?
It is easy to do this in an imperative style, e.g.:
a, b = [], []
for x in range(10):
    if even(x):
        a.append(x)
    else:
        b.append(x)
or slightly better:
[a.append(x) if even(x) else b.append(x) for x in range(10)]
Since a list comprehension returns a single list based upon a single predicate (and is effectively just a map), I think there ought to be something that splits the input into 2 or more bins based on either a binary predicate or multiple predicates.
The neatest syntax I can come up with is:
>>> def partition(iterable, *functions):
...     return [filter(f, iterable) for f in functions]
...
>>> partition(range(10), lambda x: bool(x%2), lambda x: x == 2)
[[1, 3, 5, 7, 9], [2]]
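If the multiple passes over the input (one filter per predicate) are a concern, a single-pass variant is possible. This sketch of mine gives each item to the first matching predicate and adds a final catch-all bin for items that match nothing:

```python
def partition(iterable, *predicates):
    """One bin per predicate (first match wins), plus a final
    catch-all bin for items that match no predicate."""
    bins = [[] for _ in range(len(predicates) + 1)]
    for item in iterable:
        for i, pred in enumerate(predicates):
            if pred(item):
                bins[i].append(item)
                break
        else:
            bins[-1].append(item)  # no predicate matched
    return bins

print(partition(range(10), lambda x: bool(x % 2), lambda x: x == 2))
# [[1, 3, 5, 7, 9], [2], [0, 4, 6, 8]]
```

Unlike the filter-based version, an item can land in only one bin, which may or may not be what you want.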
|
I have a coordinate storage list A in Python, where each entry is a [row, col, value] triple storing the non-zero values.
How can I get the list of all the row indexes? I expected A[0:][0] to work, since print A[0:] prints the whole list, but print A[0:][0] only prints A[0].
The reason I ask is the efficient calculation of the number of non-zero values in each row, i.e. iterating over range(0, n) where n is the total number of rows. This should be much cheaper than my current way of for i in range(0, n): for j in A: ....
Something like:
c = []
# for the total number of rows
for i in range(0, n):
    # record row i if it has exactly one entry in the coordinate storage list
    if A[0:][0].count(i) == 1: c.append(i)
return c
Over:
c = []
# for the total number of rows
for i in range(0, n):
    # store the index and initialize the count to 0
    c.append([i, 0])
    # for every entry in the coordinate storage list
    for j in A:
        # if the row index (j[0]) equals the current row i, increment the count
        if j[0] == i:
            c[i][1] += 1
return c
EDIT:
Using Junuxx's answer, this question and this post, I came up with the following (for returning the singleton rows), which is much faster for my current size of A than my original attempt. However, it still grows with the number of rows and columns. I wonder if it's possible to not have to iterate over A but only up to n?
from collections import Counter

# get the list of all row indexes from the coordinate storage list
row_indexes = [entry[0] for entry in A]
# create a dictionary {row index: count}
c = Counter(row_indexes)
# return only the row indexes whose count == 1
return [index for index, count in c.items() if count == 1]
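To see the whole pipeline end to end, here is the Counter approach run on a small hypothetical A (the data is made up for illustration). Note that the Counter pass over A is already a single pass, and the dictionary lookup replaces the per-row scan:

```python
from collections import Counter

# hypothetical coordinate storage list of [row, col, value] entries
A = [[0, 1, 3.5], [0, 4, 1.2], [1, 2, 7.0], [2, 0, 9.9], [2, 3, 4.4]]

counts = Counter(entry[0] for entry in A)            # one pass over A
singletons = [row for row, n in counts.items() if n == 1]
print(sorted(singletons))
# [1]
```

One pass over A is also a lower bound here: rows absent from A have count zero, so no structure built without touching every entry can distinguish singleton rows.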
|
The trickiest part of coding the callback mechanism was implementing the function serialization.
Here is my function serialization class, which produces a JSON string:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from hashlib import md5
import marshal
import json
import types
class FuncMarshal:
    @classmethod
    def md5sum(cls, serialized):
        hasher = md5()
        hasher.update(serialized)
        return hasher.hexdigest()

    @classmethod
    def serialize(cls, function):
        if callable(function):
            serialized = json.dumps((
                marshal.dumps(function.func_code),
                function.func_name,
                function.func_defaults)
            )
            return (cls.md5sum(serialized), serialized)

    @classmethod
    def deserialize(cls, encoded):
        (code, name, defaults) = json.loads(encoded)
        code = marshal.loads(code)
        return (cls.md5sum(encoded), \
                types.FunctionType(code, globals=globals(), name=str(name), \
                                   argdefs=defaults))
I also added an md5sum of the resulting object for validation, but this part is not strictly needed.
Please let me know if you have a better approach for doing this kind of serialization.
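For what it's worth, here is how the same idea can be sketched in Python 3, where the func_code/func_name attributes became __code__/__name__ and marshal's raw bytes must be base64-encoded to survive JSON. The usual caveats apply: marshal output is specific to the CPython version, and deserializing code from an untrusted source is unsafe.

```python
import base64
import hashlib
import json
import marshal
import types

def serialize(function):
    # marshal produces bytes, which JSON cannot carry directly,
    # so the code object is base64-encoded first
    payload = json.dumps({
        "code": base64.b64encode(marshal.dumps(function.__code__)).decode("ascii"),
        "name": function.__name__,
        "defaults": function.__defaults__,
    })
    return hashlib.md5(payload.encode()).hexdigest(), payload

def deserialize(payload):
    obj = json.loads(payload)
    code = marshal.loads(base64.b64decode(obj["code"]))
    # JSON turns the defaults tuple into a list; FunctionType wants a tuple
    defaults = tuple(obj["defaults"]) if obj["defaults"] else None
    func = types.FunctionType(code, globals(), obj["name"], defaults)
    return hashlib.md5(payload.encode()).hexdigest(), func

def add(a, b=1):
    return a + b

digest, payload = serialize(add)
digest2, restored = deserialize(payload)
print(digest == digest2, restored(2), restored(2, 5))
# True 3 7
```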
|
Darel
[scripts] - 4 little homemade scripts!
Hi!
When I get bored on my Tux box, I tinker a bit.
The result: 3 little scripts for Nautilus (http://doc.ubuntu-fr.org/nautilus_scripts)!
----------------------------------------- NAUTILUS SCRIPTS -----------------------------------------
1) COMPRESS AN IMAGE
It compresses an image for sending over the Internet using ImageMagick (http://doc.ubuntu-fr.org/imagemagick).
All through a graphical interface with zenity, and in only 2 steps.
2) OPEN THE FOLDER AS "ROOT"
Nothing very original here, a small script reworked my own way.
3) OPEN A TERMINAL HERE
Same as number 2.
----------------------------------------- SCRIPT -----------------------------------------
1) RANDOM WALLPAPER
A little bash script that can be launched at startup by adding it to SESSIONS (System > Preferences > Sessions), or simply by running it normally.
The wallpapers must be in JPEG (*.jpg), BMP (*.bmp) or PNG (*.png) format, must not contain spaces in their file names, and must be placed in the "wallpers" folder in your home directory.
For the script to work, replace "user" on line 12 of the script with your username.
To download them: http://404upload.fr/fichier-0135056001206355676.html
There you go; if you have other ideas or nice improvements in mind...
Last edited by Darel (24/03/2008 at 12:53)
When society clenches its buttocks, the spaces of individual freedom shrink.
Roland Topor.
Offline
beinuo21
Re: [scripts] - 4 little homemade scripts!
I tested the image-resizing script, it's very handy! --> adopted
Thanks! ;)
Sony VAIO VGN-FZ21E: Core2duo T7250 - 2GbRam - 32Gb SSD Samsung SLC - nVidia 8400GS (eternal black-screen-at-boot problem, please help!)
Offline
Darel
Re: [scripts] - 4 little homemade scripts!
Thanks.
The only problem is that it can only process one image at a time.
(Too lazy to make it handle several images; XnView exists.)
Last edited by Darel (24/03/2008 at 12:57)
When society clenches its buttocks, the spaces of individual freedom shrink.
Roland Topor.
Offline
MrKikkeli
Re: [scripts] - 4 little homemade scripts!
In the same vein as the random wallpaper, I developed a few scripts that fetch images from a webcam to use as my wallpaper. It is an alternative to using xwinwrap (which doesn't work on my Eee PC) or to using mplayer with the rootwin option (which lets you have a video as wallpaper, but disables graphical handling of the desktop by GNOME and Nautilus).
Here is the general script (this example points at a camera overlooking Shibuya, a lively district of Tokyo):
#!/bin/sh
# Static webcam as a wallpaper in gnome v 0.20080418
# Script by MrKikkeli -04.18.08
# we kill first any other static webcam wallpaper process that is currently running,
# otherwise there will be some mean concurrency
# Make sure all your scripts' names end with webcamwallpaper.sh
for process_to_kill in `ps -ef | grep webcamwallpaper.sh | grep -v $$ | grep -v grep | awk '{print $2}'`; do
kill -1 ${process_to_kill} >> /dev/null;
done
# body
while [ 1 ]; do
rm -f /tmp/bg-*.jpg;
IMAGE=/tmp/bg-`date +%s`.jpg;
nice wget http://shibuya02.ipcam.jp/SnapshotJPEG --post-data 'Resolution=160x120&Quality=High' -O $IMAGE;
gconftool -t string -s /desktop/gnome/background/picture_filename $IMAGE;
sleep 1;
done;
The first part of the script checks that no other script of the same kind is currently running (otherwise your wallpaper will "flicker"), which lets you launch your scripts however you like. Of course this only works provided you name all your webcam scripts "blablabla-webcamwallpaper.sh".
To adapt the script to your own webcams, simply put the snapshot URL after wget, add the POST arguments after "post-data" if needed, and change the parameter after "sleep" according to the webcam's refresh rate.
Another example application: a wallpaper showing the Earth seen from space in real time (or almost; nobody has hacked the military satellites yet)
#!/bin/sh
# Static webcam as a wallpaper in gnome v 0.20080418
# Script by MrKikkeli -04.18.08
# we kill first any other static webcam wallpaper process that is currently running,
# otherwise there will be some mean concurrency
# Make sure all your scripts' names end with webcamwallpaper.sh
for process_to_kill in `ps -ef | grep webcamwallpaper.sh | grep -v $$ | grep -v grep | awk '{print $2}'`; do
kill -1 ${process_to_kill} >> /dev/null;
done
# body
while [ 1 ]; do
rm -f /tmp/bg-*jpg;
IMAGE=/tmp/bg-`date +%s`.jpg;
nice xplanet -num_times 1 -output $IMAGE -geometry 800x600 -longitude 2 -latitude 48;
gconftool -t string -s /desktop/gnome/background/picture_filename $IMAGE;
sleep 300
done;
Change the resolution (the geometry argument) and the coordinates of the viewpoint (here it points roughly at Paris) to your liking.
And to celebrate the arrival of Hardy, I'm adding a script that randomly picks which script to run, in case, like me, you end up with hundreds of cams and can no longer choose...
#!/bin/sh
# Create an array of the files
files=(`ls *webcamwallpaper.sh`)
# Get the size of the array
N=${#files[@]}
while [ 1 ]; do
# Select a random number between this range
((M=RANDOM%N))
# Get the name of this file
randomfile=`echo ${files[$M]}`
sh $randomfile &
sleep 3600
done;
Careful: without my quite knowing why, this script only runs correctly when invoked with bash, not sh.
Last edited by MrKikkeli (24/04/2008 at 19:12)
Offline
MrKikkeli
Re: [scripts] - 4 little homemade scripts!
Under Hardy, my scripts no longer work. The wallpaper goes blank before each image update. Does anyone see why?
Offline
MrKikkeli
Re: [scripts] - 4 little homemade scripts!
up
Offline
Micnight
Re: [scripts] - 4 little homemade scripts!
Sorry, I don't see an answer to your problem; however, for a post a bit further up where someone didn't know how to resize several images at once: a batch job.
Here is one of my scripts that does it... to be used with nautilus-actions and ImageMagick.
As options in nautilus-actions I set:
path: path to the script
parameters: %d %m
Here is the script:
#! /bin/bash
IFS=$'\n'
cd $1
shift
for i in $*
do
# apply the resize
convert $i -geometry 640x640 -density 150x150 $i
done
Easy to adapt for other effects ImageMagick offers, like flipping an image:
convert $i -flop $i
There you go; sorry, and I hope bumping your post at the same time will help you.
Last edited by Micnight (29/04/2008 at 18:37)
Offline
MrKikkeli
Re: [scripts] - 4 little homemade scripts!
I did some digging and finally found the origin of the problem. It is in fact a GNOME bug that has since been fixed: the wallpaper manager used not to detect when the current wallpaper file changed...
As a result, since that fix, the deletion of my file, though brief, was inevitably detected and showed up as that little "flicker".
So creating a buffer file is enough to solve the problem. Since I didn't feel like hand-editing my 80 (!!) shell scripts, I decided to turn them into one big single Python script. Here is the code:
#! /usr/bin/env python
# -*- coding: utf-8 -*-
##---------------------------------------------------
## Static webcam as a wallpaper in gnome v 0.20080430
## Script by MrKikkeli - 04.30.08
## Turned to Python because it's awesome. :)
##---------------------------------------------------
##---------------------------------------------------
## To do :
##
## - display through a discreet notification which webcam we are watching
## - turn it into a screenlet
## - integrate the xplanet background
##
##---------------------------------------------------
import os, random, urllib, time, sys
##options are :
## *Resolution would be 160x120 | 320x240 | 640x480
## *Quality can be 'Clarity', 'Standard' or 'Motion'
snapshotOption = {'Resolution': '640x480', 'Quality': 'Clarity'}
webcam_list = [["http://60.33.165.138:5080/snapshotJpeg",snapshotOption, u"LaundryMat", 0],
["http://cam30522.miemasu.net/snapshotJpeg", snapshotOption, u"Tennis Court", 0],
["http://kstc.miemasu.net/snapshotJpeg", snapshotOption, u"Tennis Court", 0],
["http://82308207.tel.netvolante.jp:8001/snapshotJpeg", snapshotOption, u"Some Appartment", 0],
["http://65.13.81.233/snapshotJpeg", snapshotOption, u"Some beach", 0],
["http://128.118.52.239/axis-cgi/jpg/image.cgi", {}, u"IST", 0],
["http://128.252.39.99/axis-cgi/jpg/image.cgi", {}, u"Some building", 0],
["http://napoliwebcam.dnsalias.com/record/current.jpg", {}, u"Napoli", 0],
["http://195.243.185.195/axis-cgi/jpg/image.cgi", {}, u"Stuttgart Airport", 0],
["http://images.ibsys.com/orl/images/weather/auto/daytonacam_640x480.jpg", {}, u"Daytona Beach", 20],
["http://shibuya02.ipcam.jp/SnapshotJPEG", snapshotOption, u"Shibuya", 0],
["http://www.acropolis.gr/webcam/acropolis.jpg", {}, u"acropolis", 60],
["http://213.253.80.123/still.jpg", {}, u"some airport", 0],
["http://www.westphalfamily.com/webcam.jpg", {}, u"altadena", 60],
["http://www.borealisbroadband.net/sheraton/sheraton1.jpg", {}, u"anchorage sheraton hotel", 10],
["http://192.102.150.10/record/current.jpg", {}, u"aquarium marina oberhausen", 0],
["http://142.22.58.150/axis-cgi/jpg/image.cgi", {}, u"aquarium", 0],
["http://125.206.34.118/SnapshotJPEG", snapshotOption, u"asakusa", 0],
["http://cam6075917.miemasu.net:50006/SnapshotJPEG", snapshotOption, u"barn", 0],
["http://biberstein.viewnetcam.com:50000/SnapshotJPEG", snapshotOption, u"Bibi's Webcam", 0],
["http://142.36.244.87:8888/SnapshotJPEG", snapshotOption, u"O Canada", 0],
["http://88.38.50.59/SnapshotJPEG", snapshotOption, u"Coast", 0],
["http://www.gotostjohn.com/live/cruzbay.jpg", {}, u"cruz bay", 30],
["http://dake.miemasu.net/snapshotJpeg", snapshotOption, u"dake Ryokan", 0],
["http://63.175.189.41/axis-cgi/jpg/image.cgi", {}, u'Deadland', 0],
["http://www.parislive.net/eiffelcam3.jpg", {}, u'Tour Eiffel', 10],
["http://www.parislive.net/eiffelwebcam1.jpg", {}, u'Tour Eiffel', 10],
["http://webmarin.com/images/wc/Camera.jpg", {}, u'Frisco Bay', 10],
["http://castrocam.net/castrocam.jpg", {}, u'Frisco Skyline',60],
["http://www.stefanome.it/current_lev.jpg", {}, u'Genova', 300],
["http://camgodovic.drsc.si/axis-cgi/jpg/image.cgi",{}, u'Godovic', 0],
["http://gunnarbu.axiscam.net/axis-cgi/jpg/image.cgi", {}, u'Gunnarbu', 0],
["http://www.ek.fi/kamera/tn_palace00.jpg", {}, u'Helsingin Tori', 0],
["http://71.254.156.56:8000/axis-cgi/jpg/image.cgi", {}, u'Hermosawave', 0],
["http://hih1.dyndns.org:81/record/current.jpg", {}, u'Hih1', 0],
["http://82.208.151.76:8000/record/current.jpg", {}, u'hotel unirea Romania',0],
["http://webcam.mmhk.cz/axis-cgi/jpg/image.cgi", {}, u'Hradek Kralove', 0],
["http://202.213.247.128/nphMotionJpeg/SnapshotJPEG", snapshotOption, u'Japanese Street', 0],
["http://211.18.192.147/nphMotionJpeg/SnapshotJPEG", snapshotOption, u'Japanese Studio', 0],
["http://www.shokoku-ji.or.jp/kinkakuji/webcam/fullsize.jpg", snapshotOption, u'Kinkakuji', 0],
["http://kohama2.miemasu.net:50000/SnapshotJPEG", snapshotOption, u'Kohama', 0],
["http://213.28.111.12/record/current.jpg", {}, u'Levi Ski station', 0],
["http://www.locogringo.com/Upload/netcam.jpg", {}, u'locogringo', 10],
["http://www.rovaniemi.fi/images/webcam/Kamera4_00001.jpg", {}, u'Lordin Aukio - Rovaniemi', 0],
["http://lovefm.miemasu.net:60002/SnapshotJPEG", snapshotOption, u'Love FM', 0],
["http://mainecam.dyndns.org:50004/SnapshotJPEG", snapshotOption, u'Maine Cam', 0],
["http://miyanoura.miemasu.net:60001/SnapshotJPEG", snapshotOption, u'Miya no Ura', 0],
["http://iozoonc5.city.miyazaki.miyazaki.jp/snapshotJPEG", snapshotOption, u'Firefoxes from the Miyazaki Zoo', 0],
["http://murolucano.dnsalias.com/jpg/image.jpg", {}, u'MuroLucano', 0],
["http://www.santaclauslive.com/cam/cam.jpg", {}, u'Napapiiri', 0],
["http://napoliwebcam.dnsalias.com/record/current.jpg", {}, u'Napoli', 300],
["http://221.251.109.90:84/SnapshotJPEG", snapshotOption, u'Neko Baba', 0],
["http://84.53.63.18:8000/axis-cgi/jpg/image.cgi", {}, u'nesna_botrorening', 0],
["http://www.wirednewyork.com/webcam2/wirednewyork2.jpg", {}, u'New York Empire State Building', 30],
["http://livesite.hongwanji.or.jp/camera/shirasu1.jpg", {}, u'Nishi HonganJi', 10],
["http://noshiro-ekimae.miemasu.net:92/snapshotJpeg", snapshotOption, u'Noshiro Ekimae', 0],
["http://ocean1cam-2.viewnetcam.com:81/SnapshotJPEG", snapshotOption, u'Ocean View', 0],
["http://opccam2.ohsu.edu/axis-cgi/jpg/image.cgi", {}, u'OPC Bridge', 0],
["http://www.slednh.com/webcam/netcam.jpg", {}, u'Ossipee Lake', 4],
["http://webkamera.overtornea.se/axis-cgi/jpg/image.cgi", {}, u'Overtornea', 0],
["http://webcam.ville.woob2.com/Pantheon_full.jpg", {}, u'Panthéon', 0],
["http://24.227.114.58/axis-cgi/jpg/image.cgi", {}, u'Perrys Ocean Edge', 0],
["http://69.57.245.115/axis-cgi/jpg/image.cgi", {}, u'Pineapple Beach', 0],
["http://dokumenty.prague-city.cz/camera/fullsize.jpg", {}, u'Prague Old Town', 0],
["http://civl3104acam1.ecn.purdue.edu/axis-cgi/jpg/image.cgi", {}, u'Purdue Armstrong Hall', 0],
["http://webcam.sewanee.edu/axis-cgi/jpg/image.cgi", {}, u'Quadcam Sewanee', 0],
["http://217.155.209.14:2220/SnapshotJPEG", snapshotOption, u'Random Street', 0],
["http://133.5.31.7/axis-cgi/jpg/image.cgi", {}, u'Room 134', 0],
["http://shibuya02.ipcam.jp/SnapshotJPEG", snapshotOption, u'Shibuya', 0],
["http://81.140.146.203/axis-cgi/jpg/image.cgi", {}, u'South Mainland', 0],
["http://195.243.185.195/axis-cgi/jpg/image.cgi", {}, u'Stuttgart Airport', 0],
["http://82.191.220.214/record/current.jpg", {}, u'Svincolo di Lago Negro', 0],
["http://www.bbc.co.uk/cgi-perl/webcams/camcache.pl", {'r': 120, 'h': 'mcs', 'l': 'webcams/london/548955.jpg'}, u'Swiss Cottage', 60],
["http://taosplaza.viewnetcam.com:50000/SnapshotJPEG", snapshotOption, u'Taos Plaza', 0],
["http://tezupin.ddo.jp/SnapshotJPEG", snapshotOption, u'Tezupin Hamsters', 0],
["http://207.251.86.248/cctv26.jpg", {}, u'Times Square CCTV', 0],
["http://www.bbc.co.uk/london/webcams/images/trafalgar_square.jpg", {}, u'Trafalgar Square', 0],
["http://trump.viewnetcam.com:50000/SnapshotJPEG", snapshotOption, u'Trump', 0],
["http://62.73.32.2/record/current.jpg", {}, u'Turun Tori', 0],
["http://www.serendipity.vi/images/vs.jpg", {}, u'Villa Serendipity', 10],
["http://69.146.254.227/axis-cgi/jpg/image.cgi", {}, u'VolksWagen Garage', 0],
["http://sprout.warwick.ac.uk/axis-cgi/jpg/image.cgi", {}, u'Warwick Sprout', 0],
["http://208.0.229.84/nphMotionJpeg/SnapshotJPEG", snapshotOption, u'Yacht Cam', 0],
["http://www.yosemite.org/vryos/sentinel.jpg", {}, u'Yosemite Park', 0]
]
if len(sys.argv) > 1 and sys.argv[1] == '--list':
    print "Webcam list :\n"
    i = 1
    for webcam in webcam_list:
        print '%d - %s : %s' % (i, webcam[2], webcam[0])
        i = i + 1
    exit()
elif len(sys.argv) > 1:
    try:
        i = int(sys.argv[1]) - 1
        webcam_list = [ webcam_list[i] ]
    except:
        print "Incorrect argument, integer lower than %d, '--list' or nothing expected" % len(webcam_list)
        exit()
bgfile = open('/tmp/bg.jpg', 'wb+')
##---------------------------------------------------
## This loop ensures the connection to the chosen webcam is possible.
##---------------------------------------------------
while True:
    chosen_one = random.choice(webcam_list)
    print chosen_one[2]
    try:
        if chosen_one[1]:
            params = urllib.urlencode(chosen_one[1])
            bgfile.write(urllib.urlopen(chosen_one[0], params).read())
        else:
            bgfile.write(urllib.urlopen(chosen_one[0]).read())
        break
    except:
        ##---------------------------------------------------
        ## This is in case we have only one webcam to choose from!
        ##---------------------------------------------------
        if len(webcam_list) < 2:
            print "connection problem."
            exit()
        pass
bgfile.close()
os.system('gconftool -t string -s /desktop/gnome/background/picture_filename /tmp/bg.jpg')
## print "background changed"
##---------------------------------------------------
## Main loop
##---------------------------------------------------
while True:
    bgtemp = open('/tmp/bgtmp.jpg', 'wb+')
    if chosen_one[1]:
        params = urllib.urlencode(chosen_one[1])
        bgtemp.write(urllib.urlopen(chosen_one[0], params).read())
    else:
        bgtemp.write(urllib.urlopen(chosen_one[0]).read())
    bgtemp.close()
    os.system('cp /tmp/bgtmp.jpg /tmp/bg.jpg')
    if chosen_one[3]:
        time.sleep(chosen_one[3])
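The bgtmp-then-cp step above can be tightened further with an atomic rename: write the new snapshot to a temporary file and rename it over the wallpaper, so the wallpaper manager never observes a partial or missing file. A minimal sketch of that pattern in modern Python (the commented wget-style usage line is illustrative):

```python
import os
import tempfile

def atomic_write(dest, data):
    """Write data to a temp file in dest's directory, then rename it
    over dest. rename() is atomic on POSIX, so any reader (here, the
    GNOME wallpaper manager) sees either the old or the new image,
    never a half-written or deleted one."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.rename(tmp, dest)

# in the wallpaper loop this would replace the bgtmp + cp step, e.g.:
#   atomic_write("/tmp/bg.jpg", urllib.urlopen(chosen_one[0]).read())
atomic_write("/tmp/bg_demo.jpg", b"\xff\xd8demo")
print(open("/tmp/bg_demo.jpg", "rb").read())
```

The temp file must live in the same directory as the destination, since rename() is only atomic within one filesystem.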
J'ai aussi modifié les scripts randomize et celui avec xplanet (étoiles et nuages en temps quasi-réel), si ça intéresse quelqu'un ...
nordinatueur
Re: [scripts] - 4 little homemade scripts!
Hi!
I know this is a bit of a thread necro, but I thought of switching between two images rather than using a buffer...
So I modified the end of the file like this:
# body
papierpeint ()
{
IMAGE=$1
nice wget http://shibuya02.ipcam.jp/SnapshotJPEG --post-data 'Resolution=160x120&Quality=High' -O $IMAGE > /dev/null
gconftool-2 -t string -s /desktop/gnome/background/picture_filename $IMAGE;
sleep 0.3;
}
switch=1
while [ 1 ]; do

IMAGE1=/tmp/bg1.jpg;
IMAGE2=/tmp/bg2.jpg;
if [ $switch = "1" ]; then
# Set image 1 and delete image 2, then note that image 2 goes next
papierpeint $IMAGE1
rm -f $IMAGE2;
switch=2
else
# Set image 2 and delete image 1, then note that image 1 goes next
papierpeint $IMAGE2
rm -f $IMAGE1;
switch=1
fi
done;
rm /tmp/bg1.jpg /tmp/bg2.jpg
We could even pass the two images as arguments and only call the function inside the if.
Also, the sleep happens just before the deletion, so the image stays in cache longer and that limits the flickering... Anyway, I think it's a great idea! :-) (even if I'm rather late to it.)
However, I have one problem: sometimes the script stops... wget freezes and nothing moves forward anymore (which is logical). All you can do is kill it and restart it.
Last edited by nordinatueur (25/01/2010 at 17:18)
titou345
Re: [scripts] - 4 little homemade scripts!
nordinatueur, can you post the full script again?
Last edited by titou345 (25/01/2010 at 23:42)
nordinatueur
Re: [scripts] - 4 little homemade scripts!
Well, with quite a delay... (I wasn't subscribed to the thread)
#!/bin/sh
# Static webcam as a wallpaper in gnome v 0.20080418
# Script by MrKikkeli -04.18.08
# first, kill any other static webcam wallpaper process that is currently running,
# otherwise there will be some nasty concurrency
# Make sure all your scripts' names end with webcamwallpaper.sh
for process_to_kill in `ps -ef | grep webcamwallpaper.sh | grep -v $$ | grep -v grep | awk '{print $2}'`; do
kill -1 ${process_to_kill} >> /dev/null;
done
# body
papierpeint ()
{
IMAGE=$1
nice wget http://shibuya02.ipcam.jp/SnapshotJPEG --post-data 'Resolution=160x120&Quality=High' -O $IMAGE > /dev/null
gconftool-2 -t string -s /desktop/gnome/background/picture_filename $IMAGE &
sleep 0.1;
}
switch=1;
compteur=1;
fin=1000 # stop after this many images; adjust to taste (with 0 the loop would never run)
while [[ $compteur -le $fin ]]; do
compteur=$(( $compteur + 1 ));
IMAGE1=/tmp/bg1.jpg;
IMAGE2=/tmp/bg2.jpg;
if [ $switch = "1" ]; then
# Set image 1 and delete image 2, then note that image 2 goes next
papierpeint $IMAGE1
rm -f $IMAGE2;
switch=2
else
# Set image 2 and delete image 1, then note that image 1 goes next
papierpeint $IMAGE2
rm -f $IMAGE1;
switch=1
fi
done;
rm /tmp/bg1.jpg /tmp/bg2.jpg
Well, I had to piece it back together because I'd lost it... I re-tested it and it works.
I also added an image counter to spare you some trouble...
By default, I advise you to run it in a terminal that STAYS OPEN!
Otherwise you won't be able to change your wallpaper normally anymore, and your computer may suffer from lag. Or you can also change the '$fin' variable.
Anyway, I hope this helps!
|
This is (mostly) easy to do, thanks to newforms admin. Basically, you'll need to create a custom inline subclass and override the template used to render it in the admin. Assuming you have an app called app and models Model1 and Model2, you'd do the following:
First, create your admin.py file:
from django.contrib import admin
from app.models import Model1, Model2
class Model2Admin(admin.ModelAdmin):
list_display = (...)
class Model2Inline(admin.TabularInline):
model = Model2
extra = 0
template = 'admin/app/model2/inline.html'
class Model1Admin(admin.ModelAdmin):
list_display = (...)
inlines = (Model2Inline,)
admin.site.register(Model1, Model1Admin)
admin.site.register(Model2, Model2Admin)
Then, create the inline.html template at admin/app/model2:
{% load i18n %}
<div class="inline-group">
<div class="tabular inline-related {% if forloop.last %}last-related{% endif %}">
{{ inline_admin_formset.formset.management_form }}
<fieldset class="module">
<h2>{{ inline_admin_formset.opts.verbose_name_plural|capfirst|escape }}</h2>
{{ inline_admin_formset.formset.non_form_errors }}
<table>
<thead>
<tr>
<th colspan="2">Field1</th>
<th>Field2</th>
<th>Field3</th>
</tr>
</thead>
{% for inline_admin_form in inline_admin_formset %}
<tr class="{% cycle row1,row2 %}">
<td class="original">
<!-- Render all form fields as hidden fields: -->
{{ inline_admin_form.pk_field.field }}
{% spaceless %}
{% for fieldset in inline_admin_form %}
{% for line in fieldset %}
{% for field in line %}
{{ field.field.as_hidden }}
{% endfor %}
{% endfor %}
{% endfor %}
{% endspaceless %}
</td>
<!-- then display just the values of the fields you're interested in: -->
<td class="field1">
<!-- Make this a link to the change detail page for this object: -->
<a href="{% url admin:app_model2_change inline_admin_form.original.pk %}">{{ inline_admin_form.original.field1 }}</a>
</td>
<td class="field2">
{{ inline_admin_form.original.field2 }}
</td>
<td class="field3">
{{ inline_admin_form.original.field3 }}
</td>
</tr>
{% endfor %}
</table>
</fieldset>
</div>
</div>
Next, add your app to INSTALLED_APPS in settings.py -- don't forget to add django.contrib.admin too :).
Finally, edit your root urls.py to include the following lines:
from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
...
(r'^admin/', include(admin.site.urls))
)
That should do it. Note that admin.site.urls, which makes the URL reversal possible, only works in Django 1.1 and later.
|
rezzakilla
Re: [Info] Installing the free ATI Radeon driver
Just saying... but it works great on my 7500..... :D
hugo69
Re: [Info] Installing the free ATI Radeon driver
Your tutorial is in the official docs, but it doesn't do much to help my Hercules ATI 9700 work correctly.
If I install the proprietary drivers, the TV software no longer launches, and some visualization plugins in xmms can crash the computer in full screen.
If I use the radeon driver it seems stable, but I gain nothing in quality, e.g. with screensavers and such.
So with my 9700, bought for 3500 F a while ago already, I basically get the performance of an MX2.
That, and grip running at a snail's pace, are the big flaws I find in Ubuntu for the moment.
But apart from that, nothing to complain about.
In any case, well done on the tutorial.
Stemp
Re: [Info] Installing the free ATI Radeon driver
Thanks
However, it's true that this tutorial is mostly intended for ATI cards up to the 9200.
As for your card, have you tried the new drivers on the ati.com site?
hugo69
Re: [Info] Installing the free ATI Radeon driver
No, I haven't tried; I'm no pro at compiling or at converting .rpm to .deb, even if the latter isn't necessarily an obstacle. As for compatibility, I have no idea.
I saw that and downloaded the X.Org 6.8 version, since I'm running X.Org and not XFree:
https://support.ati.com/ics/support/def … derID=3959
Actually I know nothing about this, and I'm afraid of finding myself at a console on reboot, being unable to fix it, and having to redo the whole installation.
I opened this thread so as not to pollute this one: http://forum.ubuntu-fr.org/viewtopic.ph … 904#p45904
At least I already know this method is only valid up to the 9200, so I can rule it out, and the solution would rather lie with the ATI drivers.
Thanks
NicoA380
Re: [Info] Installing the free ATI Radeon driver
Be careful not to fall into the trap like I did.
I installed ATI's proprietary fglrx driver thinking it would change something on my Radeon 8500. I believed it for a while, but then an OpenGL application of mine refused to start; it calls a buggy OpenGL function.
I spent a day trying to get back to the free ATI driver, removing fglrx in Synaptic and restoring my original xorg.conf. The result: a disaster. All 3D applications stuttered horribly, and glxgears reported a score below 100 fps.
I finally re-read the fglrx installation procedure, and there... of course!!! The /etc/modules file!! You have to add fglrx to it when installing, but obviously you have to remove it to go back to the free driver!
One reboot later, my 3D applications were smooth again, with a score of around 1500 fps in glxgears, roughly the same as with the fglrx driver.
All this rambling to say: don't forget to remove fglrx from /etc/modules when you go back to the free 'ati' driver.
Last edited by NicoA380 (25/06/2005 at 09:54)
Aito
Re: [Info] Installing the free ATI Radeon driver
Hello everyone,
Er, just to be sure I've understood:
I have a Radeon X600 Pro, so according to your list the 'radeon' driver only supports it in 2D (I'm soooo thrilled....)
And fglrx then, which cards does it support?
Cheers
Stemp
Re: [Info] Installing the free ATI Radeon driver
Hi,
The proprietary fglrx driver supports 3D on cards from the 8500 series onward.
So that's fine for the X600, but it's better to use the driver installer from the ati site (8.14.13).
Francois_Gregoire
Re: [Info] Installing the free ATI Radeon driver
ok it works, thanks for the tip...
I had to change a few values to get it going, but it's fine..
for the record, I have a Radeon Mobility M6 LY..
What did you change? Because my Mobility M6 doesn't work with direct rendering.
Here is what glxinfo tells me:
francois@ubuntu:~$ glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: No
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_OML_swap_method, GLX_SGI_make_current_read,
GLX_SGIS_multisample, GLX_SGIX_fbconfig
client glx vendor string: SGI
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_allocate_memory,
GLX_MESA_swap_control, GLX_MESA_swap_frame_usage, GLX_OML_swap_method,
GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control,
GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method,
GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_visual_select_group
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 (1.5 Mesa 6.2.1)
OpenGL extensions:
GL_ARB_depth_texture, GL_ARB_imaging, GL_ARB_multitexture,
GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow,
GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp,
GL_ARB_texture_cube_map, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ARB_transpose_matrix, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_func_separate, GL_EXT_blend_logic_op,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_copy_texture, GL_EXT_draw_range_elements, GL_EXT_fog_coord,
GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters,
GL_EXT_polygon_offset, GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_subtexture,
GL_EXT_texture, GL_EXT_texture3D, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_lod_bias, GL_EXT_texture_object,
GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_APPLE_packed_pixels,
GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once,
GL_ATIX_texture_env_combine3, GL_IBM_texture_mirrored_repeat,
GL_INGR_blend_func_separate, GL_MESA_pack_invert, GL_MESA_ycbcr_texture,
GL_NV_blend_square, GL_NV_point_sprite, GL_NV_texgen_reflection,
GL_NV_texture_rectangle, GL_SGIS_generate_mipmap,
GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp,
GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow,
GL_SGIX_shadow_ambient, GL_SUN_multi_draw_arrays
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat
----------------------------------------------------------------------
0x23 24 tc 0 24 0 r y . 8 8 8 0 0 16 0 0 0 0 0 0 0 None
0x24 24 tc 0 24 0 r y . 8 8 8 0 0 16 8 16 16 16 0 0 0 None
0x25 24 tc 0 32 0 r y . 8 8 8 8 0 16 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r . . 8 8 8 8 0 16 8 16 16 16 16 0 0 None
0x27 24 dc 0 24 0 r y . 8 8 8 0 0 16 0 0 0 0 0 0 0 None
0x28 24 dc 0 24 0 r y . 8 8 8 0 0 16 8 16 16 16 0 0 0 None
0x29 24 dc 0 32 0 r y . 8 8 8 8 0 16 8 16 16 16 16 0 0 None
0x2a 24 dc 0 32 0 r . . 8 8 8 8 0 16 8 16 16 16 16 0 0 None
So we can clearly see that direct rendering is set to No. But if I look at my /etc/X11/xorg.conf:
# /etc/X11/xorg.conf (xorg X Window System server configuration file)
#
# This file was generated by dexconf, the Debian X Configuration tool, using
# values from the debconf database.
#
# Edit this file with caution, and see the /etc/X11/xorg.conf manual page.
# (Type "man /etc/X11/xorg.conf" at the shell prompt.)
#
# This file is automatically updated on xserver-xorg package upgrades *only*
# if it has not been modified since the last upgrade of the xserver-xorg
# package.
#
# If you have edited this file but would like it to be automatically updated
# again, run the following commands:
#
# cp /etc/X11/xorg.conf /etc/X11/xorg.conf.custom
# sudo sh -c 'md5sum /etc/X11/xorg.conf >/var/lib/xfree86/xorg.conf.md5sum'
# sudo dpkg-reconfigure xserver-xorg
Section "Files"
FontPath "unix/:7100" # local font server
# if the local font server has problems, we can fall back on these
FontPath "/usr/lib/X11/fonts/misc"
FontPath "/usr/lib/X11/fonts/cyrillic"
FontPath "/usr/lib/X11/fonts/100dpi/:unscaled"
FontPath "/usr/lib/X11/fonts/75dpi/:unscaled"
FontPath "/usr/lib/X11/fonts/Type1"
FontPath "/usr/lib/X11/fonts/CID"
FontPath "/usr/lib/X11/fonts/100dpi"
FontPath "/usr/lib/X11/fonts/75dpi"
# paths to defoma fonts
FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/CID"
EndSection
Section "Module"
Load "bitmap"
Load "dbe"
Load "ddc"
Load "dri"
Load "extmod"
Load "freetype"
Load "glx"
Load "int10"
Load "record"
Load "type1"
Load "vbe"
EndSection
Section "Extensions"
Option "RENDER" "Enable"
EndSection
Section "InputDevice"
Identifier "Generic Keyboard"
Driver "keyboard"
Option "CoreKeyboard"
Option "XkbRules" "xorg"
Option "XkbModel" "pc104"
Option "XkbLayout" "us"
EndSection
Section "InputDevice"
Identifier "Configured Mouse"
Driver "mouse"
Option "CorePointer"
Option "Device" "/dev/input/mice"
Option "Protocol" "ImPS/2"
Option "Emulate3Buttons" "true"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
Identifier "Synaptics Touchpad"
Driver "synaptics"
Option "SendCoreEvents" "true"
Option "Device" "/dev/psaux"
Option "Protocol" "auto-dev"
Option "HorizScrollDelta" "0"
Option "MaxTapTime" "0"
EndSection
Section "Device"
Identifier "ATI Technologies, Inc. Radeon Mobility 9000 (M6 LY)"
ChipID 0x4c59
Driver "radeon"
Option "AGPMode" "4"
Option "AGPSize" "64" # default: 8
Option "RingSize" "8"
Option "BufferSize" "2"
Option "EnablePageFlip" "True"
Option "EnableDepthMoves" "True"
Option "RenderAccel" "true"
BusID "PCI:1:0:0"
EndSection
Section "Monitor"
Identifier "Generic Monitor"
Option "DPMS"
EndSection
Section "Screen"
Identifier "Default Screen"
Device "ATI Technologies, Inc. Radeon Mobility 9000 (M6 LY)"
Monitor "Generic Monitor"
DefaultDepth 24
SubSection "Display"
Depth 1
Modes "1400x1050"
EndSubSection
SubSection "Display"
Depth 4
Modes "1400x1050"
EndSubSection
SubSection "Display"
Depth 8
Modes "1400x1050"
EndSubSection
SubSection "Display"
Depth 15
Modes "1400x1050"
EndSubSection
SubSection "Display"
Depth 16
Modes "1400x1050"
EndSubSection
SubSection "Display"
Depth 24
Modes "1400x1050"
EndSubSection
EndSection
Section "ServerLayout"
Identifier "Default Layout"
Screen "Default Screen"
InputDevice "Generic Keyboard"
InputDevice "Configured Mouse"
InputDevice "Synaptics Touchpad"
EndSection
Section "DRI"
Mode 0666
EndSection
Everything follows the procedure.
I really wonder what I'm doing wrong...
Does anyone have a suggestion? Because, let's face it, 250 at glxgears isn't great for a P3 1.2 GHz laptop with 512 MB of RAM....
Francois
Francois_Gregoire
Re: [Info] Installing the free ATI Radeon driver
You wouldn't happen to have installed the fglrx drivers and forgotten to remove them, by any chance?
Nope.
I did not install fglrx, and I even checked in Synaptic to be sure. Neither I nor the base installation installed ATI's proprietary drivers.
Francois_Gregoire
Re: [Info] Installing the free ATI Radeon driver
Well then, we'd need to see the /var/log/Xorg.0.log file.
Try to find the error; it will surely help.
According to Xorg.0.log, it seems that 1400x1050 at 24 bits is too much for the meager 16 MB my video card has... I just need to find where I can try 16 bits instead. I'll try setting 16 bits as the default in xorg.conf and see if that works.
Stemp
Re: [Info] Installing the free ATI Radeon driver
Nope, that's not the right error message.
Using the 1400x1050 mode and/or 24 bits in no way prevents you from using the radeon's graphics acceleration.
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 (1.5 Mesa 6.2.1)
That's where your problem is. Search for "render" in your log.
Francois_Gregoire
Re: [Info] Installing the free ATI Radeon driver
Nope, that's not the right error message.
Using the 1400x1050 mode and/or 24 bits in no way prevents you from using the radeon's graphics acceleration.
It works now. The problem is that the M6 in a Dell laptop with only 16 MB cannot run 1400x1050 at 24 bits: according to Xorg.0.log, that configuration requires 17.5 MB. So by changing the default depth from 24 bits to 16 bits in /etc/X11/xorg.conf, it works at 1400x1050. Why keep 1400x1050 instead of lowering it? Very simple: it's the native resolution of an Inspiron 4100 or Latitude C610, or, in my case, an Inspiron 4100 hacked into a Latitude C610.
Now, instead of 200 fps in glxgears, I get 800 fps. That's far from the 1500-2000 I've seen on the forum, but then again my resolution is higher than 800x600.
Stemp
Re: [Info] Installing the free ATI Radeon driver
And glxinfo no longer gives you:
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 (1.5 Mesa 6.2.1)
?????
I don't understand why your resolution changes the "Rendering"!!
But anyway, very good info.
Francois_Gregoire
Re: [Info] Installing the free ATI Radeon driver
And glxinfo no longer gives you:
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 (1.5 Mesa 6.2.1)
?????
I don't understand why your resolution changes the "Rendering"!!
But anyway, very good info.
Here is my glxinfo after the change from 24 bits to 16 bits due to the lack of memory:
francois@ubuntu:~$ glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_OML_swap_method, GLX_SGI_make_current_read,
GLX_SGIS_multisample, GLX_SGIX_fbconfig
client glx vendor string: SGI
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_allocate_memory,
GLX_MESA_swap_control, GLX_MESA_swap_frame_usage, GLX_OML_swap_method,
GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control,
GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_swap_control,
GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_SGI_video_sync,
GLX_SGIS_multisample, GLX_SGIX_fbconfig
OpenGL vendor string: Tungsten Graphics, Inc.
OpenGL renderer string: Mesa DRI Radeon 20040929 AGP 4x x86/MMX/SSE NO-TCL
OpenGL version string: 1.2 Mesa 6.2.1
OpenGL extensions:
GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_mirrored_repeat, GL_ARB_texture_rectangle,
GL_ARB_transpose_matrix, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_logic_op, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_compiled_vertex_array, GL_EXT_convolution, GL_EXT_copy_texture,
GL_EXT_draw_range_elements, GL_EXT_histogram, GL_EXT_packed_pixels,
GL_EXT_polygon_offset, GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_subtexture, GL_EXT_texture,
GL_EXT_texture3D, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object,
GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_APPLE_packed_pixels,
GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once,
GL_IBM_rasterpos_clip, GL_IBM_texture_mirrored_repeat,
GL_MESA_ycbcr_texture, GL_MESA_window_pos, GL_NV_blend_square,
GL_NV_light_max_exponent, GL_NV_texture_rectangle,
GL_NV_texgen_reflection, GL_SGI_color_matrix, GL_SGI_color_table,
GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp,
GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat
----------------------------------------------------------------------
0x23 16 tc 0 16 0 r . . 5 6 5 0 0 16 0 0 0 0 0 0 0 None
0x24 16 tc 0 16 0 r . . 5 6 5 0 0 16 8 0 0 0 0 0 0 Slow
0x25 16 tc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 0 0 0 Slow
0x26 16 tc 0 16 0 r . . 5 6 5 0 0 16 8 16 16 16 0 0 0 Slow
0x27 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 0 0 0 0 0 0 None
0x28 16 tc 0 16 0 r y . 5 6 5 0 0 16 8 0 0 0 0 0 0 Slow
0x29 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 0 0 0 Slow
0x2a 16 tc 0 16 0 r y . 5 6 5 0 0 16 8 16 16 16 0 0 0 Slow
0x2b 16 dc 0 16 0 r . . 5 6 5 0 0 16 0 0 0 0 0 0 0 None
0x2c 16 dc 0 16 0 r . . 5 6 5 0 0 16 8 0 0 0 0 0 0 Slow
0x2d 16 dc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 0 0 0 Slow
0x2e 16 dc 0 16 0 r . . 5 6 5 0 0 16 8 16 16 16 0 0 0 Slow
0x2f 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 0 0 0 0 0 0 None
0x30 16 dc 0 16 0 r y . 5 6 5 0 0 16 8 0 0 0 0 0 0 Slow
0x31 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 0 0 0 Slow
0x32 16 dc 0 16 0 r y . 5 6 5 0 0 16 8 16 16 16 0 0 0 Slow
Last edited by Francois_Gregoire (03/07/2005 at 01:53)
Aito
Re: [Info] Installing the free ATI Radeon driver
Hello everyone
For Stemp:
You say:
The proprietary fglrx driver supports 3D on cards from the 8500 series onward.
So that's fine for the X600, but it's better to use the driver installer from the ati site (8.14.13).
Oh??
Thanks for the info; I definitely don't understand anything about how the ATI lineup is organized.
As for the proprietary driver:
I followed your advice, downloaded it, and installed it.
Everything went relatively well:
automatic install: nothing to report
fglrxconfig: I answered as best I could, often taking the suggested choices
then I restarted X, and there, in glxgears, I went from about 500 to about 150!!! (not to mention the keyboard problems and other issues)
So I dug into xorg.conf to make a mix of the original one and the one suggested by ATI.
I ended up with something that works without problems, but still with glxgears at 150!!
What do I do, doctor? Do I put the beast down?
Doesn't anyone around here have a Radeon X600 Pro?
Thanks for the help
Cheers
Aito
Stemp
Re: [Info] Installing the free ATI Radeon driver
Check that you do have an fglrx.ko file in the /etc/modules/fglrx/ directory, and that you don't have that file in the /etc/modules/2.610.... directory (your kernel version).
Aito
Re: [Info] Installing the free ATI Radeon driver
er...
isn't that rather in /lib/modules?
in /lib/modules:
I do have an fglrx.ko, which is a link to fglrx.2.6.10-5-686.ko in the same directory
then two others (files, not links) in:
/lib/modules/2.6.10-5-686/kernel/drivers/video$
and
/lib/modules/2.6.10-5-686/kernel/drivers/char/drm$
Stemp
Re: [Info] Installing the free ATI Radeon driver
Ooops..... that was just to see if you were paying attention
And no /lib/modules/fglrx directory???
Normally the ATI installer creates this directory to compile the new fglrx.ko file in.
Aito
Re: [Info] Installing the free ATI Radeon driver
yes yes, sorry, it is indeed in /lib/modules/fglrx
not in plain /lib/modules
what do I do now?? :rolleyes:
insan
Re: [Info] Installing the free ATI Radeon driver
Hello everyone!
I have a Radeon 9200 Pro (RV280) card, but I have a resolution problem: during boot, at some point I wait a long time before it continues with this line:
*ror : Temporary Failure in name resolution [Fail]
So I figured maybe the procedure described here would solve this problem, along with the benefits it brings... but I have a question:
Regarding the proprietary drivers, the two important packages to remove are: xorg-driver-fglrx and fglrx-control.
But in Synaptic I have:
- under "installed": xorg-common and xorg-driver-synaptics.
- and under "not installed": fglrx-control and fglrx-kernel-source.
Should I just uninstall the two packages under "installed" and then, of course, make the changes described in the first post?
Another question: what is glxgears?
Thanks a million in advance!
|
I'd like to display argparse help for my options the same way the default -h,--help and -v,--version are, without the ALLCAPS text after the option, or at least without the duplicated CAPS.
import argparse
p = argparse.ArgumentParser("a foo bar dustup")
p.add_argument('-i', '--ini', help="use alternate ini file")
print '\n', p.parse_args()
This is what I currently get with python foobar.py -h:
usage: a foo bar dustup [-h] [-i INI]

optional arguments:
  -h, --help         show this help message and exit
  -i INI, --ini INI  use alternate ini
And this is what I want:
usage: a foo bar dustup [-h] [-i INI]

optional arguments:
  -h, --help     show this help message and exit
  -i, --ini INI  use alternate ini
This would be acceptable too:
-i, --ini use alternate ini
I'm using python 2.7.
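One way to get the desired `-i, --ini INI` form is to subclass `HelpFormatter` and override how option invocations are rendered. Note this is only a sketch: `_format_action_invocation` is an underscore-prefixed argparse internal, so it may need adjusting across Python versions.

```python
import argparse

class CompactHelpFormatter(argparse.HelpFormatter):
    """Render '-i, --ini INI' instead of '-i INI, --ini INI'."""
    def _format_action_invocation(self, action):
        # Positionals and zero-argument flags (like -h) keep the default form.
        if not action.option_strings or action.nargs == 0:
            return super(CompactHelpFormatter,
                         self)._format_action_invocation(action)
        metavar = action.metavar or action.dest.upper()
        return '%s %s' % (', '.join(action.option_strings), metavar)

p = argparse.ArgumentParser("a foo bar dustup",
                            formatter_class=CompactHelpFormatter)
p.add_argument('-i', '--ini', help="use alternate ini file")
help_text = p.format_help()
print(help_text)
```

The usage line still shows `[-i INI]`; only the options listing changes.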
|
If myVariable is a string that comes from an external source (like a database), you first need to find out what kind of string it is.
Since you seem to be using python2, there are two main possibilities: myVariable is either a unicode string object, or a bytes string object. A unicode string is one that has already been decoded to text characters. A bytes string is one that has already been encoded (using an encoding like 'utf-8' or 'latin-1').
It appears from the example code in your question that myVariable is a bytes string object.
The reason you get the first UnicodeDecodeError is because you are trying to re-encode a byte string. To do this, python would first have to decode myVariable to a unicode string object before it could apply the new encoding. By default, python assumes an "ascii" encoding when automatically decoding in this way - but since myVariable contains bytes beyond the ascii range (0-128), an error occurs.
The same situation occurs when you try to pass myVariable to the unicode function. Unless an explicit encoding is given, python will again assume "ascii", and you will see the same UnicodeDecodeError.
Now, when it comes to writing myVariable to a file, the solution is very simple if it is a bytes string object: do nothing! Just write myVariable directly to the file:
f = open(path, 'wb')
f.write(myVariable)
f.close()
However, when you read the file back, you will need to know the original encoding of myVariable in order to decode it to unicode:
f = open(path, 'rb')
myVariable = f.read().decode('utf-8')
f.close()
And now if you modify myVariable and want to write it back out to file again, you have to remember that this time it is a unicode string, and so you need to encode it first:
f = open(path, 'wb')
f.write(myVariable.encode('utf-8'))
f.close()
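Putting the three steps together, here is a minimal round-trip sketch; the temp-file path and the sample text are illustrative, not from the original question:

```python
# -*- coding: utf-8 -*-
# Round-trip sketch: write bytes as-is, decode on read, re-encode on write.
import os
import tempfile

myVariable = u'caf\xe9 au lait'.encode('utf-8')  # a bytes string

fd, path = tempfile.mkstemp()
os.close(fd)

# 1. A bytes string is written directly: no encode step needed.
with open(path, 'wb') as f:
    f.write(myVariable)

# 2. Reading it back requires knowing the original encoding.
with open(path, 'rb') as f:
    text = f.read().decode('utf-8')  # now a unicode (text) string

# 3. A unicode string must be encoded again before writing it out.
with open(path, 'wb') as f:
    f.write(text.encode('utf-8'))

os.remove(path)
```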
|
I've been looking for a way to update my Twitter status from a Python client. As this client only needs to access one Twitter account, it should be possible to do this with a pre-generated oauth_token and secret, according to http://dev.twitter.com/pages/oauth_single_token
However, the sample code does not seem to work; I'm getting 'could not authenticate you' or 'incorrect signature'..
As there are a bunch of different python-twitter libraries out there (and not all of them are up to date), I'd really appreciate it if anybody could point me to a library that currently works for POST requests, or post some sample code!
Update: I've tried Pavel's solution, and it works as long as the new message is only one word long, but as soon as it contains spaces, I get this error:
status = api.PostUpdate('hello world')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\site-packages\python_twitter\twitter.py", line 2459, in PostUpdate
self._CheckForTwitterError(data)
File "C:\Python26\lib\site-packages\python_twitter\twitter.py", line 3394, in _CheckForTwitterError
raise TwitterError(data['error'])
python_twitter.twitter.TwitterError: Incorrect signature
If however the update is just one word, it works:
status = api.PostUpdate('helloworld')
{'status': 'helloworld'}
Any idea why this might be happening?
Thanks a lot in advance,
Hoff
|
ETags, or entity-tags, are an important part of HTTP, being a critical part of caching, and also used in "conditional" requests. So what is an etag?
That's not very helpful, is it?
The easiest way to think of an etag is as an MD5 or SHA1 hash of all the bytes in a representation. If just one byte in the representation changes, the etag will change.
Aside: I am only talking about strong etags here. There are such things as weak etags, which only indicate that two representations are semantically equivalent. From here on out, when I say 'etag', I mean a strong etag.
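To make the hash idea concrete, here is a sketch of computing a strong etag with Python's hashlib; the representation bytes are made up:

```python
# Sketch: a strong etag as the MD5 hash of all the bytes in the representation.
import hashlib

representation = b'<html>hello, world</html>'
etag = '"%s"' % hashlib.md5(representation).hexdigest()

# Changing even one byte produces a different etag:
changed = representation.replace(b'hello', b'Hello')
etag2 = '"%s"' % hashlib.md5(changed).hexdigest()
```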
ETags are returned in a response to a GET:
joe@joe-laptop:~$ curl --include http://bitworking.org/news/
HTTP/1.1 200 Ok
Date: Wed, 21 Mar 2007 15:06:15 GMT
Server: Apache
etag: "078de59b16c27119c670e63fa53e5b51"
Content-Length: 23081
Vary: Accept-Encoding,User-Agent
Connection: close
Content-Type: application/xhtml+xml; charset=utf-8
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta content="text/html; charset=utf-8" http-equiv="content-type" /><link href="/favicon.ico" type="image/ico" rel="shortcut icon" />
...
On a subsequent GET request you can send the value from that ETag: header in an If-None-Match: header. If a current representation has that etag, i.e. if the representation hasn't changed, then the response is a 304 with no entity body returned.
That's a great savings in bandwidth.
The inclusion of an If-* header turns any normal request into a "conditional" request, in this case our GET became a "conditional" GET.
The etag is used as a cache-validator and can be combined with other cache related headers to great effect. See my article on XML.com: Doing HTTP Caching Right: Introducing httplib2.
In addition to being used during GETs, the etag can be used to do a "conditional" PUT, which can be used to avoid the Lost Update Problem.
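As a sketch of how a server might enforce that (not code from this article; the function name and statuses are illustrative):

```python
def conditional_put_status(if_match, current_etag):
    # A client that previously GOT the resource sends its etag back in an
    # If-Match: header. If the resource's etag has changed since then,
    # someone else updated it first, so the PUT is refused rather than
    # silently overwriting their work.
    if if_match is not None and if_match != current_etag:
        return '412 Precondition Failed'   # lost update averted
    return '200 Ok'

print(conditional_put_status('"078de5"', '"078de5"'))  # 200 Ok
print(conditional_put_status('"078de5"', '"9a1b2c"'))  # 412 Precondition Failed
```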
The Apache httpd web server has built in support for generating etags for statically served files. FileETag allows you to set what pieces of information are used to generate an etag. You can choose a combination of inode, last-modified, and the file size.
Why not turn them all on? Well, in cases where you are serving the same file from several servers you definitely want to turn off the use of the 'inode' for generating the etag since the inode will vary from system to system.
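For example, a multi-server setup might build etags from only the last-modified time and file size (a sketch; check your Apache version's documentation for the exact FileETag syntax):

```apacheconf
# Exclude the inode, which differs from server to server
FileETag MTime Size
```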
If you are not serving up static content then you need to do some more work to enable etags. How much work you do will determine how much benefit you get from etags. The deeper the concept of an entity-tag permeates your application, the more benefit you will receive.
Aside: Many of the things I'm talking about with ETags and If-* headers can also be done with a last modified time served in the Last-Modified: header. In general I advise against using Last-Modified:, since it is limited to one-second granularity and you may have issues with clock skew among a group of servers. ETags are just conceptually simpler and just as powerful. This advice is only really for servers, which can decide which cache-validators to support; clients have no such luck and should support both.
How do you generate an etag? Find all the bits of information that could impact your representation of a resource and use that information to build an 'opaque' etag. I usually do that by concatenating the values of these key pieces of information as strings and then calculating an MD5 or SHA1 hash of that string. The MD5 hashed value is certainly opaque, and the MD5 hash assures that the actual etag is only 32 characters long, while ensuring that they are highly unlikely to collide.
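That recipe might look like this in Python (a sketch, using hashlib rather than the older md5 module; the input values are made-up examples):

```python
import hashlib

def make_etag(*parts):
    # Concatenate the key pieces of information as strings, then hash:
    # the MD5 hex digest is opaque and always exactly 32 characters.
    raw = ''.join(str(part) for part in parts)
    return '"%s"' % hashlib.md5(raw.encode('utf-8')).hexdigest()

etag = make_etag(42, '2007-03-21T15:06:15', 'template-v3')
print(len(etag))   # 34: 32 hex characters plus the surrounding quotes
```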
You can get away with a very shallow implementation of etags and get a lot of benefits to your bandwidth. You could implement a simple layer in your stack that actually built the full response and then calculated an MD5 hash of the bytes returned and use that as an etag. From that simple base you could handle "conditional" GETs and achieve a savings in bandwidth. This isn't to be sneezed at, as the savings could be substantial.
On the other hand, if you bring the concept of etags deeper into your application you can get even more benefits. First, you could support things like "conditional" PUTs, which allows clients to detect lost updates. [For the terminology-oriented, this is a form of optimistic concurrency.]
Secondly, the data query and templating needed to create a representation may be the time-consuming part of the response, and the bandwidth savings may be negligible in comparison. In this case it's beneficial to bury etag support deep in your application and use it to shortcut the querying and templating steps.
REST Tip: Deep etags give you more benefits.
For this to work you need to pick out key values or characteristics of your data that will determine if a representation will change, and then build an etag from that. For example, in the case of files, Apache httpd uses a combination of inode, last-modified time, and the file size. For your application you may already store a timestamp of when each resource is modified, which is perfect information to fold into an etag.
In the case of data stored in a database, if a resource is tied to a single row in a table then a simple timestamp or revision number on the row is a good source of information for generating an etag. But that is just one source. If you then process that through a template then the 'version' of the template also needs to used in calculating your etag. A change to the template would alter the representation even though the revision number for a row in the database didn't change, so both need to be used together when calculating the etag.
Here are some examples of deep etags that avoid a lot of computation.
Here is the bit of code in the sparklines web service that checks for matching etags:
if_none_match = os.environ.get('HTTP_IF_NONE_MATCH', '')
if if_none_match and str(hash(os.environ.get('QUERY_STRING', '') + __version__)) == if_none_match:
    not_modified()
The whole source file is available. In this case the etag is driven off the query parameters passed into the service and the version of the file spark.cgi itself.
I took a slightly different approach in the Critter Generator and instead of using the file version I used the last-modified timestamp of the program.
import os, sys, sha  # on modern Python, use hashlib.sha1 in place of sha.sha

def etag(critterid):
    file_version = os.stat(sys.argv[0]).st_mtime
    etag = sha.sha(critterid)
    etag.update(str(file_version))
    return '"%s"' % etag.hexdigest()
In both of these services the etag check is done very early and avoids all of the calculations required for a non-matching response.
You have a good knowledge of your domain and can come up with a method of determining an etag from your data. Maybe the data is never updated, or you keep track of updates already, or your database keeps fine grained timestamps on rows that you can use for etag generation.
You should use those.
What I'm going to show you is a sledgehammer approach that doesn't rely on specialized knowledge of your problem domain. Like all sledgehammers, it's a heavy tool that should be applied with care.
If your resource maps one to one with a row in a table, and you keep a revision number for each row then you can use that as a value to build an etag.
This technique doesn't require adding any code to update the revision number on the rows; that can be done by using a trigger. Here is an example using SQLite:
CREATE TABLE notes (
id INTEGER PRIMARY KEY autoincrement,
note TEXT,
rev INTEGER DEFAULT 0);
CREATE TRIGGER insert_notes_revision AFTER UPDATE ON notes
BEGIN
UPDATE notes SET
rev = rev+1
WHERE id = new.id;
END;
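You can convince yourself the trigger works with Python's built-in sqlite3 module (a quick test harness, not part of the sample application; note SQLite's recursive triggers are off by default, so the trigger's own UPDATE doesn't re-fire it):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
CREATE TABLE notes (
    id INTEGER PRIMARY KEY autoincrement,
    note TEXT,
    rev INTEGER DEFAULT 0);
CREATE TRIGGER insert_notes_revision AFTER UPDATE ON notes
BEGIN
    UPDATE notes SET rev = rev + 1 WHERE id = new.id;
END;
''')
conn.execute("INSERT INTO notes (note) VALUES ('first draft')")
conn.execute("UPDATE notes SET note = 'second draft' WHERE id = 1")
rev, = conn.execute('SELECT rev FROM notes WHERE id = 1').fetchone()
print(rev)   # 1: the trigger bumped the revision automatically
```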
Let's look at some Python code that handles this: a trivial Python application for editing 'notes'. Just editing. You can't even add or delete notes, just edit them. All of the code for this sample is available here. This service is built on my throwaway Python framework Robaccia. Here are the modifications to robaccia.py. Note that render() depends upon the caller passing in some information, raw_etag, to be used as a basis for an etag. It then adds in a dependency on the last-modified timestamp of the template file. It returns a 304 if appropriate; otherwise it includes the calculated entity tag in the ETag: header.
def render(environ, start_response, template_file, vars, headers={}, status="200 Ok", raw_etag=None):
    file = os.path.join("templates", template_file)
    if raw_etag:
        last_modified = str(os.stat(file).st_mtime)
        hash = md5.new(raw_etag)
        hash.update(last_modified)
        etag = '"%s"' % hash.hexdigest()
        headers['etag'] = etag
        if etag == environ.get('HTTP_IF_NONE_MATCH', ''):
            start_response('304 Not Modified', [])
            return []
    (contenttype, serialization) = ('text/html; charset=utf-8', 'html')
    ext = template_file.rsplit(".")
    if len(ext) > 1 and (ext[1] in extensions):
        (contenttype, serialization) = extensions[ext[1]]
    # Only serve XHTML to those clients that can understand it.
    if serialization in matching:
        best = mimeparse.best_match(matching.keys(), environ.get('HTTP_ACCEPT', 'application/xhtml+xml'))
        (contenttype, serialization) = (best, matching[best])
        if serialization == 'xhtml' and environ.get('HTTP_USER_AGENT', '').find("MSIE") >= 0:
            (contenttype, serialization) = extensions['html']
    template = kid.Template(file, **vars)
    body = template.serialize(output=serialization, encoding='utf-8')
    headers['Content-Type'] = contenttype
    start_response(status, list(headers.iteritems()))
    return [body]
And here is the view implementation, based on wsgicollection. The _raw_etag() method is what builds up the raw information to be used in the calculation of the etag. In this case it is just a concatenation of all the 'rev' columns in the rows used to generate the response.
import robaccia
import dbconfig
from wsgicollection import Collection
from config import log

class Notes(Collection):
    def _raw_etag(self, cursor):
        e = []
        for row in iter(cursor):
            e.append("%d-%d" % (row['id'], row['rev']))
        return "-".join(e)

    def list(self, environ, start_response):
        c = dbconfig.connection.cursor()
        rows = list(c.execute("select id, note, rev from notes;"))
        return robaccia.render(environ, start_response, 'list.xhtml', {'rows': rows}, raw_etag=self._raw_etag(rows))

    def get_edit_form(self, environ, start_response):
        c = dbconfig.connection.cursor()
        id = environ['wsgiorg.routing_args'][1]['id']
        rows = list(c.execute("select id, note, rev from notes where id = ? ;", id))
        return robaccia.render(environ, start_response, 'edit_form.xhtml', {'rows': rows}, raw_etag=self._raw_etag(rows))

    def update(self, environ, start_response):
        c = dbconfig.connection.cursor()
        id = environ['wsgiorg.routing_args'][1]['id']
        f = environ['formpostdata']
        note = f.get('note', ['no note found'])[0]
        rev = f.get('rev', ['no rev found'])[0]
        c.execute('update notes set note=:note where id=:id;', locals())
        dbconfig.connection.commit()
        start_response("303 See Other", [('Location', "../")])
        return []
You'll note that this implementation requires looking at all the rows that will be used to generate the response, so this technique isn't going to save you any computation time; it will only save bandwidth, and the processing time for the templates.
I told you it was a sledgehammer.
One more thing to note: look at the implementation of _raw_etag(). It concatenates the 'id' and 'rev' for each row used to build the representation. If this list ran to hundreds of items and we didn't form the etag from an MD5 hash of raw_etag, then we'd end up schlepping around an etag hundreds of bytes long, which is no way to save bandwidth.
Again, the point isn't to show you exactly how you should be implementing etags, but to give you some ideas on how to start, and how you can use them to speed up your application. The deeper you build etags into your application, and the earlier you start thinking about them, the better off you'll be.
2007-03-22
|
SESTAY
gvfx on 12.04
Hello:
I sometimes use gvfx to do a few video transitions, but since the migration gvfx won't launch anymore.
Here is the console message:
from PyQt4 import QtCore, QtGui
ImportError: No module named PyQt4
After searching, I did find these two "libraries":
/usr/include/qt4/QtCore/QtCore
/usr/include/qt4/QtGui/QtGui
Offline
inbox
Re: gvfx on 12.04
Hi,
Based on this thread, and since you migrated your system from 11.10 to 12.04 (?), there are two possibilities:
1/ python is not correctly or completely installed;
2/ you have two versions of python installed and Gvfx doesn't know which one to use.
Are the python3-pyqt4 and python-qt4 packages installed?
Please post the output of the following commands:
whereis python
which python
echo $PATH
echo $PYTHONPATH
Cheers
Last edited by inbox (11/07/2012 at 15:34)
SESTAY
Re: gvfx on 12.04
Good evening, and thanks for your help.
No, I must have expressed myself badly: I was indeed on 11.10, but I completely reinstalled the system.
Here are the results of the commands:
whereis python
python: /usr/bin/python /usr/bin/python2.7 /usr/bin/python3.2 /usr/bin/python3.2mu /etc/python /etc/python2.7 /etc/python3.2 /usr/lib/python2.7 /usr/lib/python3.2 /usr/bin/X11/python /usr/bin/X11/python2.7 /usr/bin/X11/python3.2 /usr/bin/X11/python3.2mu /usr/local/lib/python2.7 /usr/local/lib/python3.2 /usr/include/python2.7_d /usr/include/python2.7 /usr/include/python3.2mu /usr/share/python /usr/share/man/man1/python.1.gz
which python
/usr/bin/python
echo $PATH
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
echo $PYTHONPATH
nothing
A quick test: launching python, here is what I get when I run help modules
help> modules
Please wait a moment while I gather a list of all available modules...
/usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
import gobject._gobject
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
from gtk import _gtk
** (python:2370): CRITICAL **: pyg_register_boxed: assertion `boxed_type != 0' failed
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: cannot register existing type `GdkDevice'
from gtk import _gtk
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_type_get_qdata: assertion `node != NULL' failed
from gtk import _gtk
Segmentation fault (core dumped)
There you go, in case this gives anyone ideas.
inbox
Re: gvfx on 12.04
I just tried installing it on Precise Pangolin and I get this error:
gvfx
Traceback (most recent call last):
File "/usr/share/gvfx/gvfxgui.py", line 26, in <module>
import gnome.ui
ImportError: No module named gnome.ui
Since "gnome.ui" is indeed present, I think Gvfx is not looking for it in the right place.
By the way, I tested the "gvfx_0.1.5-1_all.deb" version. Which version did you install?
SESTAY
Re: gvfx on 12.04
Hello
Version gvfx_0.1.6.tar.gz, just extracted and launched from a folder in my home directory; it worked perfectly on Ubuntu 11.10. I stayed on that one because version 1.7 didn't work well.
With gvfx_0.1.6:
At first I had to edit the gvfx launcher to change the path to the UI, gvfxgui.py, and put in its full path;
otherwise I get
python: can't open file 'gvfxgui.py': [Errno 2] No such file or directory
On the other hand, I went and had a look in Synaptic.
I have python 2.7 (>=2.7.3)
and when I go to Properties > Dependencies, the list shows:
_________________________________________
Conflicts with python-central (<0.5.5)
Breaks: python-bz2 (2 lines)
Breaks: python-csv (2 lines)
Breaks: python-email (2 lines)
Breaks: update-manager-core (<0.200.5.2) (2 lines)
Replaces: python-dev (<2.6.5-2) (2 lines)
Conflicts with python
_____________________________________
and for python-central:
among others
conflicts with debhelper (<5.0.37.3ubuntu2)
Last edited by SESTAY (12/07/2012 at 11:58)
inbox
Re: gvfx on 12.04
With gvfx_0.1.6 extracted into the Documents folder, I then did:
sudo mkdir /usr/share/gvfx
sudo mv ~/Documents/gvfx-0.1.6/* /usr/share/gvfx/
Then I launched Gvfx with the command:
python /usr/share/gvfx/gvfxgui.py
It works.
SESTAY
Re: gvfx on 12.04
Same problem.
Just to see, I installed the deb of version 1.5.
When I launch it, a dialog box opens: You need to install python bindings for libvte
After checking, I do have libvte-2.90-9 installed.
SESTAY
Re: gvfx on 12.04
Hello:
After updating the system, this works. Some explanations will follow shortly, because a few tweaks to the scripts are needed.
SESTAY
Re: gvfx on 12.04
Continued...
Here is what I did:
1. I put the gvfx 0.1.7 version in a directory in my home folder.
2. I extracted the matching version of Blender (2.57) so that everything works together properly.
3. I edited the gvfxgui.py script (line 527, cmd = pointing at the correct paths of the files to load)
and it's up and running again.
A few problems still remain to be solved, so I'm going to open a thread in the development section.
|
There are a few automatic memoization libraries available on the internet for various different languages; but without knowing what they are for, where to use them, and how they work, it can be difficult to see their value. What are some convincing arguments for using memoization, and what problem domain does memoization especially shine in? Information for the uninformed would be especially appreciated here.
The popular factorial answer here is something of a toy answer. Yes, memoization is useful for repeated invocations of that function, but the relationship is trivial: in the "print factorial(N) for 0..M" case you're simply reusing the last value.
Many of the other examples here are just 'caching'. Which is useful, but it ignores the awesome algorithmic implications that the word memoization carries for me.
Far more interesting are cases where different branches of a single invocation of a recursive function hit identical sub-problems, in a non-trivial pattern, such that indexing into a cache is actually useful.
For example, consider n-dimensional arrays of integers whose absolute values sum to k. E.g. for n=3, k=5, [1,-4,0], [3,-1,1], [5,0,0], [0,5,0] would be some examples. Let V(n,k) be the number of possible unique arrays for a given n,k. Its definition is:
V(n,0) = 1; V(0,k) = 0 for k > 0; otherwise V(n,k) = V(n-1,k) + V(n,k-1) + V(n-1,k-1)
This function gives 102 for n=3, k=5.
Without memoization this quickly becomes very slow to compute, even for fairly modest numbers. If you visualize the processing as a tree, with each node an invocation of V() expanding to three children, you'd have 186,268,135,991,213,676,920,832 V(n,0)=1 leaves in the computation of V(32,32)... Implemented naively, this function rapidly becomes uncomputable on available hardware.
But many of the child branches in the tree are exact duplicates of each other, though not in some trivial way that could easily be eliminated like the factorial function. With memoization we can merge all those duplicate branches. In fact, with memoization V(32,32) only executes V() 1024 (n*k) times, which is a speedup of a factor of 10^21 or so (and it gets larger as n,k grow, obviously) in exchange for a fairly small amount of memory. :) I find this kind of fundamental change to the complexity of an algorithm far more exciting than simple caching. It can make intractable problems easy.
Because python numbers are naturally bignums you can implement this formula in python with memoization using a dictionary and tuple keys in only 9 lines. Give it a shot and try it without the memoization.
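Here is one way that sketch could look; the recurrence is as described above (three-way recursion with V(n,0) = 1), and the memo is a plain dict keyed by (n, k) tuples:

```python
def V(n, k, memo={}):
    # Count n-element integer arrays whose absolute values sum to exactly k.
    if k == 0:
        return 1   # only the all-zeros array
    if n == 0:
        return 0   # no slots left but k > 0: impossible
    if (n, k) not in memo:
        memo[(n, k)] = V(n - 1, k) + V(n, k - 1) + V(n - 1, k - 1)
    return memo[(n, k)]

print(V(3, 5))   # 102, matching the value quoted above
V(32, 32)        # returns instantly; the naive version never finishes
```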
In my opinion, Fibonacci and factorial calculations are not really the best examples. Memoisation really comes into its own when you have:
Obviously if you
...even better than #2 is if
Note that a lot of this might be probabilistic (or intuitive): sure, someone might try all of the 10^13 possible inputs to your magic calculation, but you know that realistically they won't. If they do, the overhead of memoisation will actually be of no benefit to them. But you may well decide that this is acceptable, or allow bypassing the memoisation in such circumstances.
Here's an example, and I hope it's not too convoluted (or generalised) to be informative.
In some firmware I've written, one part of the program takes a read from an ADC, which could be any number from
Creating a lookup table ahead of time is ridiculous. The input domain is the Cartesian product of [
But no user requires or expects the device to work well when conditions change rapidly, and they'd
Given the definition of "slowly changing conditions" that the typical user expects, that ADC value is going to settle to a particular value and remain within about 0x010 of its settled value. Which value depends on the conditions.
The result of the calculation can therefore be memoised for these 16 potential inputs. If environmental conditions
The drawback here is that if environmental conditions change a lot, that already-slow calculation runs a little slower. We've already established that this is an unusual use-case, but if someone later reveals that actually, they
Memoization is technique to store the answers to subproblems, so that a program does not need to re-solve the same sub-problems later.
It is often an important technique in solving problems using Dynamic Programming.
Imagine enumerating all paths from the top-left corner of a grid to the bottom-right corner of a grid. A lot of the paths overlap each other. You can memoize the solutions calculated for each point on the grid, building from the bottom-right, back up to the top-left. This takes the computing time down from "ridiculous" to "tractable".
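A sketch of that grid example in Python (counting monotone right/down paths; the memo dict turns the doubly recursive count into O(width × height) work):

```python
def paths(right, down, memo=None):
    # Count the monotone paths from one corner of a grid to the other,
    # moving only right or down; overlapping subpaths are memoized.
    if memo is None:
        memo = {}
    if right == 0 or down == 0:
        return 1   # only one straight path remains
    if (right, down) not in memo:
        memo[(right, down)] = (paths(right - 1, down, memo)
                               + paths(right, down - 1, memo))
    return memo[(right, down)]

print(paths(2, 2))    # 6
print(paths(20, 20))  # 137846528820, still instant with the memo
```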
Another use is: List the factorials of the number 0 to 100. You do not want to calculate 100! using
For a data point, for my grid solving problem above (the problem is from a programming challenge):
Memoization shines in problems where solutions to subproblems can be reused. Speaking simply, it is a form of caching. Let's look at the factorial function as an example.
3! is a problem on its own, but it's also a subproblem for n! where n > 3, such as
Any problem where subproblem solutions can be reused (the more frequently the better) is a candidate for using memoization.
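For instance, a memoized factorial reuses every smaller factorial as a subproblem (a sketch; the dict-as-default-argument memo persists across calls):

```python
def factorial(n, memo={0: 1}):
    # n! reuses (n - 1)! as a subproblem; computing 100! once fills the
    # memo with every factorial from 0! up to 100! as a side effect.
    if n not in memo:
        memo[n] = n * factorial(n - 1)
    return memo[n]

print(factorial(5))   # 120
print(factorial(6))   # 720: one new multiplication, the rest from the memo
```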
Memoization can turn exponential time (or worse) into linear time (or better) when applied to problems that are multiple-recursive in nature. The cost is generally O(n) space.
The classic example is computing the Fibonacci sequence. The textbook definition is the recurrence relation: F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n > 1.
Implemented naively, it looks like this:
int fib(int n) {
if (n == 0) {
return 0;
}
else if (n == 1) {
return 1;
}
else {
return fib(n-1) + fib(n-2);
}
}
You can see that the runtime grows exponentially with n because each of the partial sums is computed multiple times.
Implemented with memoization, it looks like this (clumsy but functional):
int fib(int n) {
static bool initialized = false;
static std::vector<int> memo;
if (!initialized) {
memo.push_back(0);
memo.push_back(1);
initialized = true;
}
if (memo.size() > n) {
return memo[n];
}
else {
const int val = fib(n-1) + fib(n-2);
memo.push_back(val);
return val;
}
}
Timing these two implementations on my laptop, for n = 42, the naive version takes 6.5 seconds. The memoized version takes 0.005 seconds (all system time--that is, it's I/O bound). For n = 50, the memoized version still takes 0.005 seconds, and the naive version finally finished after 5 minutes & 7 seconds (never mind that both of them overflowed a 32-bit integer).
Memoization can radically speed up algorithms. The classic example is the Fibonacci series, where the recursive algorithm is insanely slow, but memoization automatically makes it as fast as the iterative version.
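In Python, that "automatic" memoization is one decorator away: functools.lru_cache (available since Python 3.2) memoizes the naive recursive definition without touching its code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache every distinct argument, unbounded
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(42))   # 267914296, in linear rather than exponential time
```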
One of the uses for a form of memoization is in game tree analysis. In the analysis of non-trivial game trees (think chess, go, bridge) calculating the value of a position is a non-trivial task and can take significant time. A naive implementation will simply use this result and then discard it but all strong players will store it and use it should the situation arise again. You can imagine that in chess there are countless ways of reaching the same position.
To achieve this in practise requires endless experimentation and tuning but it is safe to say that computer chess programs would not be what they are today without this technique.
In AI the use of such memoization is usually referred to as a 'transposition table'.
Memoization is essentially caching the return value of a function for a given input. This is useful if you're going to repeat a function call many times with the same input, and especially so if the function takes some time to execute. Of course, since the data has to be stored somewhere, memoization is going to use more memory. It's a tradeoff between using CPU and using RAM.
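A minimal hand-rolled version of that cache, to make the CPU/RAM trade-off concrete (a sketch, not any particular library's implementation):

```python
import functools

def memoize(fn):
    cache = {}   # the RAM half of the CPU/RAM trade-off

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:      # only hashable positional args supported
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = 0

@memoize
def slow_square(x):
    global calls
    calls += 1                     # count how often the real work runs
    return x * x

slow_square(7)
slow_square(7)
print(calls)   # 1: the second call was answered from the cache
```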
I use memoization all the time when migrating data from one system to another (ETL). The concept is that if a function will always return the same output for the same set of inputs, it may make sense to cache the result, especially if it takes a while to calculate that result. When doing ETL, you're often repeating the same actions lots of times on lots of data, and performance is often critical. When performance isn't a concern or is negligible, it probably doesn't make sense to memoize your methods. Like anything, use the right tool for the job.
I think mostly everybody has covered the basics of memoization, but I'll give you some practical examples where memoization can be used to do some pretty
Of course there are
As an example of how to use memoization to boost an algorithm's performance, the following runs roughly
class Slice:
    __slots__ = 'prefix', 'root', 'suffix'

    def __init__(self, prefix, root, suffix):
        self.prefix = prefix
        self.root = root
        self.suffix = suffix

################################################################################

class Match:
    __slots__ = 'a', 'b', 'prefix', 'suffix', 'value'

    def __init__(self, a, b, prefix, suffix, value):
        self.a = a
        self.b = b
        self.prefix = prefix
        self.suffix = suffix
        self.value = value

################################################################################

class Tree:
    __slots__ = 'nodes', 'index', 'value'

    def __init__(self, nodes, index, value):
        self.nodes = nodes
        self.index = index
        self.value = value

################################################################################

def old_search(a, b):
    # Initialize startup variables.
    nodes, index = [], []
    a_size, b_size = len(a), len(b)
    # Begin to slice the sequences.
    for size in range(min(a_size, b_size), 0, -1):
        for a_addr in range(a_size - size + 1):
            # Slice "a" at address and end.
            a_term = a_addr + size
            a_root = a[a_addr:a_term]
            for b_addr in range(b_size - size + 1):
                # Slice "b" at address and end.
                b_term = b_addr + size
                b_root = b[b_addr:b_term]
                # Find out if slices are equal.
                if a_root == b_root:
                    # Create prefix tree to search.
                    a_pref, b_pref = a[:a_addr], b[:b_addr]
                    p_tree = old_search(a_pref, b_pref)
                    # Create suffix tree to search.
                    a_suff, b_suff = a[a_term:], b[b_term:]
                    s_tree = old_search(a_suff, b_suff)
                    # Make completed slice objects.
                    a_slic = Slice(a_pref, a_root, a_suff)
                    b_slic = Slice(b_pref, b_root, b_suff)
                    # Finish the match calculation.
                    value = size + p_tree.value + s_tree.value
                    match = Match(a_slic, b_slic, p_tree, s_tree, value)
                    # Append results to tree lists.
                    nodes.append(match)
                    index.append(value)
        # Return largest matches found.
        if nodes:
            return Tree(nodes, index, max(index))
    # Give caller null tree object.
    return Tree(nodes, index, 0)
################################################################################
def search(memo, a, b):
    # Initialize startup variables.
    nodes, index = [], []
    a_size, b_size = len(a), len(b)
    # Begin to slice the sequences.
    for size in range(min(a_size, b_size), 0, -1):
        for a_addr in range(a_size - size + 1):
            # Slice "a" at address and end.
            a_term = a_addr + size
            a_root = a[a_addr:a_term]
            for b_addr in range(b_size - size + 1):
                # Slice "b" at address and end.
                b_term = b_addr + size
                b_root = b[b_addr:b_term]
                # Find out if slices are equal.
                if a_root == b_root:
                    # Create prefix tree to search.
                    key = a_pref, b_pref = a[:a_addr], b[:b_addr]
                    if key not in memo:
                        memo[key] = search(memo, a_pref, b_pref)
                    p_tree = memo[key]
                    # Create suffix tree to search.
                    key = a_suff, b_suff = a[a_term:], b[b_term:]
                    if key not in memo:
                        memo[key] = search(memo, a_suff, b_suff)
                    s_tree = memo[key]
                    # Make completed slice objects.
                    a_slic = Slice(a_pref, a_root, a_suff)
                    b_slic = Slice(b_pref, b_root, b_suff)
                    # Finish the match calculation.
                    value = size + p_tree.value + s_tree.value
                    match = Match(a_slic, b_slic, p_tree, s_tree, value)
                    # Append results to tree lists.
                    nodes.append(match)
                    index.append(value)
        # Return largest matches found.
        if nodes:
            return Tree(nodes, index, max(index))
    # Give caller null tree object.
    return Tree(nodes, index, 0)
################################################################################
import time

a = tuple(range(50))
b = (48, 11, 5, 22, 28, 31, 14, 18, 7, 29, 49, 44, 47, 36, 25, 27,
     34, 10, 38, 15, 21, 16, 35, 20, 45, 2, 37, 33, 6, 30, 0, 8, 13,
     43, 32, 1, 40, 26, 24, 42, 39, 9, 12, 17, 46, 4, 23, 3, 19, 41)

start = time.perf_counter()  # time.clock() in the original; removed in Python 3.8
old_search(a, b)
stop = time.perf_counter()
print('old_search() =', stop - start)

start = time.perf_counter()
search({}, a, b)
stop = time.perf_counter()
print('search() =', stop - start)
Memoization is just a fancy word for caching. If your calculations are more expensive than pulling the information from the cache, then it is a good thing. The problem is that CPUs are fast and memory is slow. So I have found that using memoization is usually much slower than just redoing the calculation.
Of course there are other techniques available that really do give you significant improvement. If I know that I need f(10) for every iteration of a loop, then I will store that in a variable. Since there is no cache look-up, this is usually a win.
Go ahead and down vote me all you want. That won't change the fact that you need to do real benchmarking and not just blindly start throwing everything in hash tables.
If you know your range of values at compile time, say because you are using n! and n is a 32-bit int, then you will do better to use a static array.
If your range of values is large, say any double, then your hash table can grow so large that it becomes a serious problem.
If the same result is used over and over again in conjunction with a given object, then it may make sense to store that value with the object.
In my case I discovered that over 90% of the time the inputs for any given iteration were the same as for the last iteration. That means I just needed to keep the last input and last result, and only recalculate if the input changed. This was an order of magnitude faster than using memoization for that algorithm.
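A sketch of that keep-only-the-last-result idea (the helper is illustrative, not the poster's actual code):

```python
def cache_last(fn):
    # Remember only the most recent (input, result) pair. When most
    # consecutive calls repeat the previous input, a single comparison
    # beats hashing into an ever-growing memo table.
    last = {'args': None, 'result': None}

    def wrapper(*args):
        if args != last['args']:
            last['args'] = args
            last['result'] = fn(*args)
        return last['result']
    return wrapper

count = 0

def expensive(x):
    global count
    count += 1
    return x * x

fast = cache_last(expensive)
fast(3); fast(3); fast(3)   # one real computation
fast(4)                     # input changed: recompute
print(count)   # 2
```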
|
#2201 02/06/2010 at 21:39
soza971
Re: (3) Conky: Post your conkyrc files or any interesting parts
@zarvox it doesn't recognize my town either, but you can pick a nearby town, you'll get practically the same information
Asus U80V Obuntu 10.04 64bits
#2202 02/06/2010 at 21:48
leben24
Re: (3) Conky: Post your conkyrc files or any interesting parts
Good evening. It's been a little while since I touched my conky, and I'm wondering whether it's still impossible to have the desktop icons shown over the conky? Or rather, to make the conky "transparent" so you can see them behind it, since it seems that, roughly, the icons are not on the same plane as the conky.
Thanks
#2203 02/06/2010 at 21:48
Zarvox
Re: (3) Conky: Post your conkyrc files or any interesting parts
@zarvox it doesn't recognize my town either, but you can pick a nearby town, you'll get practically the same information
The trouble is that there aren't that many towns in Belgium.
So I have to use a script that fetches the info from the accuweather.com site, where my town is listed.
Except that this script is less pretty than conkyforecast, and the statistics provided by the site only cover two days.
At home: PC Core 2 Duo 3.3 GHz - Ubuntu 10.4 and MacBook Pro Core Duo 1.83 GHz dual boot Mac OS X - Ubuntu 10.4.
At work: "noname" PC - Ubuntu 10.4
#2204 02/06/2010 at 21:57
soza971
Re: (3) Conky: Post your conkyrc files or any interesting parts
@soza971
For the calendar, I had this script:
${goto 155}${font Monaco:size=18}${color #0000ff}${execi 1800 date +%^B | cut -c1}${font Monaco:size=10}${color #0F4C5D}${execi 1800 date +%B | cut -c2-}
${goto 105}${color #0F4C5D}${font Monaco:size=9}${execpi 60 DJS=`date +%_d`; cal | sed '/./!d' | sed '1d' | sed 's/$/ /' | fold -w 21 | sed -n '/^.\{21\}/p' | sed 's/^/${goto 105} /' | sed /" $DJS "/s/" $DJS "/" "'${color #FF000C}'"$DJS"'${color #0000ff}'" "/}
You may have to play with the "goto" values.
These days I use a lua calendar, which has the advantage that you don't have to care whether the font is monospaced or not.
I tried several values with the goto but it made no difference; the two lines stay in the same place.
#2205 On 02/06/2010 at 21:59
soza971
soza971 wrote:
@zarvox It doesn't recognize my town either, but you can pick a nearby town; you'll get practically the same information.
The trouble is that there aren't many Belgian towns in the list.
So I have to use a script that fetches the data from accuweather.com, where my town is listed.
Except that script is less pretty than conkyforecast, and the site only provides forecasts for two days.
I understand!
#2206 On 02/06/2010 at 22:47
wlourf
@wlourf I just tested it; it says the disk isn't mounted even when it's plugged in. I just noticed the external disk wasn't listed in the fstab file; could the problem come from that?
I just tried with a USB key and a disk from fstab; it works here with conky 1.8.0, so I can't tell you more!
@leben24
with conky 1.8.0 and
own_window_argb_visual yes
own_window_transparent yes
own_window_type desktop
you'll get true transparency, unless you have compiz; then there are extra settings to tweak, but I don't know them.
Last edited by wlourf (02/06/2010 at 22:50)
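For reference, wlourf's three settings in the context of a minimal conkyrc header; this is a sketch, assuming conky 1.8.0 built with ARGB support, and every line other than the three transparency settings is illustrative:

```
own_window yes
own_window_type desktop
own_window_transparent yes
own_window_argb_visual yes    # true (ARGB) transparency; needs a compositing window manager
alignment top_left
update_interval 1
```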
#2207 On 03/06/2010 at 01:54
soza971
soza971 wrote:
@wlourf I just tested it; it says the disk isn't mounted even when it's plugged in. I just noticed the external disk wasn't listed in the fstab file; could the problem come from that?
I just tried with a USB key and a disk from fstab; it works here with conky 1.8.0, so I can't tell you more!
@leben24
with conky 1.8.0 and own_window_argb_visual yes, own_window_transparent yes, own_window_type desktop
you'll get true transparency, unless you have compiz; then there are extra settings to tweak, but I don't know them.
That's just it: the disk I'm talking about is not listed in fstab.
#2208 On 03/06/2010 at 12:12
wlourf
That's just it: the disk I'm talking about is not listed in fstab.
Yes, my USB key isn't listed in fstab either, and it works (that's what I wrote above).
#2209 On 03/06/2010 at 13:05
Muy_Bien
Isn't mtab where you're supposed to look for that kind of thing?
Windows is an operating system of man by the computer.
Linux is the opposite ... [Brunod]
#2210 On 03/06/2010 at 13:38
Fenouille84
You just call the script from conky and compare its value against a threshold, for example:
${font webdings: size=15}${if_match ${exec le_script_perso} >0}${color1}${else}${color2}${endif}=${font}
You repeat it as many times as needed to get a dotted progress bar (I think)
with le_script_perso:
#!/bin/bash
upd=`stat -c %Y news.xml` ## set the correct path to news.xml
act=`date +%s`
diff=$(($act - $upd))
echo $diff
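Levi59's le_script_perso simply prints how many seconds ago news.xml was last modified. A self-contained sketch of the same idea, where a freshly created temporary file stands in for news.xml purely for illustration:

```shell
#!/bin/bash
# Age-of-file check: seconds elapsed since the last modification.
# A fresh temporary file stands in for news.xml here.
f=$(mktemp)
upd=$(stat -c %Y "$f")   # last-modification time, in epoch seconds
act=$(date +%s)          # current time, in epoch seconds
diff=$((act - upd))
echo "$diff"
rm -f "$f"
```

A file touched just now prints 0 (or 1 if the clock ticks over), and conky's ${if_match} can then compare that number against a threshold.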
Ah OK!! Cool, thanks Levi59!!
I'll test all that later!!
Last edited by Fenouille84 (04/06/2010 at 14:07)
#2211 On 03/06/2010 at 15:09
chepioq
chepioq wrote:
@soza971
For the calendar, I had this script:
${goto 155}${font Monaco:size=18}${color #0000ff}${execi 1800 date +%^B | cut -c1}${font Monaco:size=10}${color #0F4C5D}${execi 1800 date +%B | cut -c2-}
${goto 105}${color #0F4C5D}${font Monaco:size=9}${execpi 60 DJS=`date +%_d`; cal | sed '/./!d' | sed '1d' | sed 's/$/ /' | fold -w 21 | sed -n '/^.\{21\}/p' | sed 's/^/${goto 105} /' | sed /" $DJS "/s/" $DJS "/" "'${color #FF000C}'"$DJS"'${color #0000ff}'" "/}
You may have to play with the "goto" values.
These days I use a lua calendar, which has the advantage that you don't have to care whether the font is monospaced or not.
I tried several values with the goto but it made no difference; the two lines stay in the same place.
That's really odd...
It's the calendar conky I used before switching to the lua one.
I just re-tested it, and it's shifted for me too now, whereas it used to line up fine (but before what?...)
Everything is in everything, and vice versa....
#2212 On 03/06/2010 at 15:22
soza971
soza971 wrote: chepioq wrote:
@soza971
For the calendar, I had this script:
${goto 155}${font Monaco:size=18}${color #0000ff}${execi 1800 date +%^B | cut -c1}${font Monaco:size=10}${color #0F4C5D}${execi 1800 date +%B | cut -c2-}
${goto 105}${color #0F4C5D}${font Monaco:size=9}${execpi 60 DJS=`date +%_d`; cal | sed '/./!d' | sed '1d' | sed 's/$/ /' | fold -w 21 | sed -n '/^.\{21\}/p' | sed 's/^/${goto 105} /' | sed /" $DJS "/s/" $DJS "/" "'${color #FF000C}'"$DJS"'${color #0000ff}'" "/}
You may have to play with the "goto" values.
These days I use a lua calendar, which has the advantage that you don't have to care whether the font is monospaced or not.
I tried several values with the goto but it made no difference; the two lines stay in the same place.
That's really odd...
It's the calendar conky I used before switching to the lua one.
I just re-tested it, and it's shifted for me too now, whereas it used to line up fine (but before what?...)
For me it has always been shifted; reading through the thread I noticed it was aligned for many people, but also that I wasn't the only one in this situation.
#2213 On 03/06/2010 at 15:32
chepioq
Yes, but the trouble is that here it used to be properly aligned, and now it isn't...
The best proof is here, in a post I put on fedora.fr:
http://forums.fedora-fr.org/viewtopic.php?pid=403940
You can see everything lines up. I just re-tested that conky and the calendar is shifted... (I even changed my computer's date to November 2009 and it's the same, shifted).
What could have changed?
#2214 On 03/06/2010 at 17:18
soza971
Yes, but the trouble is that here it used to be properly aligned, and now it isn't...
The best proof is here, in a post I put on fedora.fr:
http://forums.fedora-fr.org/viewtopic.php?pid=403940
You can see everything lines up. I just re-tested that conky and the calendar is shifted... (I even changed my computer's date to November 2009 and it's the same, shifted).
What could have changed?
I have no idea; hopefully another user will have the answer.
#2215 On 03/06/2010 at 19:36
chepioq
@soza971
Well, I found what was wrong on my machine (and I'm a bit ashamed to admit it... :rolleyes: )
I had just installed Fedora 13 and completely forgot to install the Monaco font (I know, very silly...)
I installed it and now the calendar conky lines up properly.
So check that Monaco is indeed installed on your machine (at least for your user...)
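To check a font the way chepioq suggests, fontconfig's fc-list is the usual tool. A sketch of the check; the font names in the variable are made up to stand in for real `fc-list` output, so the pipeline stays self-contained:

```shell
#!/bin/bash
# Simulated fc-list output; on a real system use:  fc-list | grep -qi monaco
fonts="DejaVu Sans
Monaco
Liberation Mono"
if printf '%s\n' "$fonts" | grep -qi "monaco"; then
    echo "Monaco available"
else
    echo "Monaco missing"
fi
```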
#2216 On 03/06/2010 at 20:30
Fenouille84
@ Levi59
It's fine, your script works well ^^
I'll polish all that this weekend and come back to post the result.
Thanks
Last edited by Fenouille84 (03/06/2010 at 22:22)
#2217 On 03/06/2010 at 20:37
chepioq
@ Chepioq
It's fine, your script works well ^^
I'll polish all that this weekend and come back to post the result.
Thanks
Which script are you talking about?
#2218 On 03/06/2010 at 21:53
soza971
@soza971
Well, I found what was wrong on my machine (and I'm a bit ashamed to admit it... :rolleyes: )
I had just installed Fedora 13 and completely forgot to install the Monaco font (I know, very silly...)
I installed it and now the calendar conky lines up properly.
So check that Monaco is indeed installed on your machine (at least for your user...)
Thanks to you! It was indeed a font problem; I installed it and everything lines up now.
#2219 On 03/06/2010 at 22:21
Fenouille84
#2220 On 04/06/2010 at 09:23
chepioq
chepioq wrote:
Which script are you talking about?
Oops!! It was Levi59's script (I've corrected my previous post)
Good, credit where credit is due: it belongs to Levi59...
Thanks on his behalf.
#2221 On 04/06/2010 at 14:26
Levi59
Fenouille84 wrote: chepioq wrote:
Which script are you talking about?
Oops!! It was Levi59's script (I've corrected my previous post)
Good, credit where credit is due: it belongs to Levi59...
Thanks on his behalf.
Levi is happy!
PS: I'm not sure, but you may need to add an "exit 0" at the end of the script...
I can't remember whether it's necessary.
#2222 On 05/06/2010 at 12:00
Fenouille84
Here is my desktop, dedicated to the film Tron Legacy [ad] in theaters December 15 [/ad]
Screenshot:
conkyrc1 (top bar)
#Conky behavior
total_run_times 0 #Time in seconds; 0 = always running
background yes #Run conky in the background; no = for testing
#System settings
cpu_avg_samples 1 #Number of samples for the average CPU usage
net_avg_samples 2 #Number of samples for the average network usage
#Memory
double_buffer yes #Avoid flickering
no_buffers yes #Subtract buffers from used memory
text_buffer_size 1024 #Text cache size
#Display
out_to_console no #Print the text to standard output
update_interval 1 #Window refresh rate (s)
#Conky window
alignment top_left #Alignment
#---
minimum_size 1380 10 #Minimum size (px); width / height
maximum_width 1380 #Maximum width (px)
#---
gap_x 30 #Gap from the left/right edge
gap_y 3 #Gap from the top/bottom edge
#---
draw_shades no #Draw shades
draw_outline no #Draw window outlines
draw_borders no #Draw borders around text blocks
border_width 1 #Border width
border_inner_margin 1 #Margin width
#---
own_window yes #Use its own window
own_window_type override #Window type; normal / override / desktop
own_window_transparent yes #Pseudo-transparency
#Formatting
use_xft yes #Use Xft (antialiased fonts, etc.)
xftalpha .1 #Xft alpha
override_utf8_locale yes #Force UTF-8
uppercase no #All text in uppercase
use_spacer left #Add spaces after some objects (fixed-width fonts only)
#---
xftfont saxMono:size=9 #Default font
#---
stippled_borders 5 #Stipple size
#Colors
default_color FFFFFF #Default color
default_shade_color 333333 #Shade color
default_outline_color black #Outline color
#---
color1 738599 #Dark Blue
color2 5590bb #Light Blue
#---
short_units yes #Short units
pad_percents 2 #Pad percentages to 2 decimals
TEXT
${color1}Uptime:${uptime}${goto 152}Core1:${color2}${cpu cpu1}% ${color1}-${color2}${platform coretemp.0 temp 1}°C${color1}${goto 294}RAM :${memperc}%${goto 380}Qlty:${if_existing /proc/net/route wlan0}${offset -21}${color2}${wireless_link_qual_perc wlan0}%${color1}${else}--${endif}${goto 480}Send:${totalup wlan0}${goto 580}/root:${offset -7}${color2}${hddtemp /dev/sda}° ${color1}${fs_bar 5,50 /} ${fs_free /}${goto 772}/Lexar:${if_mounted /media/LEXAR}${fs_used_perc /media/LEXAR}% ${fs_bar 5,50 /media/LEXAR} ${fs_free /media/LEXAR}${else} -- unplugged --${endif}${goto 971}/LaCie:${if_mounted /media/LaCie_PC}${fs_used_perc /media/LaCie_PC}% ${fs_bar 5,50 /media/LaCie_PC} ${fs_free /media/LaCie_PC}${else} -- unplugged --${endif}${goto 1170}Vers:${exec cat /etc/lsb-release | grep -i "rele" | cut -d "=" -f2}${goto 1277}Volume:${color2}${exec amixer get PCM | sed '$!d' | cut -d "[" -f2 | cut -d "]" -f1}
${color1}Kernel:${exec uname -r | cut -c 1-9}${goto 152}Core2:${color2}${cpu cpu2}% ${color1}-${color2}${platform coretemp.1 temp 1}°C${color1}${goto 294}Swap:${swapperc}%${goto 380}Type:${if_existing /proc/net/route wlan0}${gw_iface}${else}--${endif}${goto 480}Down:${totaldown wlan0}${goto 580}/home:${fs_used_perc /home}% ${fs_bar 5,50 /home} ${fs_free /home}${goto 772}/Linux:${if_mounted /media/Linux}${fs_used_perc /media/Linux}% ${fs_bar 5,50 /media/Linux} ${fs_free /media/Linux}${else} -- unplugged --${endif}${goto 971}/ZMath:${if_mounted /media/ZMATH}${fs_used_perc /media/ZMATH}% ${fs_bar 5,50 /media/ZMATH} ${fs_free /media/ZMATH}${else} -- unplugged --${endif}${goto 1170}Code:${exec cat /etc/lsb-release | grep -i "code" | cut -d "=" -f2 | sed 's/^.\| [a-z]/\U&/g'}${goto 1277}Bureau:${desktop_name}
conkyrc2 (calendar & co.)
#Conky behavior
total_run_times 0 #Time in seconds; 0 = always running
background yes #Run conky in the background; no = for testing
#System settings
cpu_avg_samples 1 #Number of samples for the average CPU usage
net_avg_samples 2 #Number of samples for the average network usage
#Memory
double_buffer yes #Avoid flickering
no_buffers yes #Subtract buffers from used memory
text_buffer_size 1024 #Text cache size
#Display
out_to_console no #Print the text to standard output
update_interval 1 #Window refresh rate (s)
#Conky window
alignment top_left #Alignment
#---
minimum_size 815 10 #Minimum size (px); width / height
maximum_width 815 #Maximum width (px)
#---
gap_x 577 #Gap from the left/right edge
gap_y 40 #Gap from the top/bottom edge
#---
draw_shades no #Draw shades
draw_outline no #Draw window outlines
draw_borders no #Draw borders around text blocks
border_width 1 #Border width
border_inner_margin 1 #Margin width
#---
own_window yes #Use its own window
own_window_type override #Window type; normal / override / desktop
own_window_transparent yes #Pseudo-transparency
#Formatting
use_xft yes #Use Xft (antialiased fonts, etc.)
xftalpha .1 #Xft alpha
override_utf8_locale yes #Force UTF-8
uppercase no #All text in uppercase
use_spacer no #Add spaces after some objects (fixed-width fonts only)
#---
xftfont saxMono:size=9 #Default font
#---
stippled_borders 5 #Stipple size
#Colors
default_color FFFFFF #Default color
default_shade_color 333333 #Shade color
default_outline_color black #Outline color
#---
color1 738599 #Dark Blue
color2 5590bb #Light Blue
#---
short_units yes #Short units
pad_percents 2 #Pad percentages to 2 decimals
TEXT
${color1}${font saxMono:size=26}${exec date +%d}${font}${voffset -14}${exec date +'%m/%y'}
${offset 42}${exec date +'%H:%M'}
${voffset -26}${execp cal | sed -e "1d ; s/^/\${offset 120} /g ; s/$/ /g" | sed 's/'" $(date +%e) "'/${color2}'" $(date +%e) "'${color1}/1'}
${voffset -91}${offset 320}ToDoList${hr}
${execp cat $HOME/Ubuntu/ToDo/ToDo | sed -e "s/^/\${offset 320} /g ; /#/d"}
${offset 320}Planning${hr}
${execp cat $HOME/Ubuntu/ToDo/Cal | sed -e "s/^/\${offset 320} /g ; /#/d"}
conky flux_rss (not shown in the screenshot) => it pops up in a window via a keyboard shortcut.
#Conky behavior
total_run_times 0 #Time in seconds; 0 = always running
background yes #Run conky in the background; no = for testing
#System settings
cpu_avg_samples 1 #Number of samples for the average CPU usage
net_avg_samples 2 #Number of samples for the average network usage
#Memory
double_buffer yes #Avoid flickering
no_buffers yes #Subtract buffers from used memory
text_buffer_size 1024 #Text cache size
#Display
out_to_console no #Print the text to standard output
update_interval 1 #Window refresh rate (s)
#Conky window
alignment top_left #Alignment
#---
minimum_size 400 10 #Minimum size (px); width / height
maximum_width 400 #Maximum width (px)
#---
gap_x 30 #Gap from the left/right edge
gap_y 50 #Gap from the top/bottom edge
#---
draw_shades no #Draw shades
draw_outline no #Draw window outlines
draw_borders no #Draw borders around text blocks
border_width 1 #Border width
border_inner_margin 1 #Margin width
#---
own_window yes #Use its own window
own_window_type normal #Window type; normal / override / desktop
own_window_transparent no #Pseudo-transparency
#Formatting
use_xft yes #Use Xft (antialiased fonts, etc.)
xftalpha .1 #Xft alpha
override_utf8_locale yes #Force UTF-8
uppercase no #All text in uppercase
use_spacer no #Add spaces after some objects (fixed-width fonts only)
#---
xftfont saxMono:size=9 #Default font
#---
stippled_borders 5 #Stipple size
#Colors
default_color FFFFFF #Default color
default_shade_color 333333 #Shade color
default_outline_color black #Outline color
#---
color1 738599 #Dark Blue
color2 5590bb #Light Blue
color3 000000 #Black
#---
TEXT
${color1}${font dearJoe 5 CASUAL trial:italic:pixelsize=22}BBC News ${offset -3}${voffset 5}${hr}${font}
${execp cat $HOME/Flux/BBC_News/news_conky | sed 's/^/ /g'}
${color1}${font dearJoe 5 CASUAL trial:italic:pixelsize=22}BuzzWord ${offset -3}${voffset 5}${hr}${font}
${execp cat $HOME/Flux/MacMillan/rss_conky | fold -sw 55 | sed 's/^/ /g'}
${color1}${font dearJoe 5 CASUAL trial:italic:pixelsize=22}Vie De Merde ${offset -3}${voffset 5}${hr}${font}
${execpi 60 $HOME/conky/Script/VDM/vdm.sh | fold -sw 55 | sed 's/^/ /g'}
${alignc}${color2}Secondes restantes avant MAJ
${alignc}${if_match ${exec $HOME/Script/CCD} >60}${color3}${else}${color1}${endif}10 ${if_match ${exec $HOME/Script/CCD} >50}${color3}${else}${color1}${endif}20 ${if_match ${exec $HOME/Script/CCD} >40}${color3}${else}${color1}${endif}30 ${if_match ${exec $HOME/Script/CCD} >30}${color3}${else}${color1}${endif}40 ${if_match ${exec $HOME/Script/CCD} >20}${color3}${else}${color1}${endif}50 ${if_match ${exec $HOME/Script/CCD} >10}${color3}${else}${color1}${endif}60
The corresponding scripts:
* bbc_news
#!/bin/bash
# Version 1.0
# Script to fetch the RSS feed of the BBC News - Words in the News site
#Set the right directory
DOSS="$HOME/Flux/BBC_News"
#Move into the save directory (create it if it doesn't exist)
[ -d "$DOSS" ] || mkdir "$DOSS"
cd "$DOSS"
#Delete the old file
find . -type f -iname "*.xml*" -exec rm {} \;
#Connection check
verif=$(wc -l < /proc/net/route)
if [ "$verif" = 0 ]
then
exit 0 #no route, nothing to do ("break" is only valid inside a loop)
else
#Download the feed
wget http://www.bbc.co.uk/worldservice/learningenglish/language/wordsinthenews/index.xml
#Keep only the article titles
sed -i -n '/title/p' index.xml
#Formatting and save
cat index.xml | sed '/archive/d' | cut -d ">" -f2 | cut -d "<" -f1 | sed '1d' > $HOME/Flux/BBC_News/news_conky
#Housekeeping if there are too many news items
line=$(cat $HOME/Flux/BBC_News/news_conky | wc -l)
if [ "$line" -ge 11 ]
then sed -i '1,10 !d' $HOME/Flux/BBC_News/news_conky
fi
fi
exit 0
* macmillan
#!/bin/bash
# Version 1.0
# Script to fetch the "BuzzWord" RSS feed - MacMillan Dictionary site
#Define the directory where the feed is saved
DOSS="$HOME/Flux/MacMillan"
#Move into the save directory (create it if it doesn't exist)
[ -d "$DOSS" ] || mkdir "$DOSS"
cd "$DOSS"
#Clean up useless files (feeds and backups more than 5 days old)
find . -type f -iname "*.xml*" -exec rm {} \;
#Connection check
verif=$(wc -l < /proc/net/route)
if [ "$verif" = 0 ]
then
exit 0 #no route, nothing to do ("break" is only valid inside a loop)
else
#Download the feed
wget http://www.macmillandictionary.com/buzzword/rss.xml
#Formatting + save to a file
sed -i -e "s/^ *//g ; 1,/entry/d ; /link/,/updated/d ; /entry/d ; /feed/d ; s/<title>/ /g ; s/<\/title>/ :/g ; s/<summary>//g ; s/<\/summary>/\./g ; s/.$//g" rss.xml
tr '\n' ' ' < rss.xml > rss_conky
sed -i -e "s/ /\n /g ; s/ //g ; s/\./\.\n/g" rss_conky
fi
exit 0
* vdm.sh
#!/bin/bash
#Version 1.0 ################################################################################################
#
# Random VDM fortunes
#
# By Tite-Live ~ p.tite.live@gmail.com
# Requires html2text.py to work
# http://www.aaronsw.com/2002/html2text/html2text.py [Author: Aaron Swartz (http://www.aaronsw.com/)]
#
# Adapted by Fenouille84
#
#############################################################################################################
#Connection check
verif=$(wc -l < /proc/net/route)
if [ "$verif" = 0 ]
then
: #no connection: fall through and show the cached file ("break" is only valid inside a loop)
else
VDM_Source1=$(wget 'http://api.viedemerde.fr/1.2/view/random?key=readonly' -O- -q)
VDM_Source2=$(wget 'http://api.viedemerde.fr/1.2/view/random?key=readonly' -O- -q)
VDM_Texte1=$(expr match "$VDM_Source1" '.*<texte>\(.*\)<\/texte>' | python $HOME/conky/Script/VDM/html2text.py | sed 's/"/"/g' | tr '\n' ' ')
VDM_Texte2=$(expr match "$VDM_Source2" '.*<texte>\(.*\)<\/texte>' | python $HOME/conky/Script/VDM/html2text.py | sed 's/"/"/g' | tr '\n' ' ')
echo -e "$VDM_Texte1\n\n$VDM_Texte2" > $HOME/Flux/VDM/VDM_conky
fi
cat $HOME/Flux/VDM/VDM_conky
exit 0
* html2text.py (to be placed alongside vdm.sh)
#!/usr/bin/env python
"""html2text: Turn HTML into equivalent Markdown-structured text."""
__version__ = "2.38"
__author__ = "Aaron Swartz (me@aaronsw.com)"
__copyright__ = "(C) 2004-2008 Aaron Swartz. GNU GPL 3."
__contributors__ = ["Martin 'Joey' Schulze", "Ricardo Reyes", "Kevin Jay North"]
# TODO:
# Support decoded entities with unifiable.
if not hasattr(__builtins__, 'True'): True, False = 1, 0
import re, sys, urllib, htmlentitydefs, codecs, StringIO, types
import sgmllib
import urlparse
sgmllib.charref = re.compile('&#([xX]?[0-9a-fA-F]+)[^0-9a-fA-F]')
try: from textwrap import wrap
except: pass
# Use Unicode characters instead of their ascii pseudo-replacements
UNICODE_SNOB = 0
# Put the links after each paragraph instead of at the end.
LINKS_EACH_PARAGRAPH = 0
# Wrap long lines at position. 0 for no wrapping. (Requires Python 2.3.)
BODY_WIDTH = 78
# Don't show internal links (href="#local-anchor") -- corresponding link targets won't be visible in the plain text file anyway.
SKIP_INTERNAL_LINKS = False
### Entity Nonsense ###
def name2cp(k):
if k == 'apos': return ord("'")
if hasattr(htmlentitydefs, "name2codepoint"): # requires Python 2.3
return htmlentitydefs.name2codepoint[k]
else:
k = htmlentitydefs.entitydefs[k]
if k.startswith("&#") and k.endswith(";"): return int(k[2:-1]) # not in latin-1
return ord(codecs.latin_1_decode(k)[0])
unifiable = {'rsquo':"'", 'lsquo':"'", 'rdquo':'"', 'ldquo':'"',
'copy':'(C)', 'mdash':'--', 'nbsp':' ', 'rarr':'->', 'larr':'<-', 'middot':'*',
'ndash':'-', 'oelig':'oe', 'aelig':'ae',
'agrave':'a', 'aacute':'a', 'acirc':'a', 'atilde':'a', 'auml':'a', 'aring':'a',
'egrave':'e', 'eacute':'e', 'ecirc':'e', 'euml':'e',
'igrave':'i', 'iacute':'i', 'icirc':'i', 'iuml':'i',
'ograve':'o', 'oacute':'o', 'ocirc':'o', 'otilde':'o', 'ouml':'o',
'ugrave':'u', 'uacute':'u', 'ucirc':'u', 'uuml':'u'}
unifiable_n = {}
for k in unifiable.keys():
unifiable_n[name2cp(k)] = unifiable[k]
def charref(name):
if name[0] in ['x','X']:
c = int(name[1:], 16)
else:
c = int(name)
if not UNICODE_SNOB and c in unifiable_n.keys():
return unifiable_n[c]
else:
return unichr(c)
def entityref(c):
if not UNICODE_SNOB and c in unifiable.keys():
return unifiable[c]
else:
try: name2cp(c)
except KeyError: return "&" + c
else: return unichr(name2cp(c))
def replaceEntities(s):
s = s.group(1)
if s[0] == "#":
return charref(s[1:])
else: return entityref(s)
r_unescape = re.compile(r"&(#?[xX]?(?:[0-9a-fA-F]+|\w{1,8}));")
def unescape(s):
return r_unescape.sub(replaceEntities, s)
def fixattrs(attrs):
# Fix bug in sgmllib.py
if not attrs: return attrs
newattrs = []
for attr in attrs:
newattrs.append((attr[0], unescape(attr[1])))
return newattrs
### End Entity Nonsense ###
def onlywhite(line):
"""Return true if the line does only consist of whitespace characters."""
for c in line:
if c is not ' ' and c is not ' ':
return c is ' '
return line
def optwrap(text):
"""Wrap all paragraphs in the provided text."""
if not BODY_WIDTH:
return text
assert wrap, "Requires Python 2.3."
result = ''
newlines = 0
for para in text.split("\n"):
if len(para) > 0:
if para[0] is not ' ' and para[0] is not '-' and para[0] is not '*':
for line in wrap(para, BODY_WIDTH):
result += line + "\n"
result += "\n"
newlines = 2
else:
if not onlywhite(para):
result += para + "\n"
newlines = 1
else:
if newlines < 2:
result += "\n"
newlines += 1
return result
def hn(tag):
if tag[0] == 'h' and len(tag) == 2:
try:
n = int(tag[1])
if n in range(1, 10): return n
except ValueError: return 0
class _html2text(sgmllib.SGMLParser):
def __init__(self, out=None, baseurl=''):
sgmllib.SGMLParser.__init__(self)
if out is None: self.out = self.outtextf
else: self.out = out
self.outtext = u''
self.quiet = 0
self.p_p = 0
self.outcount = 0
self.start = 1
self.space = 0
self.a = []
self.astack = []
self.acount = 0
self.list = []
self.blockquote = 0
self.pre = 0
self.startpre = 0
self.lastWasNL = 0
self.abbr_title = None # current abbreviation definition
self.abbr_data = None # last inner HTML (for abbr being defined)
self.abbr_list = {} # stack of abbreviations to write later
self.baseurl = baseurl
def outtextf(self, s):
self.outtext += s
def close(self):
sgmllib.SGMLParser.close(self)
self.pbr()
self.o('', 0, 'end')
return self.outtext
def handle_charref(self, c):
self.o(charref(c))
def handle_entityref(self, c):
self.o(entityref(c))
def unknown_starttag(self, tag, attrs):
self.handle_tag(tag, attrs, 1)
def unknown_endtag(self, tag):
self.handle_tag(tag, None, 0)
def previousIndex(self, attrs):
""" returns the index of certain set of attributes (of a link) in the
self.a list
If the set of attributes is not found, returns None
"""
if not attrs.has_key('href'): return None
i = -1
for a in self.a:
i += 1
match = 0
if a.has_key('href') and a['href'] == attrs['href']:
if a.has_key('title') or attrs.has_key('title'):
if (a.has_key('title') and attrs.has_key('title') and
a['title'] == attrs['title']):
match = True
else:
match = True
if match: return i
def handle_tag(self, tag, attrs, start):
attrs = fixattrs(attrs)
if hn(tag):
self.p()
if start: self.o(hn(tag)*"#" + ' ')
if tag in ['p', 'div']: self.p()
if tag == "br" and start: self.o(" \n")
if tag == "hr" and start:
self.p()
self.o("* * *")
self.p()
if tag in ["head", "style", 'script']:
if start: self.quiet += 1
else: self.quiet -= 1
if tag in ["body"]:
self.quiet = 0 # sites like 9rules.com never close <head>
if tag == "blockquote":
if start:
self.p(); self.o('> ', 0, 1); self.start = 1
self.blockquote += 1
else:
self.blockquote -= 1
self.p()
if tag in ['em', 'i', 'u']: self.o("_")
if tag in ['strong', 'b']: self.o("**")
if tag == "code" and not self.pre: self.o('`') #TODO: `` `this` ``
if tag == "abbr":
if start:
attrsD = {}
for (x, y) in attrs: attrsD[x] = y
attrs = attrsD
self.abbr_title = None
self.abbr_data = ''
if attrs.has_key('title'):
self.abbr_title = attrs['title']
else:
if self.abbr_title != None:
self.abbr_list[self.abbr_data] = self.abbr_title
self.abbr_title = None
self.abbr_data = ''
if tag == "a":
if start:
attrsD = {}
for (x, y) in attrs: attrsD[x] = y
attrs = attrsD
if attrs.has_key('href') and not (SKIP_INTERNAL_LINKS and attrs['href'].startswith('#')):
self.astack.append(attrs)
self.o("[")
else:
self.astack.append(None)
else:
if self.astack:
a = self.astack.pop()
if a:
i = self.previousIndex(a)
if i is not None:
a = self.a[i]
else:
self.acount += 1
a['count'] = self.acount
a['outcount'] = self.outcount
self.a.append(a)
self.o("][" + `a['count']` + "]")
if tag == "img" and start:
attrsD = {}
for (x, y) in attrs: attrsD[x] = y
attrs = attrsD
if attrs.has_key('src'):
attrs['href'] = attrs['src']
alt = attrs.get('alt', '')
i = self.previousIndex(attrs)
if i is not None:
attrs = self.a[i]
else:
self.acount += 1
attrs['count'] = self.acount
attrs['outcount'] = self.outcount
self.a.append(attrs)
self.o("![")
self.o(alt)
self.o("]["+`attrs['count']`+"]")
if tag == 'dl' and start: self.p()
if tag == 'dt' and not start: self.pbr()
if tag == 'dd' and start: self.o(' ')
if tag == 'dd' and not start: self.pbr()
if tag in ["ol", "ul"]:
if start:
self.list.append({'name':tag, 'num':0})
else:
if self.list: self.list.pop()
self.p()
if tag == 'li':
if start:
self.pbr()
if self.list: li = self.list[-1]
else: li = {'name':'ul', 'num':0}
self.o(" "*len(self.list)) #TODO: line up <ol><li>s > 9 correctly.
if li['name'] == "ul": self.o("* ")
elif li['name'] == "ol":
li['num'] += 1
self.o(`li['num']`+". ")
self.start = 1
else:
self.pbr()
if tag in ["table", "tr"] and start: self.p()
if tag == 'td': self.pbr()
if tag == "pre":
if start:
                self.startpre = 1
                self.pre = 1
            else:
                self.pre = 0
            self.p()

    def pbr(self):
        if self.p_p == 0: self.p_p = 1

    def p(self): self.p_p = 2

    def o(self, data, puredata=0, force=0):
        if self.abbr_data is not None: self.abbr_data += data
        if not self.quiet:
            if puredata and not self.pre:
                data = re.sub('\s+', ' ', data)
                if data and data[0] == ' ':
                    self.space = 1
                    data = data[1:]
            if not data and not force: return

            if self.startpre:
                #self.out(" :") #TODO: not output when already one there
                self.startpre = 0

            bq = (">" * self.blockquote)
            if not (force and data and data[0] == ">") and self.blockquote: bq += " "

            if self.pre:
                bq += "    "
                data = data.replace("\n", "\n"+bq)

            if self.start:
                self.space = 0
                self.p_p = 0
                self.start = 0

            if force == 'end':
                # It's the end.
                self.p_p = 0
                self.out("\n")
                self.space = 0

            if self.p_p:
                self.out(('\n'+bq)*self.p_p)
                self.space = 0

            if self.space:
                if not self.lastWasNL: self.out(' ')
                self.space = 0

            if self.a and ((self.p_p == 2 and LINKS_EACH_PARAGRAPH) or force == "end"):
                if force == "end": self.out("\n")

                newa = []
                for link in self.a:
                    if self.outcount > link['outcount']:
                        self.out("   ["+`link['count']`+"]: " + urlparse.urljoin(self.baseurl, link['href']))
                        if link.has_key('title'): self.out(" ("+link['title']+")")
                        self.out("\n")
                    else:
                        newa.append(link)

                if self.a != newa: self.out("\n") # Don't need an extra line when nothing was done.

                self.a = newa

            if self.abbr_list and force == "end":
                for abbr, definition in self.abbr_list.items():
                    self.out("  *[" + abbr + "]: " + definition + "\n")

            self.p_p = 0
            self.out(data)
            self.lastWasNL = data and data[-1] == '\n'
            self.outcount += 1

    def handle_data(self, data):
        if r'\/script>' in data: self.quiet -= 1
        self.o(data, 1)

    def unknown_decl(self, data): pass

def wrapwrite(text): sys.stdout.write(text.encode('utf8'))

def html2text_file(html, out=wrapwrite, baseurl=''):
    h = _html2text(out, baseurl)
    h.feed(html)
    h.feed("")
    return h.close()

def html2text(html, baseurl=''):
    return optwrap(html2text_file(html, None, baseurl))

if __name__ == "__main__":
    baseurl = ''
    if sys.argv[1:]:
        arg = sys.argv[1]
        if arg.startswith('http://'):
            baseurl = arg
            j = urllib.urlopen(baseurl)
            try:
                from feedparser import _getCharacterEncoding as enc
            except ImportError:
                enc = lambda x, y: ('utf-8', 1)
            text = j.read()
            encoding = enc(j.headers, text)[0]
            if encoding == 'us-ascii': encoding = 'utf-8'
            data = text.decode(encoding)
        else:
            encoding = 'utf8'
            if len(sys.argv) > 2:
                encoding = sys.argv[2]
            data = open(arg, 'r').read().decode(encoding)
    else:
        data = sys.stdin.read().decode('utf8')
    wrapwrite(html2text(data, baseurl))
* CCD (Conky CountDown) => Thanks to Levi59
#!/bin/bash
# Conky CountDown - countdown script
upd=$(stat -c %Y "$HOME/Flux/VDM/VDM_conky")
act=$(date +%s)
diff=$(($act-$upd))
echo $diff
exit 0
And there you go!! Sorry for the indigestible length of this post!!
Still, if some info is missing, don't hesitate to ask
Last edited by Fenouille84 (05/06/2010, 12:04)
Offline
#2223 05/06/2010, 15:58
soza971
Re : (3) Conky : Post your conkyrc files or any interesting parts of them
@Fenouille84 hello, is there a way to insert this line into the conky:
${if_mounted /media/LEXAR}${fs_used_perc /media/LEXAR}% ${fs_bar 5,50 /media/LEXAR} ${fs_free /media/LEXAR}${else} -- unplugged --${endif}
Bearing in mind that the external disk is not in the fstab and mounts automatically without going through fstab
Asus U80V Ubuntu 10.04 64-bit
Offline
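For what it's worth, conky's `${if_mounted ...}` checks the kernel's live mount table (what mtab and /proc/mounts reflect), not the fstab, so an auto-mounted disk should still be detected. A small Python sketch of the same check (the device and paths are illustrative; parsing is separated from I/O so it can be tried on any sample text):

```python
def is_mounted(mount_point, mounts_text):
    # mounts_text is the content of /proc/mounts (one mount per line:
    # "<device> <mount point> <fstype> <options> <dump> <pass>").
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == mount_point:
            return True
    return False

# On a live Linux system you would read the real table:
# with open('/proc/mounts') as f:
#     print(is_mounted('/media/LEXAR', f.read()))
```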
#2224 05/06/2010, 16:09
Fenouille84
Re : (3) Conky : Post your conkyrc files or any interesting parts of them
You have a Lexar USB key?
I don't really see what mtab, fstab & co. correspond to...
Edit:
My USB keys don't show up in the fstab, but rather in the mtab.
I don't know if that helps you...
Last edited by Fenouille84 (05/06/2010, 16:12)
Offline
#2225 05/06/2010, 16:22
soza971
Re : (3) Conky : Post your conkyrc files or any interesting parts of them
Of course I will change the path and the disk name; it's for a Western Digital My Passport drive
Asus U80V Ubuntu 10.04 64-bit
Offline
|
I'm trying to make a custom search form using django-haystack; I just modified this from Haystack's documentation:
forms.py
from django import forms
from haystack.forms import SearchForm

class DateRangeSearchForm(SearchForm):
    start_date = forms.DateField(required=False)
    end_date = forms.DateField(required=False)

    def search(self):
        # First, store the SearchQuerySet received from other processing.
        sqs = super(DateRangeSearchForm, self).search()

        # Check to see if a start_date was chosen.
        if self.cleaned_data['start_date']:
            sqs = sqs.filter(pub_date__gte=self.cleaned_data['start_date'])

        # Check to see if an end_date was chosen.
        if self.cleaned_data['end_date']:
            sqs = sqs.filter(pub_date__lte=self.cleaned_data['end_date'])

        return sqs
to :
from django import forms
from haystack.forms import HighlightedModelSearchForm

class CustomSearchForm(HighlightedModelSearchForm):
    title = forms.CharField(max_length=100, required=False)
    content = forms.CharField(max_length=100, required=False)
    date_added = forms.DateField(required=False)
    post_by = forms.CharField(max_length=100, required=False)

    def search(self):
        sqs = super(CustomSearchForm, self).search()
        if self.cleaned_data['post_by']:
            sqs = sqs.filter(content=self.cleaned_data['post_by'])
        if self.cleaned_data['title']:
            sqs = sqs.filter(content=self.cleaned_data['title'])
        if self.cleaned_data['content']:
            sqs = sqs.filter(content=self.cleaned_data['content'])
        if self.cleaned_data['date_added']:
            sqs = sqs.filter(content=self.cleaned_data['date_added'])
        return sqs
haystack urls.py:
urlpatterns = patterns('haystack.views',
    url(r'^$', search_view_factory(view_class=SearchView, form_class=CustomSearchForm), name='haystack_search'),
)
When I go to the URL, it says: AttributeError at /search/
'CustomSearchForm' object has no attribute 'cleaned_data'
Can you guys help me? Thanks!
I then tried commenting out the search method, but when I submit a word into a custom field the result is always empty; only when I submit a word into a non-custom field does it give me the result I want. I've been trying to understand this all day long, please help.
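The usual cause of that AttributeError is that `cleaned_data` only exists after `is_valid()` has run, and Haystack's stock `SearchForm.search()` guards against that case before filtering. A commonly suggested fix is the same guard at the top of your `search()`. A standalone sketch of the pattern (no Django or Haystack needed here: `FormStub` stands in for the real form base class, whose `is_valid()` is what creates `cleaned_data`; in the real form you would `return self.no_query_found()` instead of an empty list):

```python
# FormStub mimics the relevant part of a Django form.
class FormStub(object):
    def __init__(self, data):
        self.data = data                 # raw request data only

    def is_valid(self):
        # Real Django forms populate cleaned_data during validation.
        self.cleaned_data = dict(self.data)
        return True

class CustomSearchForm(FormStub):
    def search(self):
        # The fix: validate before touching cleaned_data.
        if not self.is_valid():
            return []
        results = ['<all results>']
        if self.cleaned_data.get('title'):
            results.append('filter title=%s' % self.cleaned_data['title'])
        return results
```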
|
Readers should notice that the key= method:
ut.sort(key=lambda x: x.count, reverse=True)
is many times faster than adding rich comparison operators to the objects. I was surprised to read this (page 485 of "Python in a Nutshell"). You can confirm this by running tests on this little program:
#!/usr/bin/env python
import random

class C:
    def __init__(self, count):
        self.count = count

    def __cmp__(self, other):
        return cmp(self.count, other.count)

longList = [C(random.random()) for i in xrange(1000000)]  # about 6.1 secs
longList2 = longList[:]
longList.sort()                         # about 52 - 6.1 = 46 secs
longList2.sort(key=lambda c: c.count)   # about 9 - 6.1 = 3 secs
My very minimal tests show the first sort is more than 10 times slower, but the book says it is only about 5 times slower in general. The reason, they say, is the highly optimized sort algorithm used in Python (timsort).
Still, it's very odd that .sort(key=...) is faster than plain old .sort(). I hope they fix that.
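A plausible explanation for the gap is call counts: with key= the key function is invoked once per element (n calls) and the subsequent comparisons are fast C-level float comparisons, while the plain sort calls the Python-level comparison method O(n log n) times. A Python 3 sketch of the same experiment (timings omitted since they vary by machine; `__lt__` stands in for the removed `__cmp__`):

```python
import random

class C(object):
    def __init__(self, count):
        self.count = count

    def __lt__(self, other):            # Python 3 stand-in for __cmp__
        return self.count < other.count

items = [C(random.random()) for _ in range(10000)]

by_key = sorted(items, key=lambda c: c.count)   # key called once per element
by_cmp = sorted(items)                          # __lt__ called O(n log n) times
```

Wrapping both calls in `timeit` reproduces the book's observation; the orderings themselves are identical.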
|
Composite Manager Retained Drawing Protocol RFC
Robert Carr
02/28/07
Outline and justification:
Experience from the development of 'first generation' mainstream composite window managers has highlighted the need to reconsider several aspects of how applications interact and communicate with the composite manager. Currently, code can interact with a composite manager in one of several fashions:
Composite managers can expose a plugin interface enabling libraries functioning as plugins to implement new functionality for a composite manager. This is effective for many of the traditional window manager functionalities, but has several weaknesses. Slow code in the composite manager will make an entire system feel sluggish, and under existing and easily foreseeable plugin architectures, plugins such as the Thumbnail plugin of Beryl and Compiz waste an excessive amount of CPU even while not doing anything interesting. Furthermore, applications requiring a great deal of user interaction are cumbersome and difficult to write as plugins, and attempts to do so often run into the same problem of slow code in the composite manager making the entire system feel sluggish. Lastly, applications requiring a great deal of interaction with other components of the desktop are not well suited to existing as a plugin for a compositor, while they still might find it useful to leverage some of the power of a compositor.
Composite managers can communicate with client applications through X window properties. This is a suitable way to store simple information and flags, but the use quickly breaks down when attempting to traffic large (things such as Pixmaps or a vertex array) amounts of information in a latency sensitive manner.
Composite managers can communicate with client applications through an interprocess communications protocol such as DBUS. This is a suitable way for triggering actions, and a sort of 'scripting' but suffers from similar issues to that of X properties in leveraging the compositor for producing rich client applications.
As a solution this document proposes the creation of a low level protocol used to leverage the composite manager for drawing purposes. Composite managers would expose a struct in shared memory where client applications could queue 'Requests' to specify specific drawing operations and state changes. The proposed protocol is lightweight and generic allowing for implementation in a multitude of composite managers while maintaining ABI compatibility.
Structure of implementation and low level protocol description
The protocol will be built upon Requests structured as following:
An enumeration of named opcodes for each request.
For each opcode a structure representing any parameters or attributes which the client application must communicate to the compositor to complete the specific request
A structure representing an entire request containing the opcode, and a union of attribute structures for all opcodes. Furthermore the request structure will contain a pointer to the next and previous request.
Composite managers will expose a struct in shared memory (henceforth referred to as the transport) (TODO: Best method to establish initial communication?) containing:
A doubly linked list of requests, client applications will add requests to the end of this list and the composite manager will clear requests from the list upon executing them.
A struct representing the last executed request and opcodes for any return information which this request may have generated. The return information will be a pointer to a struct made accessible of a type defined by the particular request.
A lock on the structure to prevent issues with multiple clients accessing the structure. Ideally clients will interact with the protocol through a higher level library.
Composite managers will be required to ensure that at each paint all requests are cleared EXCEPT in the case where a lock exists on the transport (though adding a request will not necessarily trigger a paint) (TODO: Damage attribute on transport/something?). Furthermore, the composite manager will be required to guarantee that actions occur in the order they were added. (TODO: Client attribute on requests to enable sorting of requests when clients don't respect the lock?) Respect of the lock on the transport must be observed at the client level.
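As a concrete feel for the request layout described above, here is how it might be modeled with Python's ctypes (C structs would look analogous); every field and constant name here is illustrative, not part of the RFC, and only two of the per-opcode attribute structs are shown:

```python
import ctypes

# Illustrative attribute structs for two of the opcodes.
class SetDrawingLevelAttrs(ctypes.Structure):
    _fields_ = [("level", ctypes.c_ulong),   # Window level
                ("screen", ctypes.c_int)]

class SetVertexArrayAttrs(ctypes.Structure):
    _fields_ = [("vertices", ctypes.POINTER(ctypes.c_float)),
                ("nvertices", ctypes.c_int)]

class RequestAttrs(ctypes.Union):
    # One member per opcode; only two are shown here.
    _fields_ = [("drawing_level", SetDrawingLevelAttrs),
                ("vertex_array", SetVertexArrayAttrs)]

class Request(ctypes.Structure):
    pass

# _fields_ is assigned after the class statement so the struct can
# point to its own type, giving the doubly linked list described above.
Request._fields_ = [("opcode", ctypes.c_int),
                    ("attrs", RequestAttrs),
                    ("next", ctypes.POINTER(Request)),
                    ("prev", ctypes.POINTER(Request))]

OP_SET_DRAWING_LEVEL = 5  # opcode number from the opcode table

req = Request(opcode=OP_SET_DRAWING_LEVEL)
req.attrs.drawing_level.level = 0
req.attrs.drawing_level.screen = 1
```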
Opcodes
The first version of the CMRD protocol defines 14 opcodes, numbered 0 through 13.
Opcode 0:
Name: CompositeManagerQueryProtocolVersionRequest
Attributes: None
Return attributes: int major; int minor;
Description and implementation notes: Returns the major and minor version of the implemented protocol.
Opcode 1:
Name: CompositeManagerReadyRequest
Attributes: None
Return Attributes: int ready;
Description and implementation notes: Indicates whether the composite manager is ready and able to respond to drawing requests.
Opcode 2:
Name: CompositeManagerScreengrabLockExistsRequest
Attributes: TODO
Return attributes: int exists;
Description and implementation notes: The concept of a 'Screengrab Lock' is used to indicate whether a client is using the composite manager for a fullscreen and screengrabbing effect, where it would not be appropriate for other clients to do the same. An actual X grab may or may not be issued based on attributes.
Opcode 3:
Name: CompositeManagerPushScreengrabLockRequest
Attributes: TODO
Return attributes: TODO
Description and implementation notes: Pushes a screengrab lock.
Opcode 4:
Name: CompositeManagerPopScreengrabLockRequest
Attributes: None
Return attributes: None
Description and implementation notes: Pops the active screengrab lock.
Opcode 5:
Name: CompositeManagerSetDrawingLevelRequest
Attributes: Window level; int screen;
Return attributes: None
Description and implementation notes: Sets the current drawing level, in that further drawing requests will be rendered at the same level as the 'Window level' attribute, or above all windows if 'Bool screen' is true.
Opcode 6:
Name: CompositeManagerSetDamageResponseRequest
Attributes: int respond;
Return attributes: None
Description and implementation notes: Temporarily toggles the composite manager from repainting damaged areas.
Opcode 7:
Name: CompositeManagerSetActiveTextureFromWindowRequest
Attributes: Window window;
Return attributes: None
Description and implementation notes: Creates a texture from the CURRENT drawable of the window and uses it for future retained drawing operations.
Opcode 8:
Name: CompositeManagerSetActiveTextureFromPixmapRequest
Attributes: Pixmap pixmap;
Return attributes: None
Description and implementation notes: BINDS a texture from the passed pixmap and uses it for future retained drawing operations.
Opcode 9:
Name: CompositeManagerSetRenderScreenOffscreenRequest
Attributes: int offscreen
Return attributes: None
Description and implementation notes: Based on the value of offscreen begin rendering the screen to an offscreen framebuffer object or GLXPBuffer (based on availability of FBO), and set the active texture to a texture generated from such rendering.
Opcode 10:
Name: CompositeManagerSetCurrentVertexArrayRequest
Attributes: float * vertices; int nvertices;
Return attributes: None
Description and implementation notes: float * vertices should be a 0 offset array of vertices in the format x, y, z. This is used as the geometry for future drawn objects. It is to be assumed that the vertices are in screen coordinates, with 0, 0 (x, y) being the TOP LEFT of the screen.
Opcode 11:
Name: CompositeManagerSetCurrentTextureArrayRequest
Attributes: float * coords; int ncoords;
Return attributes: None
Description and implementation notes: Same format as CompositeManagerSetCurrentVertexArrayRequest, without the z coordinate. This is used as the texture coordinates for future drawn objects.
Opcode 12:
Name: CompositeManagerDrawRequest
Attributes: None
Return attributes: None
Description and implementation notes: Enable the current retained drawing texture, and render the geometry in the current vertex array with the texture coordinates in the current texture array.
Opcode 13:
Name: CompositeManagerDamageScreenRequest
Attributes: None
Return attributes: None
Description and implementation notes: Force the composite manager to redraw the screen.
Example Usage
Drawing a window thumbnail for Window id at 300, 300 on a quad with width and height 100.
Requests:
CompositeManagerSetDrawingLevelRequest ( Window level = 0, screen = TRUE)
CompositeManagerSetActiveTextureFromWindowRequest ( Window id)
CompositeManagerSetCurrentVertexArrayRequest ( float * vertices = 300, 300, 0, 300, 400, 0, 400, 400, 0, 400, 300, 0 nvertices = 4)
CompositeManagerSetCurrentTextureArrayRequest ( float * coords = 0,0,0,1,1,1,1,0 ncoords = 4)
CompositeManagerDrawRequest
Drawing a thumbnail of what the screen would look like on a different viewport at 300, 300 on a quad with width and height 100.
Requests:
CompositeManagerSetRenderScreenOffscreenRequest ( offscreen = 1)
CompositeManagerDamageScreenRequest
CompositeManagerSetCurrentVertexArrayRequest ( float * vertices = 300, 300, 0, 300, 400, 0, 400, 400, 0, 400, 300, 0 nvertices = 4)
CompositeManagerSetCurrentTextureArrayRequest ( float * coords = 0,0,0,1,1,1,1,0 ncoords = 4)
CompositeManagerDrawRequest
|
I can't figure out why this isn't working. I'm trying to send an email from my school email address with this code I got online. The same code works for sending from my GMail address. Does anyone know what this error means? The error occurs after waiting for about one and a half minutes.
import smtplib
FROMADDR = "FROM_EMAIL"
LOGIN = "USERNAME"
PASSWORD = "PASSWORD"
TOADDRS = ["TO_EMAIL"]
SUBJECT = "Test"
msg = ("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n"
       % (FROMADDR, ", ".join(TOADDRS), SUBJECT))
msg += "some text\r\n"
server = smtplib.SMTP('OUTGOING_SMTP', 465)
server.set_debuglevel(1)
server.ehlo()
server.starttls()
server.login(LOGIN, PASSWORD)
server.sendmail(FROMADDR, TOADDRS, msg)
server.quit()
And here's the error I get:
Traceback (most recent call last):
  File "emailer.py", line 13, in <module>
    server = smtplib.SMTP('OUTGOING_SMTP', 465)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 239, in __init__
    (code, msg) = self.connect(host, port)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 295, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 273, in _get_socket
    return socket.create_connection((port, host), timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 514, in create_connection
    raise error, msg
socket.error: [Errno 60] Operation timed out
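Errno 60 is a plain TCP timeout: nothing answered on that port in the expected way. A frequent cause with this exact code is the port/handshake mismatch: port 465 speaks implicit SSL from the very first byte, while `smtplib.SMTP` followed by `starttls()` is the plain-then-upgrade handshake normally used on port 587, so client and server each sit waiting for the other. A hedged sketch of both options (the host, credentials, and which port your school's server actually uses are assumptions you'd need to check; nothing here connects at import time):

```python
import smtplib

def build_message(fromaddr, toaddrs, subject, body):
    # Same message format as in the question, pulled into a function.
    return ("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s\r\n"
            % (fromaddr, ", ".join(toaddrs), subject, body))

def send_mail(host, login, password, fromaddr, toaddrs, msg):
    # Option 1: implicit SSL, which is what port 465 expects.
    server = smtplib.SMTP_SSL(host, 465)
    # Option 2: plain connection upgraded with STARTTLS, usually port 587:
    #   server = smtplib.SMTP(host, 587)
    #   server.ehlo()
    #   server.starttls()
    server.login(login, password)
    server.sendmail(fromaddr, toaddrs, msg)
    server.quit()
```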
|
by Brian Nickel <http://kerrick.wordpress.com>
Information on how to configure the FastCGI support for the Lighttpd server.
Lighttpd (pronounced “lighty”) is a popular lightweight and easy to configure HTTP server. Adding ASP.NET support through fastcgi-mono-server is very quick and painless and can be done by modifying only three files.
An earlier version of these configuration instructions was tested on the following systems with an earlier version of Mono. These instructions should still work on those systems but have not been tested there:
Before doing anything else, you should read FastCGI’s important information on the main page.
The server is enabled through the FastCGI module. To enable the module, open /etc/lighttpd/modules.conf (or if that file does not exist open /etc/lighttpd/lighttpd.conf) and search for the following block:
##
## FastCGI (mod_fastcgi)
##
#include "conf.d/fastcgi.conf"
If you find it, you need only uncomment the include line. If you don’t find that line, or anything like it, simply add the following line to end of the file:
include "conf.d/fastcgi.conf"
Now that the server is enabled, it takes just a handful of lines to configure it.
Your distribution should have included a file /etc/lighttpd/conf.d/fastcgi.conf in the installation; if not, add it. This is the largest and most important part of the configuration. It consists of two pieces, which will be discussed in detail, and by the time you are finished the file will look something like this:
include "conf.d/mono.conf"
server.modules += ( "mod_fastcgi" )
fastcgi.server = (
"" => ((
"socket" => mono_shared_dir + "fastcgi-mono-server",
"bin-path" => mono_fastcgi_server,
"bin-environment" => (
"PATH" => "/bin:/usr/bin:" + mono_dir + "bin",
"LD_LIBRARY_PATH" => mono_dir + "lib:",
"MONO_SHARED_DIR" => mono_shared_dir,
"MONO_FCGI_LOGLEVELS" => "Standard",
"MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
"MONO_FCGI_ROOT" => mono_fcgi_root,
"MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
),
"max-procs" => 1,
"check-local" => "disable"
))
)
So without further ado…
This file must have the following line in it, otherwise it will not work:
server.modules += ( "mod_fastcgi" )
If the file was included in your distribution, it would be near the very top. If not, make sure you add it. This tells Lighttpd to load the module when it starts up.
The next step is to add a server for the “.aspx” extension. Do a quick search for “fastcgi.server”. If found, it will probably look something like the following:
fastcgi.server = (
".php" => ((
"socket" => "/tmp/php-fastcgi.socket",
"bin-path" => "/usr/local/bin/php",
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "16",
"PHP_FCGI_MAX_REQUESTS" => "10000"
)
))
)
If you have it, you’re going to want to add a new extension to it so it looks like the following:
fastcgi.server = (
".php" => ((
"socket" => "/tmp/php-fastcgi.socket",
"bin-path" => "/usr/local/bin/php",
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "16",
"PHP_FCGI_MAX_REQUESTS" => "10000"
)
)),
"" => ((
# TO BE ADDED
"check-local" => "disable"
))
)
Otherwise, if it doesn’t exist, just add the following block:
fastcgi.server = (
"" => ((
# TO BE ADDED
"check-local" => "disable"
))
)
This is the beginning of a server definition for the root directory. "" looks a little odd, but adding a trailing slash to the directory name dramatically alters how Lighttpd sends the request paths. You will be adding implementation-specific settings where it says “# TO BE ADDED”. The "check-local" line tells Lighttpd to send all requests to the Mono server regardless of whether or not the file exists on disk. This is needed for some features of ASP.NET 2.0.
There are two recommended server implementations for the Mono server. The first has Lighttpd automatically spawn the child server when it starts and communicate over Unix sockets. This has the advantage of being easy to set up, being secure by limiting access to just Lighttpd, and having the performance boost provided by Unix sockets. The second has Lighttpd communicate via TCP sockets with an existing Mono server somewhere on the network. This has the advantage of being able to run the Mono server on an entirely different machine than Lighttpd and all the performance and logistical advantages associated with that. If you’re just setting up a personal server or not trying anything fancy, I would recommend using automatic spawning, and if you’re using a high bandwidth, multimachine setup, I would recommend using TCP and running the server on another system.
Where you previously added “# TO BE ADDED”, replace it with the following:
"socket" => mono_shared_dir + "fastcgi-mono-server",
"bin-path" => mono_fastcgi_server,
"bin-environment" => (
"PATH" => "/bin:/usr/bin:" + mono_dir + "bin",
"LD_LIBRARY_PATH" => mono_dir + "lib:",
"MONO_SHARED_DIR" => mono_shared_dir,
"MONO_FCGI_LOGLEVELS" => "Standard",
"MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
"MONO_FCGI_ROOT" => mono_fcgi_root,
"MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
),
"max-procs" => 1,
That configuration uses several mono_* configuration variables to control how the FastCGI server starts and runs. To set those configuration variables, add the following line to the top of fastcgi.conf:
include "conf.d/mono.conf"
and create conf.d/mono.conf to contain the following:
# Add index.aspx and default.aspx to the list of files to check when a
# directory is requested.
index-file.names += ( "index.aspx", "default.aspx" )
### The directory that contains your Mono installation.
# The "bin" subdir will be added to the PATH and the "lib" subdir will be
# added to the LD_LIBRARY_PATH.
# For a typical system-wide installation on Linux, use:
var.mono_dir = "/usr/"
# For an installation in a user account (lighttpd need read/exec access):
#var.mono_dir = "/home/username/mono-1.2.6/"
### A directory that is writable by the lighttpd process.
# This is where the log file, communication socket, and Mono's .wapi folder
# will be created.
# For a typical system-wide installation on Linux, use:
var.mono_shared_dir = "/tmp/"
# For an installation in a user account (dir must exist and be writable):
#var.mono_shared_dir = "/home/username/lighttpd_scratch/"
### The path to the server to launch to handle FASTCGI requests.
# For ASP.NET 1.1 support use:
var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server"
# For ASP.NET 2.0 support use:
#var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server2"
### The root of your applications
# For apps installed under the lighttpd document root, use:
var.mono_fcgi_root = server.document-root
# For apps installed in a user account, use something like:
#var.mono_fcgi_root = "/home/username/htdocs/"
### Application map
# A comma separated list of virtual directory and real directory
# for all the applications we want to manage with this server. The
# virtual and real dirs. are separated by a colon.
var.mono_fcgi_applications = "/:."
Read the comments in the mono.conf and edit as appropriate for your site. If you are installing a single app directly into the lighttpd document root and Mono is installed as part of your distribution, you shouldn’t need to change anything.
Where you previously added “# TO BE ADDED” in fastcgi.conf, replace it with the following:
"host" => "192.168.0.3",
"port" => 9000,
"docroot" => "/root/on/remote/machine",
"host" specifies the host on which the server is running. You will want to replace it with the actual IP address. "port" specifies the port on which the server is running. For this example, the ASP.NET server could have been started with the following command:
/usr/bin/fastcgi-mono-server /socket=tcp:9000
"docroot" specifies the document root on the remote machine.
Sending all requests to ASP.NET adds some extra overhead which may not be desirable for sending large static files. It additionally prevents PHP (and other scripts) from working. The following advanced topics overcome these obstacles by enclosing the fastcgi.server definition in a Conditional Configuration.
As this prevents requests from being handled by ASP.NET, the requests do not employ ASP.NET’s security features and extra security measures should be applied.
The following example prevents files in the /downloads/ and /images/ directories from being sent to ASP.NET:
server.modules += ( "mod_fastcgi" )
$HTTP["url"] !~ "^/(downloads|images)/" {
fastcgi.server = (
"" => ((
... same as before ...
))
)
}
The following example limits ASP.NET to running on www.example.com and example.com:
server.modules += ( "mod_fastcgi" )
$HTTP["host"] =~ "^(www\.|)example\.com$" {
fastcgi.server = (
"" => ((
... same as before ...
))
)
}
The following example sends .php requests to a PHP FastCGI server and the rest to ASP.NET:
server.modules += ( "mod_fastcgi" )
$HTTP["url"] !~ "\.php$" {
fastcgi.server = (
"" => ((
... same as before ...
))
)
}
fastcgi.server = (
".php" => ((
"socket" => "/tmp/php-fastcgi",
"bin-path" => "/srv/www/cgi-bin/php5",
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "16",
"PHP_FCGI_MAX_REQUESTS" => "10000"
)
))
)
Using extensions in place of paths is NOT recommended. Please consult “[../index.html#info1 Paths vs. Extensions]” on the main page for an in-depth explanation. If you decide to use this configuration, please bear in mind that it is less secure and suffers additional disadvantages when compared to using paths.
To start, change the extension that triggers the mono FastCGI server from “” to “.aspx”. So that your fastcgi.conf file looks like this:
fastcgi.server = ( ... possibly other extensions like ".php" ... ".aspx" => (( ... same as before ... )))
ASP.NET uses many extensions for its many different features. It uses “.ashx” for handlers, “.soap” for SOAP, and you really don’t want anyone downloading your “.dll” files, do you?
The hard way to add a new extension is to copy and paste your server configuration, replacing “.aspx” with “.asmx”, etc. The easy way is to add an extension map, so Lighttpd just treats “.asmx” as “.aspx”. As before, you are going to want to look for “fastcgi.map-extensions”. If found, it will probably look something like the following:
fastcgi.map-extensions = ( ".php3" => ".php" )
If you have it, you’re going to want to add a new extension to it so it looks like the following:
fastcgi.map-extensions = (
".php3" => ".php",
".asmx" => ".aspx",
".ashx" => ".aspx",
".asax" => ".aspx",
".ascx" => ".aspx",
".soap" => ".aspx",
".rem" => ".aspx",
".axd" => ".aspx",
".cs" => ".aspx",
".config" => ".aspx",
".dll" => ".aspx"
)
Otherwise, if it doesn’t exist, just add the following block:
fastcgi.map-extensions = (
".asmx" => ".aspx",
".ashx" => ".aspx",
".asax" => ".aspx",
".ascx" => ".aspx",
".soap" => ".aspx",
".rem" => ".aspx",
".axd" => ".aspx",
".cs" => ".aspx",
".config" => ".aspx",
".dll" => ".aspx"
)
You should now have ASP.NET working with Lighttpd. Enjoy!
|
Does anybody know what the 10 lines of C code mentioned by Prof. Sebastian to implement robot localization using particle filters are? It would be very useful to see such code.
asked
jorgerr
I really doubt these are 10 lines of C code. His algo was about 10 lines. So if you rely on a library to sample a pdf and remove all carriage returns from your code, you may indeed end up with one line... but otherwise, I wouldn't expect an implementation in less than 30-40 lines without losing readability and without using external libraries.
answered
WhitAngl
Perhaps they are very loooooong lines. :-D
answered
Finix
I'll take a guess, though using Pythonish syntax:
num_particles = 10000
particles = [[random(width),random(height),random(360)] for i in range(num_particles)]
# make a bunch of initial particles. The ones stuck in walls will die soon enough.
while( stillMoving ):
# move
for each x, y, direction in particles:
turn, move = move_robot()
distance = move * random.guass(mu= 1.0, sigma = 0.6)
direction = (turn + direction + turn * random.guass(mu = 0, signma = 3.0) % 360
new_x, new_y = x + cos(direction) * distance), y + sin(direction) * distance
if intersect_wall(map, (x,y), (new_x, new_y)):
particle = random.choice(particles)
x, y = new_x, new_y
# move. For each particle pick a different amount of distance and direction skew.
# if it moved through a wall. It's dead. Maybe just copy a random live particle
for distance, angle in scan_with_sensors():
if math.abs(distance - nearest_wall(map, x, y, direction + sensor.angle)) > 1.8:
particle = random.choice(particles)
# sense with range finders and kill particles where distance is off by too much.
OK, so this one is fourteen lines plus comments, or eleven without the "move through a wall" step: those particles will die from sensor problems anyway. The basic idea here was: move each particle by picking a random state transition. If it moves through a wall or disagrees with a range finder by more than 1.8, then that particle is an inconsistent belief, and I can copy another particle and start diverging on the next step.
Given this is one pass without running it, what are the bugs and errors in thinking?
answered
CharlesMerriam
It's probably about ten lines of code, all right, just without the housekeeping.
It's a bit like how pseudocode for the essence of the Bresenham line drawing algorithm (minus the octant selection) can be written on a postage stamp. The Bresenham circle drawing algorithm's only a wee bit bigger than that (It's so cool that it uses no transcendental functions or floating point, yet generates stable, accurate and nice looking circles!).
But the housekeeping code required for any of that on any real platform is huge and unfortunately tedious to write and is platform specific.
answered
Blancmange
I'm guessing he's talking about unit 11-20 -- but even then it's just pseudocode.
Though, from an abstract point of view, considering only the logic itself, minus all the implementation-specific stuff (like state objects, control objects, probability & sampling functions, etc.), it probably could be done in 10 lines of code.
answered
Michael Jensen
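For a concrete feel of how small the core loop is, here is a runnable sketch of the predict-weight-resample cycle for a 1D corridor. Unlike the lecture's setup it measures position directly rather than ranges to walls, and every parameter value is an arbitrary choice of mine, so this illustrates the loop rather than reproducing the professor's actual code:

```python
import math
import random

def particle_filter_1d(start_pos, n=2000, steps=20, move=1.0,
                       motion_noise=0.2, sense_noise=0.5):
    # Particles start spread uniformly over a 100-unit corridor.
    particles = [random.uniform(0, 100) for _ in range(n)]
    pos = start_pos
    for _ in range(steps):
        pos += move                                   # the robot moves
        # 1. Predict: move every particle, with per-particle motion noise.
        particles = [p + move + random.gauss(0, motion_noise) for p in particles]
        # 2. Weight: score each particle against a noisy measurement.
        z = pos + random.gauss(0, sense_noise)
        weights = [math.exp(-0.5 * ((p - z) / sense_noise) ** 2) for p in particles]
        # 3. Resample: draw a new particle set proportionally to the weights.
        particles = random.choices(particles, weights=weights, k=n)
    # The estimate is simply the particle mean.
    return sum(particles) / n, pos
```

After a handful of steps the cloud collapses onto the true position even though it started with no idea where the robot was.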
|
I just moved from Apache prefork to worker and started running mod_wsgi in daemon mode. So far, so good. I haven't experienced max load yet, but the server seems more consistent and we're not seeing random requests take two minutes waiting for a mod_wsgi response. The memory footprint has gone from 3.5G to 1G. This is awesome. We're running on a single VPS with 6G of RAM. There's one Django app running on this server, along with an instance of memcache to which we've allocated 1G of RAM. We have a separate MySQL server.
Our application is bulky and can certainly be optimized. We're using NewRelic to troubleshoot some of the more slow running pages now. I've read a lot on optimizing mod_wsgi/apache but, like everyone else, I'm still left with a few questions.
Our average application page load time is 650-750ms. A lot of our pages are in the 200ms range, but we've got some dogs that take 2-5+ seconds to load. We get around 15-20 requests/second during normal load times and 30-40 requests/second during peak times, which may last 30-60 minutes.
Here's my apache config, running worker mpm.
StartServers        10
MaxClients          400
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild 0
I started out with the defaults (StartServers=2 and MaxClients=150), but our site slowed way down under minimal load. I'm guessing it took a long time to spin up servers as requests came in. We're serving 90% of our media from S3. The other 10% are served through Apache on our https pages, or by someone pointing lazily to our local server. At nominal load, 15 worker processes end up being created, so I'm thinking I should probably just set StartServers=15? With this configuration I'm assuming I have 15 worker processes running (which I can confirm with NewRelic) with 25 threads each (which I don't know how to confirm; guessing 400/15).
My apache/mod_wsgi directives look like this:
<VirtualHost *:80>
    # Some stuff
    WSGIDaemonProcess app1 user=http group=http processes=10 threads=20
    WSGIProcessGroup app1
    WSGIApplicationGroup app1
    WSGIScriptAlias / /path/to/django.wsgi
    WSGIImportScript /path/to/django.wsgi process-group=app1 application-group=app1
    # Some more stuff
</VirtualHost>

<VirtualHost *:443>
    # Some stuff
    WSGIDaemonProcess app1-ssl user=http group=http processes=2 threads=20
    WSGIProcessGroup app1-ssl
    WSGIApplicationGroup app1-ssl
    WSGIScriptAlias / /path/to/django.wsgi
    WSGIImportScript /path/to/django.wsgi process-group=app1-ssl application-group=app1-ssl
    # Some more stuff
</VirtualHost>
Having a different WSGIDaemonProcess/WSGIProcessGroup for the ssl side of my site, well, that just doesn't feel right at all. I'm 100% sure I've mucked something up here. To the greater point though, I've allocated 200+40 threads for mod_wsgi to handle requests from Apache, leaving 160 threads to deal with whatever media needs to be delivered up (through ssl or laziness of not pointing to s3).
So given our application load above, can anyone suggest ways I can improve performance of my site? Am I dealing with the ssl/mod_wsgi directives properly? Where's Graham? ;)
|
What is the name of the method that gets executed every time a member of a class is updated?
For example, __init__ is run when an object is instantiated:
class Foo(db.Model):
id = db.Column(db.Integer, primary_key=True)
description = db.Column(db.String(50))
def __init__(self, description):
self.description = description
I would like to add a method to this class that runs every time I update a Foo object.
After reading up on Python classes here:
I thought that the method I was looking for would look something like the below (but I haven't gotten it working yet):
class Foo(db.Model):
id = db.Column(db.Integer, primary_key=True)
description = db.Column(db.String(50))
def __init__(self, description):
self.description = description
def __call__(self, description):
print 'obj is getting updated!'
self.description = description
Thanks for the help!
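Not a definitive answer, but one generic Python mechanism for running code on every attribute assignment is overriding __setattr__. Whether this interacts cleanly with db.Model's metaclass machinery is something to verify (SQLAlchemy also ships an event system designed for exactly this); a minimal plain-object sketch:

```python
class Foo(object):
    def __init__(self, description):
        self.description = description  # this assignment also goes through __setattr__

    def __setattr__(self, name, value):
        # Called on *every* attribute assignment, including those in __init__.
        print('obj is getting updated: %s' % name)
        object.__setattr__(self, name, value)
```

Note that __call__, which the question tries, is invoked when the instance itself is called like a function, not when an attribute is set.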
|
Is there any reason why printw() would cause a segmentation fault?
Code is fine without it; broken with it. It doesn't seem to be doing anything esoteric, so I'm not sure how to even begin to understand what is wrong here.
Thanks in advance for any advice!
#include <ncurses.h>
...
initscr();
noecho();
cbreak();
...
void draw_court()
{
move(TOP_ROW-1, LEFT_COL+4);
printw("LIVES REMAINING: 3");
int i;
for (i = 0; i < RIGHT_COL; i++)
mvaddch(TOP_ROW, LEFT_COL+i, H_LINE);
for (i = 1; i < BOT_ROW-TOP_ROW; i++)
mvaddch(TOP_ROW+i, LEFT_COL, V_LINE);
for (i = 0; i < RIGHT_COL; i++)
mvaddch(BOT_ROW, LEFT_COL+i, H_LINE);
}
ETA: The stacktrace from gdb:
#0 0xb778a139 in _nc_printf_string () from /lib/libncurses.so.5
#1 0xb7785e04 in vwprintw () from /lib/libncurses.so.5
#2 0xb7785f63 in printw () from /lib/libncurses.so.5
#3 0x08048f23 in draw_court ()
#4 0x080489f4 in set_up ()
#5 0x0804890a in main ()
|
I'm trying to use Pyro4 for automated testing, but I'm confused about some of Pyro4's capabilities. Is there a way to get system information through a Pyro4 object?
My idea is to expose a Pyro object that gets the system information, so that the remote machine can use this object to show its system information. But with my code, the remote machine always shows the server's information. I guess I'm misunderstanding or misusing Pyro4.
Sorry for the stupid question; I'm a Pyro4 newbie. The following is my sample code. The server exposes the Pyro object:
# Server.py
#! /usr/bin/env python
import Pyro4
import os
class Ex(object):
def run(self):
print os.uname()
if __name__ == "__main__":
daemon = Pyro4.Daemon()
ns = Pyro4.locateNS()
uri = daemon.register(Ex())
ns.register("ex", uri)
print uri
daemon.requestLoop()
and the client uses the Pyro object:
# Remote
#! /usr/bin/env python
import Pyro4
if __name__ == "__main__":
uri = raw_input("input the uri> ")
fn = Pyro4.Proxy(uri)
fn.run()
P.S. I know I can get the OS information on the client side, but I want to use the Pyro object to get the information rather than have the client do it itself.
|
DJ Raging-Bull
PCMCIA WiFi card not detected on ThinkPad 600X
Hello,
I picked up an IBM ThinkPad 600X with a Pentium III @ 500 MHz and 446 MB of RAM; the hard drive is about 12 GB and it has a CD drive.
I'd like to pass it on to my mother, who would use it for basic office work. So I added a PCMCIA WiFi card which apparently came from a Freebox originally but should work normally in a laptop.
The thing is, it isn't detected. I tried another PCMCIA card, an Ethernet one this time, which is detected, but that card itself is broken since it doesn't sense an Ethernet cable being plugged in.
It's a Freebox PCMCIA card:
Model : WPCB-152G
IEEE 802.11G Compliant
OFDM at 2.4 GHz Band
The laptop is an IBM ThinkPad 600X as said above, from 2000 I believe. It has no built-in network device; this WiFi card, bought on leboncoin and which should normally work, is all I have to connect it to the internet.
Some help would be welcome.
Desktop: Intel Core i7 2600K - 16 GB DDR3 - GeForce GTX 590 - Asus P8P67 - Sound Blaster X-Fi Titanium 7.1 - SSD 64 GB
Laptop: Lenovo ThinkPad X201 - Intel Core i5 520M - 4 GB DDR3 - Kingstone SSD 60 GB
HTPC: Intel Core i5 4570S - 4 GB DDR3 - Asus H81M - Asus Xonar DX - Kingstone SSD 60 GB - 2 x WD Caviar Green 4 TB
OS: GNU/Linux Ubuntu 14.04 LTS Trusty Tahr & Microsoft Windows 7 Ultimate Edition
Offline
michel_04
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Hello,
Excerpt from the page http://doc.ubuntu-fr.org/wifi_broadcom_bcm43xx
16-bit PCMCIA cards based on a Broadcom chip do not work with the bcm43xx driver (for example the Freebox WPCB-104B and WPCB-152G WiFi cards). For these cards, you have to wait for the finalization of the b43 driver, available from kernel 2.6.24: http://linuxwireless.org/en/users/Drivers/b43
Edit: You can post the info (with the WiFi card connected) requested at the top of the WiFi landing page, thanks.
Cheers
Last modified by michel_04 (19/12/2012 at 13:14)
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Have a look at this page of the documentation, if you haven't already done so...
And darn, beaten to it.
Last modified by Ayral (19/12/2012 at 13:16)
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
If I've understood everything correctly... I need an internet connection to install the driver for the WiFi card, without which I have no connection on this laptop.
Still, there's already some hope it will work, since a driver exists.
Is there a way to grab the file, transfer it via USB stick and then install it on the PC that has no connection?
Edit: I think I've found it: WiFi docs
I saw there was a way to get the drivers directly from the distribution CD, but I can't access it; I did install Lubuntu 12.04 using the CD drive, but once booted, when I insert the CD it isn't detected, so I can't use it as a source... a bit annoying.
Edit 2: well, I think I've finally found it; I should have read more carefully (though it's in English, so I have some excuse). I'll test that solution and come back to say how it went.
Last modified by DJ Raging-Bull (19/12/2012 at 15:21)
Offline
chibbata
Re: PCMCIA WiFi card not detected on ThinkPad 600X
hi
if you're trying to install the Broadcom firmware,
download the package below and copy it into Lubuntu's Documents folder
http://nl.archive.ubuntu.com/ubuntu/poo … 14_all.deb
open a terminal
cd Documents
sudo dpkg -i linux-firmware-nonfree_1.14_all.deb
reboot the PC
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Yes, because by following what's written in the English Ubuntu docs, I get nowhere.
I downloaded your file, put it on a USB stick, and ran it directly from my home folder on the PC in question, but I get no better result.
After rebooting, still no network device... this is getting complicated.
Offline
chibbata
Re: PCMCIA WiFi card not detected on ThinkPad 600X
the b43-fwcutter package is here: http://nl.archive.ubuntu.com/ubuntu/poo … 4_i386.deb
the firmware is here: http://downloads.openwrt.org/sources/wl … 130.20.0.o
copy all of that into Lubuntu's Documents folder
cd Documents
sudo dpkg -i b43-fwcutter_015-14_i386.deb
sudo apt-get remove --purge linux-firmware-nonfree
cd /lib/firmware
sudo b43-fwcutter /home/????/Documents/wl_apsta-3.130.20.0.o
reboot the PC. PS: replace ???? with your username
Last modified by chibbata (19/12/2012 at 17:58)
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Still the same message: no network devices available.
I've already read accounts from other people in my situation; apparently it's quite complicated to get this PCMCIA WiFi card working under (L)Ubuntu.
There isn't much documentation in French for this specific case, and even following what I read on the English side, it's no better.
I'd have to try Windows 98 SE to see whether the card works, since I have no other laptop with a PCMCIA slot.
That said: the card shows a very faint green light where it should be lit, so I suppose it's getting power, but does it actually work... hm.
Offline
chibbata
Re: PCMCIA WiFi card not detected on ThinkPad 600X
http://nl.archive.ubuntu.com/ubuntu/poo … 6_i386.deb
download this package and copy it into Lubuntu's Documents folder
cd Documents
sudo dpkg -i pcmciautils_018-6_i386.deb
post the result of
lspcmcia -v
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
fancesca@Mistral-Notebook:~$ lspcmcia -v
Socket 0 Bridge: [yenta_cardbus] (bus ID: 0000:00:02.0)
Configuration: state: on ready: unknown
Voltage: 3.3V
Vcc: 3.3V
Vpp: 3.3V
Socket 0 Device 0: [-- no driver --] (bus ID: 0.0)
Configuration: state: on
[io 0x0000 flags 0x100]
[io 0x0000 flags 0x100]
[mem 0x00000000 flags 0x200]
[mem 0x00000000 flags 0x200]
[mem 0x00000000 flags 0x200]
[mem 0x00000000 flags 0x200]
Product Name: Broadcom
802.11g PCMCIA
4.5
Identification: manf_id: 0x02d0 card_id: 0x0425
function: 6 (network)
prod_id(1): "Broadcom
" (0x966ba416)
prod_id(2): "802.11g PCMCIA
" (0x645b51ca)
prod_id(3): "4.5
" (0x0f7e2fb4)
prod_id(4): --- (---)
Socket 1 Bridge: [yenta_cardbus] (bus ID: 0000:00:02.1)
Configuration: state: on ready: unknown
So it does look like the card is detected but has no driver.
Last modified by DJ Raging-Bull (20/12/2012 at 10:41)
Offline
chibbata
Re: PCMCIA WiFi card not detected on ThinkPad 600X
hi
sudo rmmod b43
sudo modprobe b43legacy
post the result of
sudo lshw -C network
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
First command: ERROR: Module b43 does not exist in /proc/modules
The other commands give absolutely no output.
Offline
chibbata
Re: PCMCIA WiFi card not detected on ThinkPad 600X
sudo rmmod b43legacy
sudo modprobe b43
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
No output from either one.
I get the feeling I must have installed the driver wrong, or something like that.
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
No offense meant, but these things cost next to nothing (between €10 and €40 for home use at Fnac, and no doubt there are cheaper options).
Here's one at €10 on leboncoin... http://www.leboncoin.fr/informatique/40 … tm?ca=16_s Honestly, this is going to be your second day struggling with it; personally, if it were me, I'd give up and go for a USB key... But maybe you're right to persevere!
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Well, I got this card for €5 on leboncoin, and now that I have it I'd hate to throw it away and buy another one, especially since it's part of a Christmas present.
If this keeps up, I'll certainly miss the date... but I still have 3 or 4 days to manage it.
Edit: and besides, it's not a USB key but a PCMCIA card for a reason: the computer unfortunately has only a single USB 1.0 port, otherwise this would already be settled.
Last modified by DJ Raging-Bull (21/12/2012 at 11:23)
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Indeed, that changes things. I have a mini USB hub I don't use (7 USB ports on my tower), but Lorient is a bit far from the Tarn. Too bad.
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Kind of you to have thought of it! But the card works: it's detected by the system, just not usable for lack of a driver. I suppose I'll end up finding a solution, and at worst I'll sort it out after Christmas, even though I'd have liked to deliver it in working order.
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Well, in that case, happy holidays in the meantime!
I'm currently trying to put Linux on a (very) old Toshiba Satellite. Xubuntu won't do. I'll try Lubuntu; failing that, too bad, I know Toutou Linux works.
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Yeah, Toutou Linux runs well even on the oldest PCs. On mine, Lubuntu runs rather well, but that's because there's 480 MB of RAM; otherwise I imagine it would have been a bit rough.
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
Xubuntu: zero, cross it off.
Lubuntu: same penalty, same reason.
Toutou: it's installing now.
By the way, the machine belonged to my sister, and she handed me 3 PCMCIA cards: a 54 Mbps Bluestork WiFi adapter, a similar TrendNet one, and an Ethernet card that is recognized.
The install isn't as simple as Ubuntu's. We'll see.
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
If you have one that's recognized under Linux, Ethernet or WiFi, and you don't need it, name your price; I could be interested.
Because I'm really struggling here; I'm going to redo everything listed in the English docs, and if it still fails I'll have no other choice.
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
I've given up for the moment, because I can't get GRUB where it needs to be for the machine to boot into Toutou. And tonight I couldn't be bothered to get back to it. As for your request, I'll have a look tomorrow... I've tested the PCMCIA Ethernet card, but not the WiFi ones. I'll see tomorrow.
I had a look here: http://www.leboncoin.fr/annonces/offres … cmcia+wifi
Last modified by Ayral (23/12/2012 at 00:47)
Offline
Ayral
Re: PCMCIA WiFi card not detected on ThinkPad 600X
For now, neither of my 2 WiFi cards works on my Toutou, which I still can't manage to install to disk.
Offline
DJ Raging-Bull
Re: PCMCIA WiFi card not detected on ThinkPad 600X
This isn't looking good!
Offline
|
I'm unable to get an edit token for my MediaWiki site. Using the following code, I should be able to use the simplemediawiki library to log in, then request an edit token, then finally stage an edit. Unfortunately I'm getting an error that the value I pass for the 'action' parameter is unrecognized:
{'error': {'info': "Unrecognized value for parameter 'action': token", 'code': 'unknown_action'}}
Here's my code:
from simplemediawiki import MediaWiki
wiki = MediaWiki('http://domain/api.php')
#the following logs me in
loginData = wiki.call({'action':'login', 'lgname':'myUserName','lgpassword':'myPassword'})
#The following is a resubmission of the login token
personalLoginData = wiki.call({'action':'login', 'lgname':'myUserName','lgpassword':'myPassword','lgtoken': loginData['login']['token']})
#the following is THE TROUBLESOME REQUEST FOR AN EDIT TOKEN
editTokenDict = wiki.call({'action':'tokens','type':'edit'})
#the following is an edit
results = wiki.call({'action':'edit','title':"ArticleTitle",'text':"This is the page",'tokens':editTokenDict['login']['token']})
|
I'd like to get a few opinions on the best way to replace a substring of a string with some other text. Here's an example:
I have a string, a, which could be something like "Hello my name is $name". I also have another string, b, which I want to insert into string a in the place of its substring '$name'.
I assume it would be easiest if the replaceable variable is indicated some way. I used a dollar sign, but it could be a string between curly braces or whatever you feel would work best.
Solution: Here's how I decided to do it:
from string import Template
message = ('You replied to $percentageReplied of your message. ' +
           'You earned $moneyMade.')
template = Template(message)
print template.safe_substitute(
percentageReplied = '15%',
moneyMade = '$20')
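For reference, the difference between substitute and safe_substitute is what happens to placeholders you don't supply a value for: substitute raises KeyError, while safe_substitute leaves them in the text untouched. A quick sketch:

```python
from string import Template

t = Template('Hello $name, you earned $moneyMade.')

# safe_substitute leaves unknown placeholders in place...
print(t.safe_substitute(name='Alice'))
# ...while substitute raises KeyError for them.
try:
    t.substitute(name='Alice')
except KeyError as e:
    print('substitute raised KeyError for %s' % e)
```

This is why safe_substitute is the safer choice when the replacement values may themselves contain dollar signs or when not every placeholder is always filled in.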
|
I'm trying to create a Python server that will serve calls from an outside source through sockets. I've skimmed through the docs and copied the code below; I can connect, but no sent data is shown. What am I doing wrong?
import SocketServer
class MyUDPHandler(SocketServer.BaseRequestHandler):
def handle(self):
self.data = self.rfile.readline().strip()
print "%s wrote:" % self.client_address[0]
print self.data
self.wfile.write(self.data.upper())
if __name__ == "__main__":
HOST, PORT = "localhost", 80
try:
server = SocketServer.UDPServer((HOST, PORT), MyUDPHandler)
print("working")
server.serve_forever()
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind((socket.gethostname(), 80))
serversocket.listen(5)
except:
print("not working")
while True:
(clientsocket, address) = serversocket.accept()
ct = client_thread(clientsocket)
ct.run()
class mysocket:
def __init__(self, sock=None):
if sock is None:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
else:
self.sock = sock
def connect(self, host, port):
self.sock.connect((host, port))
def mysend(self, msg):
totalsent = 0
while totalsent < MSGLEN:
sent = self.sock.send(msg[totalsent:])
if sent == 0:
raise RuntimeError("socket connection broken")
totalsent = totalsent + sent
def myreceive(self):
msg = ''
while len(msg) < MSGLEN:
chunk = self.sock.recv(MSGLEN-len(msg))
if chunk == '':
raise RuntimeError("socket connection broken")
msg = msg + chunk
return msg
And moreover, if this code is proper, how do I use it? I'm just starting the server now with python server.py, which creates an instance of MyUDPHandler, but what next?
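This may not be the only problem, but one concrete issue: with a UDPServer, the handler's self.request is a (data, socket) pair; rfile and wfile only exist on DatagramRequestHandler (or StreamRequestHandler for TCP), so self.rfile.readline() will fail in a BaseRequestHandler subclass. A minimal self-contained sketch of a working UDP echo server (the host, port 0, and the background thread are just for illustration):

```python
try:
    import socketserver            # Python 3
except ImportError:
    import SocketServer as socketserver  # Python 2
import socket
import threading

class MyUDPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is (payload, socket), not a stream.
        data, sock = self.request
        sock.sendto(data.upper(), self.client_address)

# Bind to an ephemeral port (0) and serve in a background thread.
server = socketserver.UDPServer(('127.0.0.1', 0), MyUDPHandler)
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()

# A client just sends a datagram to the server's address and reads the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b'hello', server.server_address)
reply, _ = client.recvfrom(1024)
print(reply)
server.shutdown()
```

Once the server is running, any UDP client pointed at its host and port will do; there is no accept() step as with TCP.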
|
I'm trying to get the number of followers of each follower of a specific account (with the goal of finding the most influential followers). I'm using Tweepy in Python, but I am running into the API rate limits and can only get the follower counts for 5 followers before I am cut off. The account I'm looking at has about 2000 followers. Is there any way to get around this?
my code snippet is
ids=api.followers_ids(account_name)
for id in ids:
more=api.followers_ids(id)
print len(more)
Thanks
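Two workarounds are commonly combined: batching (the Twitter users/lookup endpoint accepts up to 100 user IDs per call, which would cut roughly 2000 followers down to about 20 requests; in Tweepy that is roughly api.lookup_users, though check your version) and sleeping out the rate-limit window when you do get cut off. A library-agnostic sketch of the latter, where RateLimitError is a stand-in for whatever exception your client actually raises:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the exception your Twitter client raises on a rate limit."""

def call_with_backoff(fn, wait=15 * 60, retries=5):
    # Call fn(); on a rate-limit error, sleep out the window and retry.
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            time.sleep(wait)
    raise RuntimeError('still rate-limited after %d attempts' % retries)
```

The 15-minute default matches Twitter's usual rate-limit window, but the right value depends on the endpoint; the rate-limit-status API can tell you exactly when the window resets.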
|
bisk8
Re: [HOW TO] adesklets: desklet configuration
Hello,
I'm starting to despair over the adesklets volume.py desklet.
It's the only one I can't get working; all the others are fine.
When I launch it in test mode with
python ./chemin/du/script/volume.py --nautilus
and then press the t key, it starts, is not displayed on screen, but does show up in the process list.
It's only when I kill it with Ctrl+C that it returns the following error:
Now testing...
============================================================
If you do not see anything (or just an initial flicker
in the top left corner of your screen), try `--help',
and see the FAQ: `info adesklets'.
============================================================
^CTraceback (most recent call last):
File "./.fvwm/aDesklets/marche_pas/volume-0.0.8/volume.py", line 588, in <module>
Events(dirname(__file__)).pause()
File "./.fvwm/aDesklets/marche_pas/volume-0.0.8/volume.py", line 45, in __init__
self.id = adesklets.get_id()
File "/usr/lib/python2.5/site-packages/adesklets/commands.py", line 94, in get_id
return comm.out()
File "/usr/lib/python2.5/site-packages/adesklets/commands_handler.py", line 93, in out
output=self.__comm.out(.01)
File "/usr/lib/python2.5/site-packages/adesklets/communicator.py", line 87, in out
rd, wr, ex = select.select([self.__stdout],[],[],delay)
KeyboardInterrupt
Exception exceptions.IOError: (32, 'Broken pipe') in <bound method Events.__del__ of <__main__.Events instance at 0x90d58ac>> ignored
Does anyone have a lead, because I've been somewhat lost for 2 days now.
Thanks a lot in advance; see you soon.
Offline
fonfonsd
Re: [HOW TO] adesklets: desklet configuration
hello, during the install, when I type
./configure
I get a message saying
fontes@fontes-portable:~/adesklets-0.6.1$ sudo ./configure
configure: error: cannot find sources (src/main.c) in . or ..
fontes@fontes-portable:~/adesklets-0.6.1$
thanks everyone
Offline
labo16
Re: [HOW TO] adesklets: desklet configuration
Why go through ./configure?
adesklets has been available directly from the repositories (via Synaptic) since the move to Hardy
Offline
fonfonsd
Re: [HOW TO] adesklets: desklet configuration
oops, I hadn't seen that
Offline
|
for ( boldParam in ['para1', 'para2', 'para3', 'para4', 'para5'] ) {
    if (/* boldParam exists in params */)
        ilike(boldParam, '%' + params[boldParam] + '%')
}
I would like to write something like above. I'm trying to avoid the following multiple if statements:
if (params.para1)
ilike('para1','%' + params.para1+ '%')
if (params.para2)
ilike('para2','%' +params.para2+ '%')
if (params.para3)
ilike('para3','%' + params.para3+ '%')
|
Hello World,
This year's installment of the GNU Hacker's Meeting is just a month away.
When: Thursday, July 19th until Sunday, July 22nd
Where: Düsseldorf
As in previous years, the fun starts on Thursday with an informal hacking / social evening followed by talks (as well as more hacking) Friday through Sunday.
If you are planning on coming, we request that you register soon by emailing ghm-registration@gnu.org, as we have a limited amount of space.
Note: We are also still accepting presentation proposals.
See you soon!
I was recently hunting down a slightly annoying usability bug in Khweeteur, a Twitter / identi.ca client: Khweeteur can notify the user when there are new status updates, however, it wasn't overlaying the notification window on the application window, like the email client does. I spent some time investigating the problem: the fix is easy, but non-obvious, so I'm recording it here.
A notification window overlays the window whose WM_CLASS property matches the specified desktop entry (and is correctly configured in /etc/hildon-desktop/notification-groups.conf). Khweeteur was doing the following:
import dbus
bus = dbus.SystemBus()
notify = bus.get_object('org.freedesktop.Notifications',
'/org/freedesktop/Notifications')
iface = dbus.Interface(notify, 'org.freedesktop.Notifications')
id = 0
msg = 'New tweets'
count = 1
amount = 1
id = iface.Notify(
'khweeteur',
id,
'khweeteur',
msg,
msg,
['default', 'call'],
{
'category': 'khweeteur-new-tweets',
'desktop-entry': 'khweeteur',
'dbus-callback-default'
: 'net.khertan.khweeteur /net/khertan/khweeteur net.khertan.khweeteur show_now',
'count': count,
'amount': count,
},
-1,
)
This means that the notification will overlay the window whose WM_CLASS property is khweeteur. The next step was to figure out whether Khweeteur's WM_CLASS property was indeed set to khweeteur:
$ xwininfo -root -all | grep Khweeteur
0x3e0000d "Khweeteur: Home": ("__init__.py" "__init__.py") 800x424+0+56 +0+56
^ Window id ^ WM_CLASS (class, instance)
$ xprop -id 0x3e0000d | grep WM_CLASS
WM_CLASS(STRING) = "__init__.py", "__init__.py"
Ouch! It appears that a program's WM_CLASS is set to the name of its "binary". In this case, /usr/bin/khweeteur was just a dispatcher that executes the right command depending on the arguments. When starting the frontend, it was running a Python interpreter. Adjusting the dispatcher to not exec fixed the problem:
$ xwininfo -root -all | grep Khweeteur
0x3e00014 "khweeteur": ("khweeteur" "Khweeteur") 400x192+0+0 +0+0
0x3e0000d "Khweeteur: Home": ("khweeteur" "Khweeteur") 800x424+0+56 +0+56
Profiling Python code is normally as simple as:
import cProfile
cProfile.run('foo()')
On both Debian and Maemo, this results in an import error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/cProfile.py", line 36, in run
result = prof.print_stats(sort)
File "/usr/lib/python2.6/cProfile.py", line 80, in print_stats
import pstats
ImportError: No module named pstats
To my eyes, this looks like I need to install some package. This is indeed the case: the python-profiler package provides the pstats module. Unfortunately, python-profiler is not free. There's a depressing back story involving ancient code and missing rights holders.
If you're on Debian, you can just install the python-profiler package. Alas, the package does not appear to be compiled for Maemo.
Happily, kernprof works around this and is easy to use:
# wget http://packages.python.org/line_profiler/kernprof.py
# python -m kernprof /usr/bin/gpodder
Kernprof saves the statistics in the file program.prof in the current directory (in this case, it saves the data in gpodder.prof).
To analyze the data, you'll need to copy the file to a system that has python-profiler installed. Then run:
# python -m pstats gpodder.prof
Welcome to the profile statistics browser.
% sort time
% stats 10
Tue Nov 1 13:09:54 2011 gpodder.prof
105542 function calls (101494 primitive calls) in 117.449 CPU seconds
Ordered by: internal time
List reduced from 1138 to 10 due to restriction <10>
ncalls tottime percall cumtime percall filename:lineno(function)
1 57.458 57.458 69.012 69.012 {exec_}
1 16.052 16.052 26.417 26.417 /usr/lib/python2.5/site-packages/gpodder/qmlui/__init__.py:405(__init__)
1 8.591 8.591 13.790 13.790 /usr/lib/python2.5/site-packages/gpodder/qmlui/__init__.py:24(<module>)
60 7.041 0.117 7.041 0.117 {method 'send_message_with_reply_and_block' of '_dbus_bindings.Connection' objects}
3 6.357 2.119 7.469 2.490 {method 'reset' of 'PySide.QtCore.QAbstractItemModel' objects}
36 2.636 0.073 2.636 0.073 {method 'execute' of 'sqlite3.Cursor' objects}
1 2.283 2.283 2.284 2.284 {method 'setSource' of 'PySide.QtDeclarative.QDeclarativeView' objects}
1 1.848 1.848 1.848 1.848 /usr/lib/python2.5/site-packages/PySide/private.py:1(<module>)
2 1.789 0.895 1.789 0.895 {posix.listdir}
1 0.765 0.765 4.234 4.234 /usr/lib/python2.5/site-packages/gpodder/__init__.py:20(<module>)
The statistics browser is relatively easy to use (at least for the simple things I've wanted to see so far). Help is available online using its help command.
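On a machine where pstats is available, the same data can also be inspected programmatically instead of through the interactive browser. A small self-contained sketch (foo and the filename are placeholders, not part of the gpodder workflow above):

```python
import cProfile
import pstats

def foo():
    # A stand-in workload to profile.
    return sum(i * i for i in range(10000))

# Collect profile data and dump it to a file, much as kernprof does.
profiler = cProfile.Profile()
profiler.enable()
foo()
profiler.disable()
profiler.dump_stats('foo.prof')

# Load the file back and print the ten most expensive functions.
stats = pstats.Stats('foo.prof')
stats.sort_stats('time').print_stats(10)
```

The same sort keys the interactive browser accepts (time, cumulative, calls, ...) work with sort_stats.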
Khweeteur is a great Twitter and identi.ca client for Maemo. One feature I particularly like is its support for queuing status updates, which is useful when connectivity is poor or non-existent (which, for me, is typically when something tweet-worthy happens). It also supports multiple accounts, e.g., a Twitter account and an identi.ca account.
Khweeteur can automatically download updates and notify you when something happens. Enabling this option causes Khweeteur to periodically perform updates whenever there is an internet connection, whether it is a WiFi connection or via cellular. This is unfortunate for those who, like me, have limited data transfer budgets.
Deciding when to transfer updates is exactly what Woodchuck was designed for, and recently, I added Woodchuck support to Khweeteur. Now, if Woodchuck is found, Khweeteur will rely on it to determine when to schedule updates (of course, you can still manually force an update whenever you like!).
While modifying the code, I also made a few bug fixes and some small enhancements. Two improvements that, I think, are noteworthy are: displaying unread messages in a different color from read messages, and indicating when the last update attempt occurred.
You can install the Woodchuck-enabled version of Khweeteur on your N900 using this installer. You'll also need to install the Woodchuck server to profit from the Woodchuck support. Hopefully, the version in Maemo extras will be updated soon!
Other Woodchuck-enabled software for the N900 include:
If you are interested in adding Woodchuck support to your software, let me know either via email or join #woodchuck on irc.freenode.net.
I'll be at the N9 Hackathon this weekend in Vienna. Sunday morning (October 9th) at 10am, I'll give a presentation about Woodchuck. I'll talk a bit about Woodchuck's motivation and a fair amount about Woodchuck's architecture, as well as what we hope to learn from the user study and how we're planning on using it to evaluate different scheduling algorithms. If you are around, you should come by!
I've finished an initial port of Woodchuck to Harmattan. To get it, you need to manually add the source repository: Harmattan's application manager does not support .install files. Add the following to /etc/apt/sources.list.d/hssl.list:
deb http://hssl.cs.jhu.edu/~neal/woodchuck harmattan harmattan
Then, run apt-get update.
The following packages are available: the Woodchuck server (package: murmeltier), the Python bindings (package: pywoodchuck) and the Glib-based C bindings (libgwoodchuck and libgwoodchuck-dev).
smart-storage-logger, the software for the user behavior study, has not yet been ported: I'm still trying to figure aegis out.
At the recent GNU Hackers Meeting, I gave a talk about Woodchuck. (I'll publish another post when the video is made available.) The talk resulted in a lot of great feedback including a question from Arne Babenhauserheide whether Woodchuck could be used to automatically synchronize git or mercurial repositories.
I hadn't considered using Woodchuck to synchronize version control repositories, but it is a fitting application of Woodchuck: some data is periodically transferred over the network in the background. I immediately saw two major applications in my own life: a means to periodically push changes to a personal backup repository; and automatically fetching change sets so that when I don't have network connectivity, I still have a recent version of a repository that I'm tracking.
I decided to implement Arne's suggestion. It's called VCS Sync. To configure it, you create a file in your home directory called .vcssync. The file is JSON-based with the extension that lines starting with // are accepted as comments. The file has the following shape:
{
"directory1": [ { action1 }, { action2 }, ..., { actionM } ],
"directory2": [ { action1 }, { action2 } ],
...
"directoryN": [ { action1 } ],
}
That is, there is a top-level hash mapping directories to arrays of actions. An action consists of four possible arguments: 'sync' (either 'push' or 'pull'), 'remote' (the remote repository, default: origin), 'refs' (the set of branches, e.g., +master:master, default: 'master') and 'freshness' (how often to perform the action, in hours).
Here's an example configuration file:
// To register changes, run 'vcssync -r'.
{
"~/src/woodchuck": [
// Pull daily.
{"sync": "pull", "remote": "origin", "freshness": 24},
// Backup every tracked branch every few hours.
{"sync": "push", "remote": "backups", "refs": "+*:*", "freshness": 3}
],
"~/src/gpodder": [
// Pull every few days.
{"sync": "pull", "remote": "origin", "freshness": 96}
]
}
VCS Sync automatically figures out the repository format and invokes the right tool (currently only git and mercurial are supported; patches for other VCSes are welcome).
After you install the configuration file, you need to run 'vcssync -r' to inform Woodchuck of any changes to the configuration file.
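Because of the //-comment extension, the file can't be handed straight to a JSON parser; something like the following preprocessing is needed (a sketch, not VCS Sync's actual code):

```python
import json

def load_vcssync(text):
    """Parse a .vcssync-style file: ordinary JSON, except that lines
    whose first non-blank characters are '//' are treated as comments
    and stripped before parsing."""
    lines = [line for line in text.splitlines()
             if not line.lstrip().startswith("//")]
    return json.loads("\n".join(lines))
```

For example, feeding it a small configuration with comment lines yields the plain directory-to-actions mapping described above.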
You can use this on the N900. However, because this is a programmer's tool and you need to edit a file to use it, it is not installable using the Hildon application manager. Instead, you'll need to run 'apt-get install vcssync' from the command line (the package is in the same repository as the Woodchuck server). If you encounter problems, consult $HOME/.vcssync.log.
I also use this script on my laptop, which runs Debian. Building packages for Debian is easy, just check out woodchuck and use dpkg-buildpackage:
git clone http://hssl.cs.jhu.edu/~neal/woodchuck.git
cd woodchuck
dpkg-buildpackage -us -uc -rfakeroot
This (currently) generates eight packages. In addition to vcssync, you'll also need to install murmeltier (my Woodchuck implementation), and pywoodchuck (a Python interface to Woodchuck).
One of the arguments for [Woodchuck](http://hssl.cs.jhu.edu/~neal/woodchuck) is that it can save energy. In this post, I want to examine that claim a bit more quantitatively.
To determine whether or not Woodchuck can save energy, we first need to know approximately how much energy the activities we are interested in consume. To measure this, I charged my N900 until the battery was full, then I started some activity and let it run until the device turned off. Every five minutes, I queried the battery's state (voltage, mAh and whether the device was being charged) and wrote it to an SQLite database. The activities that I measured were: streaming or playing an mp3 file at various encodings, downloading over WiFi at different speeds, having the LCD on, and idling. Some of the results are summarized in the table below. Keep in mind that a full charge has approximately 18 kWs (= 5 Wh).
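The measurement loop can be sketched roughly as follows; the table schema and the read_battery callback are illustrative stand-ins, since the exact script isn't shown here:

```python
import sqlite3
import time

def log_sample(db, voltage_mv, charge_mah, charging):
    """Append one timestamped battery sample to the SQLite log."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS battery "
        "(t REAL, voltage_mv INTEGER, charge_mah INTEGER, charging INTEGER)")
    db.execute("INSERT INTO battery VALUES (?, ?, ?, ?)",
               (time.time(), voltage_mv, charge_mah, charging))
    db.commit()

def run(db, read_battery, interval=300, samples=None):
    """Every `interval` seconds, query the battery state and log it.
    read_battery is a placeholder for however the platform exposes
    (voltage, remaining charge, charging flag)."""
    n = 0
    while samples is None or n < samples:
        log_sample(db, *read_battery())
        n += 1
        if samples is not None and n >= samples:
            break
        time.sleep(interval)
```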
| Data Acquisition | Activity             | Watts | Energy Consumed Relative to Idle |
|------------------|----------------------|-------|----------------------------------|
| 3G               | Play 56 Kb/s stream  | 1.00  | 12.5                             |
| Edge             | Play 56 Kb/s stream  | 0.96  | 12.0                             |
| WiFi             | Play 56 Kb/s stream  | 0.75  | 9.3                              |
| Flash            | Play 56 Kb/s files   | 0.28  | 3.5                              |
| Flash            | Play 128 Kb/s files  | 0.27  | 3.4                              |
| Flash            | Play 320 Kb/s files  | 0.32  | 4.0                              |
| WiFi             | Download at 4.7 Mb/s | 1.23  | 15.4                             |
| WiFi             | Download at 1.0 Mb/s | 0.91  | 11.4                             |
| WiFi             | Download at 256 Kb/s | 0.76  | 9.5                              |
| None             | Idle, LCD on         | 0.27  | 3.4                              |
| None             | Idle                 | 0.08  | 1                                |
The first thing to notice is that streaming over a network connection is expensive: streaming over 3G consumes 20% of the N900's battery capacity per hour. Although it is possible to save a bit of energy by using Edge or WiFi, the improvement is marginal. Playing back audio data saved on flash requires significantly less energy---just 30% as much. In other words, if all you do is use your N900 to listen to audio, listening to audio data saved on flash will allow you to listen to more than 3 times as much audio on a single battery charge than if you were to stream that data.
It is not always possible to ensure that the data is saved on flash. In this case, the best approach is to download the data as fast as possible: although downloading over WiFi at 4.7 Mb/s (the maximum sustainable throughput I observed) draws more power than downloading at, say, 256 Kb/s, the required energy per bit is significantly lower.
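That trade-off can be read straight off the table above: dividing each download's power draw by its throughput gives the energy cost per megabit at each speed.

```python
# Power (watts) and throughput (megabits/s) from the WiFi download rows above.
measurements = {
    "4.7 Mb/s": (1.23, 4.7),
    "1.0 Mb/s": (0.91, 1.0),
    "256 Kb/s": (0.76, 0.256),
}

for label, (watts, mbps) in measurements.items():
    # watts / (megabits per second) = joules per megabit
    joules_per_mbit = watts / mbps
    print("%-9s %.2f J per megabit" % (label, joules_per_mbit))
```

The fastest download is roughly an order of magnitude cheaper per bit than the slowest one, even though its instantaneous power draw is the highest.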
To put these values in perspective, I measured how much energy the system consumes at idle and with the LCD on. I think it is not surprising that having the LCD on consumes significantly more power than not, however, I was surprised that the network uses 3 times as much energy as having the LCD on.
What do these values mean for Woodchuck? Woodchuck tries to schedule downloads to occur when conditions are good. In terms of energy, conditions are best when the device is connected to the mains. I charge my N900 about every two days. Only updating my subscriptions every two days is not often enough: I don't want the news from a day and a half ago; many blogs that I read are updated daily; and, my calendaring information should be synchronized constantly. In this case, fetching the data as fast as possible over WiFi when the signal is strong is the next best approach.
To understand the possible savings, consider the case where 8 hours of audio, about 200 MB of data, are prefetched over WiFi. At 4.7 Mb/s, this requires 420 Ws (2.5% of the battery's capacity). If a user listens to 30 minutes of audio (25 MB) on the commute home, only an additional 480 Ws (2.7% of the battery's capacity) are required. Streaming 30 minutes of audio over 3G requires 1800 Ws, twice the amount of energy needed to prefetch 8 times the data and listen to the same audio. Thus, even with a cache hit rate of 12%, prefetching uses just half of the energy needed to stream.
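The arithmetic behind that comparison is easy to check against the power figures from the table:

```python
# Power figures (watts) from the measurements above.
WIFI_DOWNLOAD_W = 1.23   # downloading over WiFi at 4.7 Mb/s
FLASH_PLAY_W = 0.27      # playing audio saved on flash
STREAM_3G_W = 1.00       # streaming audio over 3G

# Prefetch 8 hours of audio: ~200 MB = 1600 megabits, at 4.7 Mb/s.
prefetch_seconds = 1600 / 4.7
prefetch_ws = WIFI_DOWNLOAD_W * prefetch_seconds   # ~420 Ws

# Listen to 30 minutes of that audio from flash.
listen_ws = FLASH_PLAY_W * 30 * 60                 # ~490 Ws

# Alternative: stream the same 30 minutes over 3G.
stream_ws = STREAM_3G_W * 30 * 60                  # 1800 Ws

# Prefetching 8x the data plus playback still uses about half
# the energy of streaming just the 30 minutes actually heard.
print(prefetch_ws + listen_ws, "Ws vs", stream_ws, "Ws")
```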
As part of some Woodchuck-related work, I've done a fair amount of Python programming on Maemo. Python, being an interpreted language, runs the source code; there is no need to compile it to some binary representation as is the case with C. This is a great convenience when developing for a device such as the N900: there is no need to compile the code and copy the resulting binaries; I just edit the code on the device and run it. The trade-off is that I need to edit the files directly on the device: but, I want my Emacs (qemacs is not enough!), git and the regular GNU tools. It turns out that I was able to get pretty close.
Using Emacs to edit files on the N900 does not necessarily mean running Emacs on the N900: Emacs' tramp mode makes it possible to edit files on another system! I had read about tramp mode in the past, but most systems I use already have Emacs installed, so I never bothered to investigate it further (or at least, it was easier to install Emacs than learn about tramp mode). Using tramp mode to edit a file is embarrassingly easy: you just prefix the login information to the filename that you want to edit. In my case, I add '/user@n900:' to access my home directory on my N900. (To avoid constantly typing in your password, you'll want to add an ssh key to your $HOME/.ssh/authorized_keys file on your device).
Tramp mode is not just for editing: many Emacs functions support tramp. For instance, tab completion knows about tramp, as does dired. Even grep-find is tramp enabled: tramp knows how to run grep and find on the remote machine!
grep-find assumes relatively feature-complete tools. By default, the N900 includes busybox's grep and find, which have rather limited functionality. Happily, Thomas Tanner has packaged many of the GNU tools for Maemo and they are just an apt-get install away. (The packages you need are: grep-gnu, sed-gnu, findutils-gnu, coreutils-gnu, and diffutils-gnu.)
Installing Thomas's packages does not immediately make grep-find work: the packages do not replace the busybox tools; the binaries are installed in /usr/bin/gnu, which is not in the user's default path. To fix this problem, I first installed bash and edited my .bashrc file to read:
PATH=/usr/bin/gnu:$PATH
export PATH
And my .bash_profile to read:
. $HOME/.bashrc
I also changed the user's default shell to bash using chsh. Now when I run grep at the command line, I get GNU grep, not Busybox's.
This is still not enough to get grep-find to work: by default, tramp does not respect the PATH variable on the remote machine. This behavior can be overridden by adding the following to your .emacs file:
(require 'tramp)
(add-to-list 'tramp-remote-path 'tramp-own-remote-path)
Now, Emacs's grep-find function works.
The last piece of the puzzle is working with git repositories. My primary interface to git is via Magit. Unfortunately, Magit v0.7, which is distributed with Debian Squeeze, does not fully support tramp mode. Magit v1.0, however, does and it is available in Debian testing. (Note: if you are a Magit v0.7 user and you customized magit-diff-options, you'll need to change the value from a string to a list, e.g., '(setq magit-diff-options '("--patience"))')
This set up is great and I'm happy. As a final tweak, I tend to use USB networking, because access over WiFi has a fair amount of latency.
The following text is from the introduction of the HOWTO I've written explaining how to modify a program to use Woodchuck. The focus is on the Python interface, but it should be helpful to anyone who wants to modify an application to use Woodchuck. This document, unlike the detailed documentation, should be a bit easier to digest if you are just getting started with Woodchuck. If questions still remain, feel free to email me or ask for help on #woodchuck on irc.freenode.net.
Introduction
Woodchuck is a framework for scheduling the transmission of delay tolerant data, such as RSS feeds, email and software updates. Woodchuck aims to maximize data availability (the probability that the data the user wants is accessible) while minimizing the incurred costs (in particular, data transfer charges and battery energy consumed). By scheduling data transfers when conditions are good, Woodchuck ensures that data subscriptions are up to date while saving battery power, reducing the impact of data caps and hiding spotty network coverage.
At the core of Woodchuck is a daemon. This centralized service reduces redundant work and facilitates coordination of shared resources. Redundant work is reduced because only a single entity needs to monitor network connectivity and system activity. Further, because the daemon starts applications when they should perform a transfer, applications do not need to wait in the background to perform automatic updates thereby freeing system resources. With respect to the coordination of shared resources: the cellular data transmission budget and the space allocated for prefetched data need to be allocated among the various programs.
Applications need to be modified to benefit from Woodchuck. Woodchuck needs to know about the streams that the user has subscribed to and the objects which they contain as well as related information such as an object's publication time. Woodchuck also needs to be able to trigger data transfers. Finally, Woodchuck's scheduler benefits from knowing when the user accesses objects. In my experience, the changes required are relatively non-invasive and not difficult. This largely depends, however, on the structure of the application.
...
I designed Woodchuck's API to be easy to use. A major goal was to allow applications to progressively add support for Woodchuck: it should be possible to add minimal Woodchuck support and gain some benefit of the services that Woodchuck offers; more complete support results in higher-quality service.
To support Woodchuck, an application needs to do three things:
register streams and objects;
process upcalls: update a stream, transfer an object, and, optionally, delete an object's files; and,
send feedback: report stream updates, object downloads and object use.
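Structurally, those three responsibilities look something like the following. This is not the real PyWoodchuck API; the class and method names here are invented for illustration, and the actual calls are in the PyWoodchuck documentation:

```python
# Illustrative sketch only -- names are NOT the real PyWoodchuck API.
class WoodchuckClient(object):
    def __init__(self):
        self.streams = {}

    # 1. Registration: describe subscribed streams and their objects,
    #    including metadata such as publication time.
    def register_stream(self, stream_id, human_name):
        self.streams[stream_id] = {"name": human_name, "objects": {}}

    def register_object(self, stream_id, object_id, publication_time=None):
        self.streams[stream_id]["objects"][object_id] = {
            "publication_time": publication_time,
        }

    # 2. Upcalls: the daemon calls back into the application when
    #    conditions are good; the application does the actual work.
    def stream_update_cb(self, stream_id):
        raise NotImplementedError("application updates the stream here")

    def object_transfer_cb(self, stream_id, object_id):
        raise NotImplementedError("application fetches the object here")

    # 3. Feedback: tell the scheduler what happened and what was used.
    def report_update(self, stream_id, succeeded):
        self.streams[stream_id]["last_update_ok"] = succeeded

    def report_use(self, stream_id, object_id):
        self.streams[stream_id]["objects"][object_id]["used"] = True
```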
The rest of this document is written as a tutorial that assumes that you are using PyWoodchuck, the Python interface to Woodchuck. If you are using libgwoodchuck, a C interface, or the low-level DBus interface, this document is still a good starting point for understanding what your application needs to do.
|
I have two large figures that I'd like to put on an extra page, meaning there should be no text on that page, only the figures.
Bla bla.
\begin{figure}[t] ... \end{figure}
\begin{figure}[b] ... \end{figure}
Lorem ipsum.
I'd like that to come out as:
Bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla
bla bla bla bla bla bla bla bla bla bla bla.
Lorem ipsum bla bla bla bla bla bla bla bla bla bla bla bla
bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla
-- pagebreak
Figure 1 (at the top)
Figure 2 (at the bottom)
-- pagebreak
bla bla bla bla bla bla bla bla.
I haven't found anything on Google - inserting pagebreaks manually may work but won't make the text continue (the "Lorem Ipsum" part would be after the figures, even when there is some space left on the page before).
Any idea how this can be done?
|
Part of me bristles when I hear someone say “Hypermedia API.” I worry it’ll become the sort of phrase, like “semantic web,” that means different things to different people, and ends up covering such a breadth of ideas that it’s impossible to argue for or against without specifying which flavor you’re addressing.
Nonetheless, when I see DHH arguing against Hypermedia APIs, I worry that we’re in serious “die, heretic scum” territory. I’m no expert, but the difference between REST and Hypermedia really doesn’t seem that large, especially in a universe where SOAP is a thing. Moreover, Rails deserves a lot of credit for demonstrating that web APIs could work within HTTP rather than try to reinvent it. Out of the box, Rails checks three out of four of Steve Klabnik’s boxes, and all we’re arguing over is that last one.
Anyway, what prompted this was a post by Adam Keys, my former Gowalla colleague. I agree with most of what he’s saying here. My gut reaction to Hypermedia APIs is this:
Roughly 90% of it is sensible stuff that I’ve already seen in the wild and which is demonstrably a Good Idea. The remaining 10% is the stuff that (at this early stage) seems non-intuitive, or overkill, or YAGNI, or whatever the word is for a thing that you think is awesome but which your users won’t give a damn about.
In fact, that last thing is my chiefest concern. The final 10% seems to require nontrivial re-education on the part of consumers. I don’t mean they’d have to be brainwashed; I just mean that some of the stated benefits only come to pass if the consumers buy in, and in my experience an API consumer wants to do the simplest thing that could possibly work. I believe this is what Adam is getting at in his follow-up post.
The Gowalla API
Adam’s opinions on hypermedia are informed, in part, by his time at Gowalla, and so are mine. Before I convince myself it’s a bad idea, let’s take a retrospective look at the Gowalla API (which, by the way, was started in 2008–2009) and see how it measures up against a hypermedia rubric.
Things we did right
Addressability
URLs identified resources. A spot had the same URL whether you were requesting an HTML representation in a web browser or a JSON representation from curl. If a form could create a resource by POSTing some multipart form data to a URL, odds are a client could create the same resource by POSTing some JSON to that same URL.
This was less like a knowing philosophical decision and more like a thing that Rails just does by default. Until rather late in the game, if you were using the Gowalla API, your requests were hitting the same controllers and actions as web users’ requests. (Eventually we decided to move API stuff into dedicated controllers for maintainability’s sake, but that tilting-at-a-windmill saga will have to be told on another day.)
Content negotiation
As implied above, the API was driven by content negotiation. If you asked for HTML, you got a browser representation; if you asked for JSON, you got a pure data representation. (If you asked for XML, we pretended we didn’t hear you.)
HATEOAS
We endeavored to practice what Steve calls HATEOAS: Hypertext As The Engine Of Application State. To over-simplify: a response should publicize the URLs of any resources that are reasonably related to it.
(By the way: I do not come down on one side or the other here. If there’s a natural workflow to your API, as there was for Gowalla’s, it obviously makes sense to publicize related resources rather than force a user to memorize your URL-making conventions. On the other hand, odds are high that your API consumers will make assumptions about your URL schemes anyway. So I’m not sure what HATEOAS gets you in the real world, except for the ability to say “I told you so.” Which, admittedly, is underrated.)
But back to Gowalla. If you were authenticated with Gowalla and requested the resource for your own user profile, this is a snapshot of what you saw:
// GET /users/savetheclocktower
{
"stamps_count": 14,
"stamps_url": "/users/savetheclocktower/stamps",
"pins_count": 11,
"pins_url": "/users/savetheclocktower/pins",
"top_spots_url": "/users/savetheclocktower/top_spots",
"friends_count": 44,
"friends_url": "/users/savetheclocktower/friends",
// ...
"visited_spots_urls_url": "/users/savetheclocktower/visited_spots_urls"
}
Nearly every meaningful kind of resource is discoverable by starting at this response and navigating through the various URLs. (Of course, not every API use case would start with loading a specific user’s profile. For instance, those that were interested mainly in the place database would probably start with the result set of spots from a geographical search.) Though the URL conventions were simple enough that a client could build URLs on their own, we tried to make it so that building URLs was harder than just using the URLs that we’d given you in the response. This gave us a theoretical freedom to change URLs in the future (not that we’d ever want to do so, we thought).
This style — in which everything ending in url points to another resource — is just one version of what HAL or Collection+JSON are trying to formalize. It’s a pattern that worked very well for us. It made our API very “surfable,” and though I doubt we had machine discovery in mind when we were doing it, it did mean that the API explorer I built was a lot of fun to use — anything that looked like a URL was hyperlinked, and clicking on it would load that new resource in the explorer. We updated the URL hash, too, so the back button would return you to the previous resource.
Crucial to all of this is that the API used a resource’s URL as its unique identifier, rather than a raw ID. This is the part that Rails didn’t give you out of the box, so credit to Scott for designing it this way.
What we could’ve done better
API Versioning
Rather than version our API with MIME types, we used a separate X-Gowalla-API-Version header, defaulting to the most recent version if a JSON-requesting client omitted this header.
I don’t necessarily think that our approach was wrong — only that if we’d made people opt into a particular MIME-type, rather than just the generic application/json, and if the MIME-type was tied to a particular API version, we likely would’ve had fewer incidents where changes we made inadvertently broke third-party tools.
Discoverability
When I said that every resource was discoverable, I was lying. Nearly all GET requests were discoverable. Anything that required a POST (and any GET that involved query parameters) wasn’t documented within the API itself, so you’d have to dig into the API documentation to figure out exactly how they worked. If we were doing it over again now, it’s possible that we’d toss in query templates or something like it, but I suspect we wouldn’t have bothered.
Sub-resources
We never really figured out the best way to do sub-resources. Consider a checkin, which referenced one user and one spot:
// GET /checkins/131072
{
"created_at": "2010-12-21T01:03:15-06:00",
"message": "I am eating here under protest.",
"url": "/checkins/131072",
"user": {
"first_name": "Andrew",
"last_name": "Dupont",
"url": "/users/savetheclocktower",
"image_url": "http://some.crazy.cdn.url/jklyjksljkrewus.jpg",
"hometown": "Austin, TX",
"photos_url": "/users/savetheclocktower/photos"
},
"spot": {
"name": "Red Lobster",
"url": "/spots/15555",
"image_url": "http://some.crazy.cdn.url/jjkpwopresas.jpg",
"lat": 30.448674,
"lng": -90.105324,
"address": {
"street_address": "123 Fake St.",
"locality": "New Orleans",
"region": "LA",
"iso3166": "US"
}
}
// ...
}
Now, we don’t want to dump the whole user resource into our response, but neither do we want to force someone to follow a URL to learn anything about the person who checked in. So we chose the middle ground: include a “concise” representation of the resource. In this case, the properties we show from the sub-resources are the things we’d need to know if we were rendering the checkin in a list; with this response, I can render the sentence “Andrew checked in at Red Lobster,” along with a user avatar and a spot icon, without having to make any other requests.
This eventually got crazy, though, because a sub-resource could plausibly have a half-dozen representations of varying lengths, each of which could be justified from context. For instance, if you requested a user’s checkins, you’d get a list of these:
// GET /users/savetheclocktower/checkins
{
"checkins": [
{
"created_at": "2010-12-21T01:03:15-06:00",
"message": "I am eating here under protest.",
"url": "/checkins/131072",
"user": {
"first_name": "Andrew",
"last_name": "Dupont",
"url": "/users/savetheclocktower"
},
"spot": {
"name": "Red Lobster",
"url": "/spots/15555",
"image_url": "http://some.crazy.cdn.url/jjkpwopresas.jpg",
"lat": 30.448674,
"lng": -90.105324,
"address": {
"street_address": "123 Fake St.",
"locality": "New Orleans",
"region": "LA",
"iso3166": "US"
}
}
},
{
"created_at": "2010-12-21T01:02:44-06:00",
"message": "I am in need of fuel for my car.",
"url": "/checkins/130808",
"user": {
"first_name": "Andrew",
"last_name": "Dupont",
"url": "/users/savetheclocktower"
},
"spot": {
"name": "Chevron",
"url": "/spots/91142",
"image_url": "http://some.crazy.cdn.url/oahkhjs.jpg",
"lat": 30.444994,
"lng": -90.105416,
"address": {
"street_address": "919 Fake St.",
"locality": "New Orleans",
"region": "LA",
"iso3166": "US"
}
}
},
// ...
]
}
Here, the spot resource is using the same representation that it did for an individual checkin, but the user resource is much more sparse. Why? Because (a) in this response, all the checkins are guaranteed to be from the same user, and the redundancy bothered the hell out of me; (b) chances are you followed this URL from the response for /users/savetheclocktower and thus already have the full representation of this user.
If you were to ask for a single spot’s checkins, the situation would be reversed — the user representation would be the same as for a single checkin, but the spot representation would be as minimal as possible.
We managed this complexity as best we could. First we added a to_public_json method on models — so named because it wasn’t trying to be exhaustive like to_json; it merely wanted to expose properties that would be relevant for a public API. It optionally took a symbol argument that would specify a named representation, much like DateTime#to_formatted_s lets you choose between date formats. When even that got too complicated, Brad Fults wrote an awesome thing called Boxer that centralized all this logic in a place that was neither a controller nor a model.
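Gowalla's implementation was Ruby, but the named-representation idea is easy to sketch in any language. Here is a minimal Python version; the field lists and function name are invented for illustration:

```python
# Each model declares its representations once; serializers pick one by name,
# analogous to to_public_json taking a symbol argument.
REPRESENTATIONS = {
    "full": ("first_name", "last_name", "url", "image_url",
             "hometown", "photos_url"),
    "sparse": ("first_name", "last_name", "url"),
}

def public_repr(record, style="full"):
    """Return only the fields the named representation exposes,
    skipping any fields the record happens not to have."""
    return {field: record[field] for field in REPRESENTATIONS[style]
            if field in record}

user = {"first_name": "Andrew", "last_name": "Dupont",
        "url": "/users/savetheclocktower", "hometown": "Austin, TX"}
```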
I’d always wished for YAML-style anchors and references in JSON, but I didn’t want to do anything crazy with our JSON responses that put an extra burden on API consumers. Still, if I were to do it over again, I’d probably do something like this:
// (hypothetically)
// GET /users/savetheclocktower/checkins
{
"includes": {
"users": {
"savetheclocktower": {
"first_name": "Andrew",
"last_name": "Dupont",
"url": "/users/savetheclocktower",
"image_url": "http://some.crazy.cdn.url/jklyjksljkrewus.jpg",
"hometown": "Austin, TX",
"photos_url": "/users/savetheclocktower/photos"
}
},
"spots": {
"15555": {
"name": "Red Lobster",
"url": "/spots/15555",
"image_url": "http://some.crazy.cdn.url/jjkpwopresas.jpg",
"lat": 30.448674,
"lng": -90.105324,
"address": {
"street_address": "123 Fake St.",
"locality": "New Orleans",
"region": "LA",
"iso3166": "US"
}
},
"91142": {
"name": "Chevron",
"url": "/spots/91142",
"image_url": "http://some.crazy.cdn.url/oahkhjs.jpg",
"lat": 30.444994,
"lng": -90.105416,
"address": {
"street_address": "919 Fake St.",
"locality": "New Orleans",
"region": "LA",
"iso3166": "US"
}
}
}
},
"checkins": [
{
"created_at": "2010-12-21T01:03:15-06:00",
"message": "I am eating here under protest.",
"url": "/checkins/131072",
"user": { "include": "/users/savetheclocktower" },
"spot": { "include": "/spots/15555" }
},
{
"created_at": "2010-12-21T01:02:44-06:00",
"message": "I am in need of fuel for my car.",
"url": "/checkins/130808",
"user": { "include": "/users/savetheclocktower" },
"spot": { "include": "/spots/91142" }
},
// ...
]
}
All sub-resources would get put into a hierarchical repository at the root of the response, and the structure of that repository would mirror the URL structure, so that when you saw an object with an “include” property, you could try to look it up locally and then fall back to another HTTP request if necessary. This is probably overkill, but dammit, if I’m going to introduce an extra-language convention into JSON, I’m going to give it some style.
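Under that hypothetical scheme, the client-side lookup is a few lines: walk the "includes" repository along the URL path, and fall back to a real HTTP request on a miss (stubbed out here):

```python
def resolve(ref, includes, fetch=None):
    """Resolve {"include": "/users/foo"} against the response-local
    repository; the URL path mirrors the repository's structure."""
    path = ref["include"].strip("/").split("/")  # e.g. ["users", "foo"]
    node = includes
    for part in path:
        if not isinstance(node, dict) or part not in node:
            # Cache miss: fall back to fetching the resource over HTTP.
            return fetch(ref["include"]) if fetch else None
        node = node[part]
    return node

response = {
    "includes": {"users": {"savetheclocktower": {"first_name": "Andrew"}}},
    "checkins": [{"user": {"include": "/users/savetheclocktower"}}],
}
user = resolve(response["checkins"][0]["user"], response["includes"])
```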
The Verdict
On reflection, I think we did pretty well, especially considering that these decisions were made incrementally over the course of two years. I can think of only one instance when the API design painted us into a corner, and that’s the story I’ll save for next time.
|
The following is a Bresenham-like algorithm that draws 4-connected lines. The code is in Python but I suppose it can be understood easily even if you don't know the language.
def line(x0, y0, x1, y1, color):
dx = abs(x1 - x0) # distance to travel in X
dy = abs(y1 - y0) # distance to travel in Y
if x0 < x1:
ix = 1 # x will increase at each step
else:
ix = -1 # x will decrease at each step
if y0 < y1:
iy = 1 # y will increase at each step
else:
iy = -1 # y will decrease at each step
e = 0 # Current error
for i in range(dx + dy):
draw_pixel(x0, y0, color)
e1 = e + dy
e2 = e - dx
if abs(e1) < abs(e2):
# Error will be smaller moving on X
x0 += ix
e = e1
else:
# Error will be smaller moving on Y
y0 += iy
e = e2
The idea is that to draw a line you should increment X and Y with a ratio that matches DX/DY of the theoretic line. To do this I start with an error variable e initialized to 0 (we're on the line) and at each step I check if the error is lower if I only increment X or if I only increment Y (Bresenham check is to choose between changing only X or both X and Y).
The naive version of this check would be adding 1/dy or 1/dx, but multiplying all increments by dx*dy allows using only integer values; that improves both speed and accuracy, and also avoids the need for special cases for dx==0 or dy==0, thus simplifying the logic. Of course, since we're looking for a proportional error, using a scaled increment doesn't affect the result.
Whatever the line quadrant, the two possible increments will always affect the error with opposite signs... so my arbitrary choice was to increment the error for an X step and decrement it for a Y step.
The ix and iy variables are the real directions needed for the line (either +1 or -1) depending on whether the initial coordinates are lower or higher than the final coordinates.
The number of pixels to draw in a 4-connected line is obviously dx+dy, so I just loop that many times to draw the line instead of checking whether I've reached the end point. Note that this algorithm draws all pixels except the last one; if you also want that final pixel, add an extra draw_pixel call after the end of the loop.
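Those properties are easy to verify by running the algorithm with a draw_pixel that just records coordinates; the function is restated here with draw_pixel as a parameter so the output can be captured:

```python
def line(x0, y0, x1, y1, draw_pixel):
    # Same algorithm as above, with draw_pixel passed in.
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)
    ix = 1 if x0 < x1 else -1
    iy = 1 if y0 < y1 else -1
    e = 0
    for _ in range(dx + dy):
        draw_pixel(x0, y0)
        e1 = e + dy
        e2 = e - dx
        if abs(e1) < abs(e2):
            x0 += ix
            e = e1
        else:
            y0 += iy
            e = e2

# Record the pixels of the line from (0, 0) to (3, 2):
pixels = []
line(0, 0, 3, 2, lambda x, y: pixels.append((x, y)))
# dx + dy = 5 pixels; the end point (3, 2) itself is not drawn.
```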
An example result of the above implementation can be seen in the following picture
|
The recently rebuilt Favcol presented a surprisingly interesting challenge: how to analyze the images.
Image processing at scale is effectively a solved problem. The algorithms are well optimized, and it's trivial to scale horizontally by adding more hardware to your image processing cluster. Sites like Flickr and Picasa have optimized the process enough to resize images on the fly if needed while serving thousands of requests a second.
Scaling image processing down is a different story. I think everyone I've ever talked to about processing images on a small site has a horror story. The story of Favcol is fairly typical.
The first version of Favcol was a Rails application, and used RMagick to manipulate images in memory. It was a disaster. Memory leaks caused processes to grow until the box crashed hard. Reaping processes helped a little, but the server I was running it on was supposed to be doing other things at the same time, and couldn't really wait 60 seconds to recover.
The next version shelled out to the gm GraphicsMagick command to manipulate files, then read the results back from disk. In theory this should have been slower and more expensive; in practice it was significantly more efficient. If there's one piece of advice I can give to anyone thinking about doing any kind of handling of large images, it's to do the hard work in a separate process unless you really know what you're doing. And if you think you know what you're doing, do the hard work in a separate process anyway, because you're probably wrong.
Even so, reading a few hundred huge files every five minutes was still killing my server. One day Favcol crashed the machine again. The cron job got disabled. The intent was to fix it quickly, but kids and work and life got in the way, and that never happened.
Eventually I started looking at alternatives. Upgrading my virtual server was more expensive than I'm willing to pay to host something like Favcol. I could make the bills cheaper by bringing up an EC2 instance to batch process images for half an hour each day, but part of the fun of Favcol is seeing your photo appear within a few minutes. I looked for online services for image processing, and found many different ways to resize or post-process images and no services to give me an average color. I even briefly considered doing the work on visitors' computers with <canvas>.
Google App Engine kept bubbling up as a potential solution - it's free if you stay below a quota and has a built in image manipulation API. The only problem was that App Engine offers no easy way to get at the raw pixel data for an image that has been processed, which is the only data I needed.
Eventually I realized there is a workaround.
The trick is that PNG files are easy to read, even from high-level scripting languages like Python. So you can use the App Engine Image Manipulation Service to convert an image into a smallish PNG, then read the raw data using a pure-Python library like pypng:
import urllib2
import png  # pypng, pure Python
from google.appengine.api import images

# go grab the image
result = urllib2.urlopen(url)
# resize to a 20px thumbnail
img = images.Image(result.read())
img.resize(width=20, height=20)
thumbnail = img.execute_transforms(output_encoding=images.PNG)
# read the thumbnail pixel data
r = png.Reader(bytes=thumbnail)
png_w, png_h, pixels, info = r.asDirect()
It's a hack, but it works well enough to process a few thousand images throughout the day without costing me any money.
The full code I use is up on github. It only does basic RGB mean average at the moment, but it should be easy to add other metrics like dominant colour.
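The mean-average step itself is simple once you have the asDirect() output. This is a hypothetical sketch (mean_rgb is my name, not the one in the repo), assuming the rows pypng yields are flat sequences of interleaved channel samples:

```python
def mean_rgb(width, height, pixels, info):
    """Average the RGB channels of a pypng asDirect() result,
    ignoring the alpha channel if the image has one."""
    planes = info['planes']    # samples per pixel: 3 for RGB, 4 for RGBA
    totals = [0, 0, 0]
    for row in pixels:         # each row is a flat sequence of samples
        for i in range(0, len(row), planes):
            for c in range(3):
                totals[c] += row[i + c]
    count = width * height
    return tuple(t // count for t in totals)
```

A dominant-colour metric would replace the running sums with a histogram over quantized colours, but the loop structure stays the same.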
I hope it's useful.
■
|
I have homework that I am stuck on. I have gone as far as I can, but I am stuck; can someone point me in the right direction? I am getting stuck on making each data row a new object. Normally I would think I could just iterate over the rows, but that will only return the last row.
Question:
Modify the classFactory.py source code so that the DataRow class returned by the build_row function has another method:
retrieve(self, curs, condition=None)
self is (as usual) the instance whose method is being called, curs is a database cursor on an existing database connection, and condition (if present) is a string of condition(s) which must be true of all received rows.
The retrieve method should be a generator, yielding successive rows of the result set until it is completely exhausted. Each row should be a new object of type DataRow.
This is what I have------ the test:
import unittest
from classFactory import build_row
class DBTest(unittest.TestCase):
def setUp(self):
C = build_row("user", "id name email")
self.c = C([1, "Steve Holden", "steve@holdenweb.com"])
def test_attributes(self):
self.assertEqual(self.c.id, 1)
self.assertEqual(self.c.name, "Steve Holden")
self.assertEqual(self.c.email, "steve@holdenweb.com")
def test_repr(self):
self.assertEqual(repr(self.c),
"user_record(1, 'Steve Holden', 'steve@holdenweb.com')")
if __name__ == "__main__":
unittest.main()
The script I am testing:
def build_row(table, cols):
"""Build a class that creates instances of specific rows"""
class DataRow:
"""Generic data row class, specialized by surrounding function"""
def __init__(self, data):
"""Uses data and column names to inject attributes"""
assert len(data)==len(self.cols)
for colname, dat in zip(self.cols, data):
setattr(self, colname, dat)
def __repr__(self):
return "{0}_record({1})".format(self.table, ", ".join("{0!r}".format(getattr(self, c)) for c in self.cols))
DataRow.table = table
DataRow.cols = cols.split()
return DataRow
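Not a full answer, but as a direction: retrieve can be a generator that runs a SELECT and yields a new DataRow per result row, using type(self) to build fresh instances of the generated class. A sketch under my own assumptions (the SQL construction and the type(self) call are mine, not from the course materials):

```python
def build_row(table, cols):
    """build_row from the question, with the requested retrieve generator added."""
    class DataRow:
        """Generic data row class, specialized by surrounding function."""
        def __init__(self, data):
            assert len(data) == len(self.cols)
            for colname, dat in zip(self.cols, data):
                setattr(self, colname, dat)
        def __repr__(self):
            return "{0}_record({1})".format(
                self.table,
                ", ".join("{0!r}".format(getattr(self, c)) for c in self.cols))
        def retrieve(self, curs, condition=None):
            # Build a simple SELECT (assumed shape; adapt to the course DB)
            sql = "SELECT {0} FROM {1}".format(", ".join(self.cols), self.table)
            if condition:
                sql += " WHERE {0}".format(condition)
            curs.execute(sql)
            for row in curs.fetchall():
                yield type(self)(row)   # each result row becomes a NEW DataRow
    DataRow.table = table
    DataRow.cols = cols.split()
    return DataRow
```

The key point for the "only returns the last row" problem: yield inside the loop hands back one new object per row instead of overwriting a single variable.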
|
Good day, good evening.
After installing skype by downloading this version:
I tried to launch it, but a problem came up...
Unable to launch "skype"
Failed to execute child process "skype" (No such file or directory)
Thanks for your help!
Sorry for this overly "hasty" request for help; after searching, I found a post with the help I needed!
For those having a problem with 64-bit Fedora.
Thanks elegouhinec.
Last edited by DiabineFR (31/10/2011 16:12:59)
It is written in black and white in the documentation, though: http://doc.fedora-fr.org/wiki/Webcams-I … en_64_bits
Fedora 19: 1 Dell XPS M1330, 1 custom desktop and 1 Dell Latitude 6430u
I have exactly the same problem, "No such file or directory". I followed your link, but it is dead...
After following the procedure mentioned above, I still get this error message:
skype
skype: error while loading shared libraries: libQtDBus.so.4: cannot open shared object file: No such file or directory
Thanks for your help!
Regards,
Last edited by badseed (14/01/2012 14:51:48)
"Be nice to people on your way up because you may meet them on your way down."
Jimmy Durante 1893-1980
yum provides */libQtDBus.so.4
1:qt-4.7.2-8.fc15.i686 : Qt toolkit
Repo : fedora
Matched from:
Filename : /usr/lib/libQtDBus.so.4
1:qt-4.7.4-7.fc15.i686 : Qt toolkit
Repo : updates
Matched from:
Filename : /usr/lib/libQtDBus.so.4
1:qt-4.7.4-7.fc15.i686 : Qt toolkit
Repo : installed
Matched from:
Filename : /usr/lib/libQtDBus.so.4
It's not because it's hard that we don't dare;
it's because we don't dare that it's hard!
@nouvo09
Thanks. What exactly should I install, please? I mean, what is the package name, if I do need to install something?
Thank you
"Be nice to people on your way up because you may meet them on your way down."
Jimmy Durante 1893-1980
Since I suspect you are on 64-bit, the command is:
yum install 1:qt-4.7.4-7.fc15.i686
Sorry for dragging this out... Yes, I am indeed on 64-bit.
yum provides returns:
yum provides */libQtDBus.so.4
Loaded plugins: langpacks, presto, refresh-packagekit
1:qt-4.8.0-0.23.rc1.fc16.i686 : Qt toolkit
Repo : fedora
Matched from:
Filename : /usr/lib/libQtDBus.so.4
1:qt-4.8.0-0.23.rc1.fc16.x86_64 : Qt toolkit
Repo : fedora
Matched from:
Filename : /usr/lib64/libQtDBus.so.4
1:qt-4.8.0-5.fc16.i686 : Qt toolkit
Repo : updates
Matched from:
Filename : /usr/lib/libQtDBus.so.4
1:qt-4.8.0-5.fc16.x86_64 : Qt toolkit
Repo : updates
Matched from:
Filename : /usr/lib64/libQtDBus.so.4
1:qt-4.8.0-5.fc16.i686 : Qt toolkit
Repo : @updates
Matched from:
Filename : /usr/lib/libQtDBus.so.4
1:qt-4.8.0-5.fc16.x86_64 : Qt toolkit
Repo : @updates
Matched from:
Filename : /usr/lib64/libQtDBus.so.4
Among these choices, only version 4.8.0-5 seems willing to install (the others are apparently already installed). Once it is installed, I try launching Skype and the same error message as before appears.
Last edited by badseed (14/01/2012 18:48:51)
"Be nice to people on your way up because you may meet them on your way down."
Jimmy Durante 1893-1980
Is the qt.i686 package even installed?
"…she so powerfully excited desire that I became quite incredulous as to her virtue."
About Fœdora, in La Peau de Chagrin (Balzac)
And does it work?
And the errors, do we have to beg for them?
It is still the same error message:
skype
skype: error while loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory
"Be nice to people on your way up because you may meet them on your way down."
Jimmy Durante 1893-1980
Hello! Hi!! Happy new year everyone! :)
I use skype, and I put together a little personal memo doc that I can share here, hoping it will help... On my F16 x64 it works; that said, I have a small problem with crackling sound when I talk, but that seems to come from pulseaudio/alsa/gnome etc. ... so many settings everywhere, and the new GNOME version is pretty but not very practical for configuration, I find (some system-config- tools are impossible to find except on XFCE, or by launching them from a shell...). Anyway...
1 – Download the Skype .rpm directly from the official site
2 – Run these commands as root in a terminal:
# yum -y install libXv.i686 libXScrnSaver.i686 qt.i686 qt-x11.i686 pulseaudio-libs.i686 pulseaudio-libs-glib2.i686 alsa-plugins-pulseaudio.i686
3 – Install the Skype .rpm ( rpm -ivh "the-skype-package.rpm" ).
I think this will help... Otherwise, use the command # yum provides libQtGui.so.4 (or whatever file is missing) to identify the mystery RPM...
Note that skype comes as 64-bit on Ubuntu... Debian... and not on Fedora... strange...
See you soon :)
"^.^"
It is still the same error message:
skype
skype: error while loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory
Use "yum provides" to find the package providing this library, and so on, as above... We are not going to drag this thread out for 10 pages, one per library Skype happens to be missing...
Pff, sorry, I had not noticed that it was asking for yet another library!
My mistake. I will do as you say until it works.
Thanks for your help
Last edited by badseed (14/01/2012 22:41:44)
"Be nice to people on your way up because you may meet them on your way down."
Jimmy Durante 1893-1980
Note that skype comes as 64-bit on Ubuntu... Debian... and not on Fedora... strange...
No. The "64-bit" package for Debian and Ubuntu contains a 32-bit build of Skype; such a package only exists to ease installation on those distributions, which until very recently could not handle installing 32-bit packages on a 64-bit system.
On Fedora, installing 32-bit RPMs, along with their dependencies, is not a problem.
« …elle excitait si puissamment le désir, que je devins alors très incrédule sur sa vertu. »
À propos de Fœdora, dans la Peau de Chagrin (Balzac)
badseed wrote:
It is still the same error message:
skype
skype: error while loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory
Use "yum provides" to find the package providing this library, and so on, as above... We are not going to drag this thread out for 10 pages, one per library Skype happens to be missing...
Hello:
Between ldd and yum provides, are they equivalent?
Thanks
But why does BatMan wear his underpants over his trousers???
Fedora 15 x86_64 Gnome 3 dual-booting with Vista Premium (32-bit)
HP Pavilion dv6-6820ef laptop
|
01:24 am - How to publish PGP keys in DNS
LJ Preface
I recently wrestled with something, learned quite a lot, and came up with a document that I'm really rather proud of, one that gathers knowledge that isn't all in one place anywhere else. Along the way I've written some software that I'm releasing, which makes all of what I've learned a lot easier, and may help make the world a little more secure. I'd like to share it here.
This is going to be a technical post. For that I apologize. The target of this post is anyone who has a GPG key that they'd like to expose to a wider audience, and who controls DNS for any of the email domains they publish. Anyone that I host DNS or mail for is also welcome to do this, if you use PGP, as part of the goal of writing this is to encourage adoption and use of these methods.
The complete guide to publishing PGP keys in DNS
Introduction
Publishing PGP keys is a pain. There are many disjoint keyservers, in three or four separate networks, which may or may not share information with each other. Some are corporate, some are private. And it's a crapshoot as to whose key is going to be on which, or worse, which will have the latest copy of a person's key.
For a long time, GPG has had a way to publish keys in DNS, but it hasn't been well documented. This document hopes to change that.
After reading this, you should:
Know the three ways to publish a key
Have at least a couple tools to do so
Have learned a bit more about DNS
The target audience for this post is a technical one. It's expected you understand what DNS is, and what an RFC and a resource record is.
There are three ways to publish a PGP key in DNS. Most modern versions of GPG can retrieve keys from all three, although this isn't enabled by default. There are no compile-time options needed to enable it, and it's simple to turn on. Of the three key-publishing methods, there are two that you can't use at the same time, and there are advantages and disadvantages to each.
Advantages to DNS publishing of your keys
It's universal. Your DNS is your own, and you don't have to worry about which network of vastly-disconnected keyservers is caching your key.
Using DNS does not stop you from publishing via other means.
If you run an organization, you can easily publish all your employee-keys via this method, and in the same step, define a signing-policy, such that a person need only assign trust to your organization's "keysigning key" (or the CEO's key, or the CTO's), without the trouble of running a keyserver.
DNSSEC can be used as an additional trust-path vector.
You do not have to be searching DNS for keys in order to publish. On the same note, you do not have to be publishing in this manner to search for others there.
Disadvantages to DNS publishing
If you don't control your own DNS (or have a good relationship with your DNS admin), this isn't going to be as easy or even possible. Ideally, you want to be running BIND.
With two of the three methods listed here, you're going to need to be able to put a CERT record into your DNS. Most web-enabled DNS tools probably will not give you this ability. The third uses TXT records, which SPF has made fairly universal in web interfaces. However, it's also the least standards-defined of the three.
Using at least some of these methods, it's not always a "set it and forget it" procedure. You may need to periodically re-export your key and re-publish it, especially if you gain new signatures.
Using some of these methods, you're going to be putting some pretty large, pretty unwieldy lines in your DNS zones.
Not everyone will easily be able to retrieve them, but again, you can still publish other ways.
Using some of these methods, DNS is just a means to an end: you still need to publish your key elsewhere, like a webpage, and the DNS records just point at it.
Initial verification of most of these seems to imply that only DSA keys are supported, although I welcome feedback. It seems the community is trying to get RSA keys to make a comeback: they're the only type supported by the GPG 2.0 card, and they are now the default keytype, though there was a while when they weren't. Since writing this document, I've discovered that "new" RSA keys work, but ancient RSA keys with no subkeys tend to misbehave.
Turning on key-fetching via DNS
Inside your GPG "options" file, find the "auto-key-locate" line, and add "cert" and/or "pka" to the options.
auto-key-locate cert pka (as well as other methods, like keyserver URLs)
Don't be surprised if a lot of people don't use this method.
Note that you can also turn on two options during signature verification. They are specified in a "verify-options" clause in your config file, or on the command line, and they are (right from the GPG manpage):
pka-lookups
Enable PKA lookups to verify sender addresses. Note that PKA is based on DNS, and so enabling this option may disclose information on when and what signatures are verified or to whom data is encrypted. This is similar to the "web bug" described for the auto-key-retrieve feature.
And:
pka-trust-increase
Raise the trust in a signature to full if the signature passes PKA validation. This option is only meaningful if pka-lookups is set.
You can also use the same options on the command line (as you'll see in this document).
Types of PGP Key Records
DNS PKA Records
Relevant RFCs: None that I can find.
Other Docs: The GPG source and mailing lists.
Advantages
It's a TXT record. Easy to put in a zonefile with most management software.
No special tools required to generate, just three simple pieces of data.
Since it uses a special subzone, you can manage the _pka namespace in a separate zonefile.
GPG has an option, when verifying a signature, to look up these records (--verify-options pka-lookups), so it's doubly useful, both from a distribution and a verification point.
Disadvantages
As with IPGP certs, you're at the mercy of the URL. This doesn't put your key in DNS, just the location of it, and the fingerprint. Some clients may not be able to support https or http 1.1.
Not RFC standard.
Howto
Figure out which key you want to export:
%gpg --list-keys danm@prime.gushi.org
Warning: using insecure memory!
pub 1024D/624BB249 2000-10-02 <-- I'm going to use this one.
uid Daniel P. Mahoney <danm@prime.gushi.org>
uid Daniel Mahoney (Secondary Email) <gushi@gushi.org>
sub 2048g/DE20C529 2000-10-02
pub 1024R/309C17C5 1997-05-08
uid Daniel P. Mahoney <danm@prime.gushi.org>
Export the key to a file (I use keyid.pub.asc, but it can be anything)
%gpg --export --armor 624BB249 > 624BB249.pub.asc
Warning: using insecure memory!
%
Get the fingerprint for your key:
%gpg --list-keys --fingerprint 624BB249
gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
pub 1024D/624BB249 2000-10-02
Key fingerprint = C206 3054 5492 95F3 3490 37FF FBBE 5A30 624B B249 <-- That bit is your fingerprint.
uid Daniel P. Mahoney <danm@prime.gushi.org>
uid Daniel Mahoney (Secondary Email) <gushi@gushi.org>
sub 2048g/DE20C529 2000-10-02
Copy the file somewhere, like your webspace. It need not live on the same server. It needs to be accessible via the URL you create in the next step.
%cp 624BB249.pub.asc public_html/danm.pubkey.txt
Make up your text record. The format is:
danm._pka.prime.gushi.org. TXT
"v=pka1;fpr=C2063054549295F3349037FFFBBE5A30624BB249;uri=http://prime.gushi.org/danm.pubkey.txt"
We'll take this in several parts. The record label is simply the email address with "._pka." replacing the "@".
danm@prime.gushi.org becomes danm._pka.prime.gushi.org. Don't forget the trailing dot, if you're using the fully qualified name. I recommend sticking with fully-qualified, for simplicity.
The body of the record is also simple. The v portion is just a version. There's only one version as far as I can tell, 'pka1'. The fpr is the fingerprint, with all whitespace stripped, and in uppercase. The uri is the location a key can be retrieved from. All the "names" are lowercase, separated by semicolons.
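Both transformations are mechanical enough to script. Here is an illustrative Python helper (the function name is mine, not part of any standard tooling):

```python
def pka_record(email, fingerprint, uri):
    """Build the DNS label and TXT body for a v=pka1 record,
    following the format described above."""
    local, domain = email.split("@", 1)
    label = "{0}._pka.{1}.".format(local, domain)   # trailing dot: fully qualified
    fpr = "".join(fingerprint.split()).upper()      # strip whitespace, uppercase
    body = "v=pka1;fpr={0};uri={1}".format(fpr, uri)
    return label, body
```

Feeding it the example key's data reproduces the record shown above exactly.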
Publish the above record in your DNS. Bump your serial number and reload your nameserver. If you're using DNSSEC, re-sign your zone.
Testing
Most of the tests we're going to do for these are essentially the same activity. See if our DNS server is handing out an answer, and then see if GPG can retrieve it.
A simple dig:
%dig +short danm._pka.prime.gushi.org. TXT
"v=pka1\;fpr=C2063054549295F3349037FFFBBE5A30624BB249\;uri=http://prime.gushi.org/danm.pubkey.txt"
(The backslashes before the semicolons are normal. Other than that, it makes sense and matches what I put in.)
Test it with GPG. Rather than messing around with, and adding-from and deleting from live keyrings, you can do:
%echo "foo" | gpg --no-default-keyring --keyring /tmp/gpg-$$ --encrypt --armor --auto-key-locate pka -r you@you.com
(where you@you.com is the address of your primary key.)
The /tmp/gpg-$$ creates a random file named after your PID. What you should see, and what I see, is something like this:
gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
gpg: keyring `/tmp/gpg-39996' created
gpg: requesting key 624BB249 from http server prime.gushi.org
gpg: key 624BB249: public key "Daniel P. Mahoney <danm@prime.gushi.org>" imported
gpg: public key of ultimately trusted key CF45887D not found
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Total number processed: 1
gpg: imported: 1
gpg: automatically retrieved `danm@prime.gushi.org' via PKA
gpg: DE20C529: There is no assurance this key belongs to the named user
pub 2048g/DE20C529 2000-10-02 Daniel P. Mahoney <danm@prime.gushi.org>
Primary key fingerprint: C206 3054 5492 95F3 3490 37FF FBBE 5A30 624B B249
Subkey fingerprint: CE40 B786 81E2 5CB9 F7D3 1318 9488 EB58 DE20 C529
It is NOT certain that the key belongs to the person named
in the user ID. If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) y
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.10 (FreeBSD)
hQIOA5SI61jeIMUpEAf/UotgWP8VQC9VTY36HaZeXO1CTFk90x0qlPrAhJk9YaoA
2eHNKZSoHKqaLjzTbaWnWHnNZu0IllIS+qrAwNeIAhswfzDoc8Q9+/4sGSR3LmxA
8SEwrJIvLmGVbqJEtnH8TTHIEao/lpL/d+ul4nLfbXRn0NW+MsaCAi8UsjbLlJeV
n4p0GQlpDoZCE55DTwMzfWMT84YVwuXTesuN+i7sSyJn2hT1rXuK1BCVcsgTcKdy
QhIo3EfKBlfFp74yiU7QCmlAujD6U6a93mmxezPIHVx/WGXgPExVRGgEzfT/tUcI
IQ2xMDUv4BF05hgm04GPGCbBY431j4UkdWWI6bvMLwgA2i01NmflH/6Z8+ss6J1M
e3RWnR7TPl5lDkXFBtLGAzO+HrsC5A32SbkTw+WsljCQLifJ2EalfoJ1QGY4Sp3v
H2YunwZLVPTc+D2JnrXfqNmi5zYZio8by3c8L0CgWdMwZ7PPxZpTOLN77/MIjBkJ
EBb8Z6SZCgzTIhN5z56ZgWFvmSKf1vKkeUcrgxMs+DnA+XqBMJ9w520JwoTLjJza
syrlYVhd+ktY21DYB9OJ5MZx2HMAtkUDRAzW1zoLcehk1kdZNzhpjU5hqSjT8/GN
trKFeqkmKemrq2GvMNyJyrEOB8e7KgbmXa95YKH0Wh2D4SWpXukegyCspmY4tDE+
uckaFSao+48g8D6vs1irGSxBRjyhD/jPDblrgpo=
=NbgW
-----END PGP MESSAGE-----
%
The "insecure memory" warning is a silly warning that the only way to turn off is to run GPG setuid root.
You can see in the output that the key comes from PKA.
The "it is NOT certain" warning has nothing to do with the fact that it came from DNS. You will get that warning every
time you use that key (or any gpg key) until you have edited it and assigned ownertrust to it, or until the key is
signed with a trusted signature, either from your personal web of trust, or from a signing service like the pgp.com
directory.
Ask other people to run it for you and send you the resulting blob. You should be able to decrypt it with your private key.
PGP CERT Records
Also known as: The "big" CERT record.
Relevant RFCs: RFC 2538, RFC 4398, specifically sections 2.1 and 3.3
Advantages
DNS is all you need. You don't have to host the key elsewhere. As a DNS nerd, this strikes me as very cool.
Surprisingly easy to verify with dig, if you have a base64 converter handy (openssl includes one).
Disadvantages
These records can get big. Really big. Especially if you have photo IDs on your keys. You can play with export-options to shrink the record somewhat. Big DNS packets may require EDNS or DNS-over-TCP, which not everyone supports, though support is becoming more widespread as a result of DNSSEC awareness.
Requires the make-dns-cert tool, which isn't built by default.
Requires you to have some control over your actual zonefile. Most control panels won't cut it.
Make-dns-cert currently generates a very ugly record for this.
How to
As before, the first step is to figure out which key we want.
%gpg --list-keys danm@prime.gushi.org
Warning: using insecure memory!
pub 1024D/624BB249 2000-10-02 <-- I'm going to use this one.
uid Daniel P. Mahoney <danm@prime.gushi.org>
uid Daniel Mahoney (Secondary Email) <gushi@gushi.org>
sub 2048g/DE20C529 2000-10-02
pub 1024R/309C17C5 1997-05-08
uid Daniel P. Mahoney <danm@prime.gushi.org>
We export the key, but this time it needs to be binary, so we leave off --armor.
%gpg --export 624BB249 > 624BB249.pub.bin
Warning: using insecure memory!
%
We run make-dns-cert on it. make-dns-cert comes with no manual or docs, but running it with -h gives you all the clues you need.
make-dns-cert
-f fingerprint
-u URL
-k key file
-n DNS name
So, then, make-dns-cert -n danm.prime.gushi.org. -k 624BB249.pub.bin
%make-dns-cert -n danm.prime.gushi.org. -k 624BB249.pub.bin
danm.prime.gushi.org. TYPE37 \# 1298 0003 0000 00 9901A20439D8DAF1110400F770EC6AA006076334BEC6DB6FBB237DC194BC0AB8
302C8953F04C28FC2085235D4F10EFA027234FBD63D142CCADD5213AD2B79A22C89ED9B4138370D8220D0F987F993A5364A4A7AC3D42F3765C384
71DDD0FF3372E4AE6F7BEE1E18EF464A0BEB5BBE860A08238891455EBE7CB53D567E981F78ADBD263206B0493ADCB74DD00A0FF0E9A1CD245415E
CEF59435162AFCE4CDD14BC70400EA38FF501256E773DEA299404854D99F4EDB2757AA911A9C77C68AB8D6622E517A556C43D21F0523C568F016C
D0DB89EF435F0D53B4E07434213F899E6578955DC2C147931E7B6901C9FD8A02705417D69A879B3CC196D2AC2EAEF311192EE89ABAF5A60942167
B4625735FCBDFB5DE0E3AC1236A53FA4D7CDD7D75F5DE85AF50400867D9546B28B79AF10541053CF4AB06A6171BFD21458BFD12AF1AE2B2401CAD
8851661F8AF6602F80EDAC99C79616BE1F910F4156242003779C68D7A079A8B18F89DD293E1B247E7420471300A4A0730AA61DE281CCC211FC405
A0A8A79877999FF9042AD892AB927DA371E8883BBB370AB7A97841408C3486BB18598CF2559BB42844616E69656C20502E204D61686F6E6579203
C64616E6D407072696D652E67757368692E6F72673E884E04101102000E050239D8DAF1040B030102021901000A0910FBBE5A30624BB249FA2E00
9B057503ED498695AE5ED73CA1B98EBAEE13F717E500A0921E0D92724459100266FEBBC29E911C8B0F530BB43244616E69656C204D61686F6E657
920285365636F6E6461727920456D61696C29203C67757368694067757368692E6F72673E8860041311020020050245D49FD7021B23060B090807
030204150208030416020301021E01021780000A0910FBBE5A30624BB249158400A082C8AF43DA8B85F740D6B1A6E9FF0B4490520B8C00A08F77D
21FBF86C842963E8090DC0646D1DD7F95C9B9020D0439D8DAF4100800F64257B7087F081772A2BAD6A942F305E8F95311394FB6F16EB94B3820DA
01A756A314E98F4055F3D007C6CB43A994ADF74C648649F80C83BD65E917D4A1D350F8F5595FDC76524F3D3D8DDBCE99E1579259CDFDB8AE744FC
5FC76BC83C5473061CE7CC966FF15F9BBFD915EC701AAD35B9E8DA0A5723AD41AF0BF4600582BE5F488FD584E49DBCD20B49DE49107366B336C38
0D451D0F7C88B31C7C5B2D8EF6F3C923C043F0A55B188D8EBB558CB85D38D334FD7C175743A31D186CDE33212CB52AFF3CE1B1294018118D7C84A
70A72D686C40319C807297ACA950CD9969FABD00A509B0246D3083D66A45D419F9C7CBD894B221926BAABA25EC355E9320B3B00020207FF5E1A3C
C5DA00E1E94EC8EF6C7FE9B49D944C71D8BBC817DD8E64A7344B9E48392E0B833B3B1DB7E6D5A38BE2826DEF0060F78C6417871EAF1CFBCBC47D2
7E93718D975E0A3A36D868C021D6B771740CE2918307D69D614BBF0632DC31932EA31397A7F3B04618C9A76C2F38265C7037E303EDD8AEF03D069
208E3FE9C4EA77D83E6311ED36C013D58C54E914B263A459E22D463A0288510C4752B99C163EEA0A55686979691AB0D9F9AA0C06C834446D7A723
EC534D819301382621ACF8930C74E9FD28C8797718AEC2C30CF601E24194B799234104A3D6239657B1D4AD545BDAA637F61541435CB51B4D138FB
F55E1A9FD2EED860E4459D6795B6FCCA23155A8846041811020006050239D8DAF4000A0910FBBE5A30624BB249415A009E37BCFDC64E76CBF6A86
82B85EA161BD1DFB793DF00A0C471BC7B9723535CD855D8FF1EB93F01E251B698
%
The program prints that all on one line.
Immediately, we notice a few things.
The record type isn't "CERT"; it's "TYPE37". This confused me for a while until I discovered RFC 3597. Basically, it's a way that a DNS server can handle a resource record it doesn't know about, by giving it some special fields like the "\#", as well as a length (which is the 1298 you see there).
The rest of the record is on one line. I wrapped it for the purposes of brevity. If I were using this in a zonefile, I would need to be careful that I wrapped it on a byte-boundary (every two characters is a byte). If I miss the boundary, named will refuse to load it, dnssec-signzone won't touch it, etc.
So the thing is ugly and you don't want to touch it. The easiest way to work with it is to drop all that into a file:
%make-dns-cert -n danm.prime.gushi.org. -k 624BB249.pub.bin > 624BB249.big.cert
And then either read it into your editor, or tack it on like this:
%cat 624BB249.big.cert >> your.zonefile
Be sure to make a backup first. Either way, you never have to copy/paste the raw hex and worry about newlines being inserted where you don't want them.
Before you reload your zone, you might want to use named-checkzone on it first:
prime# named-checkzone gushi.org gushi.org.hosts
zone gushi.org/IN: loaded serial 2009102909
OK
prime#
Voice of experience: you may want to dial the TTL (which controls how long servers will cache your data) way down on the record above. It's not hard; just put a number (in seconds) before the TYPE37, with a space, i.e.:
danm.prime.gushi.org. 30 TYPE37
This way if it all goes terribly wrong, or you need to make changes, it won't be cached for very long.
If it looks okay, bump your serial number and reload.
Testing
As above, you can dig, but you won't be able to easily read the results:
prime# dig +short danm.prime.gushi.org CERT
;; Truncated, retrying in TCP mode.
PGP 0 0 mQGiBDnY2vERBAD3cOxqoAYHYzS+xttvuyN9wZS8CrgwLIlT8Ewo/CCF I11PEO+gJyNPvWPRQsyt1SE60reaIsie2bQTg3DYIg0PmH+ZOlNkpKes
PULzdlw4Rx3dD/M3Lkrm977h4Y70ZKC+tbvoYKCCOIkUVevny1PVZ+mB 94rb0mMgawSTrct03QCg/w6aHNJFQV7O9ZQ1Fir85M3RS8cEAOo4/1AS
Vudz3qKZQEhU2Z9O2ydXqpEanHfGirjWYi5RelVsQ9IfBSPFaPAWzQ24 nvQ18NU7TgdDQhP4meZXiVXcLBR5Mee2kByf2KAnBUF9aah5s8wZbSrC
6u8xEZLuiauvWmCUIWe0Ylc1/L37XeDjrBI2pT+k183X119d6Fr1BACG fZVGsot5rxBUEFPPSrBqYXG/0hRYv9Eq8a4rJAHK2IUWYfivZgL4DtrJ
nHlha+H5EPQVYkIAN3nGjXoHmosY+J3Sk+GyR+dCBHEwCkoHMKph3igc zCEfxAWgqKeYd5mf+QQq2JKrkn2jceiIO7s3CrepeEFAjDSGuxhZjPJV
m7QoRGFuaWVsIFAuIE1haG9uZXkgPGRhbm1AcHJpbWUuZ3VzaGkub3Jn PohOBBARAgAOBQI52NrxBAsDAQICGQEACgkQ+75aMGJLskn6LgCbBXUD
7UmGla5e1zyhuY667hP3F+UAoJIeDZJyRFkQAmb+u8KekRyLD1MLtDJE YW5pZWwgTWFob25leSAoU2Vjb25kYXJ5IEVtYWlsKSA8Z3VzaGlAZ3Vz
aGkub3JnPohgBBMRAgAgBQJF1J/XAhsjBgsJCAcDAgQVAggDBBYCAwEC HgECF4AACgkQ+75aMGJLskkVhACggsivQ9qLhfdA1rGm6f8LRJBSC4wA
oI930h+/hshClj6AkNwGRtHdf5XJuQINBDnY2vQQCAD2Qle3CH8IF3Ki utapQvMF6PlTETlPtvFuuUs4INoBp1ajFOmPQFXz0AfGy0OplK33TGSG
SfgMg71l6RfUodNQ+PVZX9x2Uk89PY3bzpnhV5JZzf24rnRPxfx2vIPF RzBhznzJZv8V+bv9kV7HAarTW56NoKVyOtQa8L9GAFgr5fSI/VhOSdvN
ILSd5JEHNmszbDgNRR0PfIizHHxbLY7288kjwEPwpVsYjY67VYy4XTjT NP18F1dDox0YbN4zISy1Kv884bEpQBgRjXyEpwpy1obEAxnIByl6ypUM
2Zafq9AKUJsCRtMIPWakXUGfnHy9iUsiGSa6q6Jew1XpMgs7AAICB/9e GjzF2gDh6U7I72x/6bSdlExx2LvIF92OZKc0S55IOS4Lgzs7Hbfm1aOL
4oJt7wBg94xkF4cerxz7y8R9J+k3GNl14KOjbYaMAh1rdxdAzikYMH1p 1hS78GMtwxky6jE5en87BGGMmnbC84JlxwN+MD7diu8D0Gkgjj/pxOp3
2D5jEe02wBPVjFTpFLJjpFniLUY6AohRDEdSuZwWPuoKVWhpeWkasNn5 qgwGyDREbXpyPsU02BkwE4JiGs+JMMdOn9KMh5dxiuwsMM9gHiQZS3mS
NBBKPWI5ZXsdStVFvapjf2FUFDXLUbTROPv1Xhqf0u7YYORFnWeVtvzK IxVaiEYEGBECAAYFAjnY2vQACgkQ+75aMGJLsklBWgCeN7z9xk52y/ao
aCuF6hYb0d+3k98AoMRxvHuXI1Nc2FXY/x65PwHiUbaY
It's still ugly, but it's not AS ugly, because it's base64, which at least includes spaces and is easier to search for a pattern. Base64 can also be easily wrapped on any boundary, which is nice.
You can run your existing exported key through a base64 converter, like the one built into the openssl binary, if you want to compare:
%cat 624BB249.pub.bin | openssl enc -base64
mQGiBDnY2vERBAD3cOxqoAYHYzS+xttvuyN9wZS8CrgwLIlT8Ewo/CCFI11PEO+g
JyNPvWPRQsyt1SE60reaIsie2bQTg3DYIg0PmH+ZOlNkpKesPULzdlw4Rx3dD/M3
Lkrm977h4Y70ZKC+tbvoYKCCOIkUVevny1PVZ+mB94rb0mMgawSTrct03QCg/w6a
(...etc...)
OPv1Xhqf0u7YYORFnWeVtvzKIxVaiEYEGBECAAYFAjnY2vQACgkQ+75aMGJLsklB
WgCeN7z9xk52y/aoaCuF6hYb0d+3k98AoMRxvHuXI1Nc2FXY/x65PwHiUbaY
%
Now, while you could compare things byte-by-byte here, what I've done as a "casual check" is just pick random strings in the text and see if they match up. For example, you can see that "reaIsie2" is present in both. They both start with and end with similar strings. The real test, of course, is to see if GPG recognizes it as a valid key.
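The casual check can also be automated. A hedged Python sketch that converts the TYPE37 hex payload into base64 for comparison; it assumes the payload starts with the 5 bytes of CERT RDATA header (2-byte type, 2-byte key tag, 1-byte algorithm, the "0003 0000 00" you can see in the make-dns-cert output above), after which the raw key material begins:

```python
import base64
import binascii

def type37_hex_to_base64(hex_payload):
    """Decode an RFC 3597 \\# hex payload and base64-encode the key material."""
    raw = binascii.unhexlify("".join(hex_payload.split()))
    key = raw[5:]  # skip cert type (2 bytes), key tag (2), algorithm (1)
    return base64.b64encode(key).decode("ascii")
```

Running it on the start of the hex above ("0003 0000 00 9901A2...") yields output beginning "mQGi", matching both the dig CERT output and the openssl base64 of the exported key.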
By the way, since I use DNSSEC, dnssec-signzone rewrites this record into the proper "presentation format" for me, which is base64. If you want a similar function, you can use named-compilezone to get some of the same effects.
Testing with gpg
As above, the command to test this is remarkably simple:
%rm /tmp/gpg-*
%echo "foo" | gpg --no-default-keyring --keyring /tmp/gpg-$$ --encrypt --armor --auto-key-locate cert -r danm@prime.gushi.org
gpg: keyring `/tmp/gpg-39996' created
gpg: key 624BB249: public key "Daniel P. Mahoney <danm@prime.gushi.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: automatically retrieved `danm@prime.gushi.org' via DNS CERT
gpg: DE20C529: There is no assurance this key belongs to the named user
pub 2048g/DE20C529 2000-10-02 Daniel P. Mahoney <danm@prime.gushi.org>
Primary key fingerprint: C206 3054 5492 95F3 3490 37FF FBBE 5A30 624B B249
Subkey fingerprint: CE40 B786 81E2 5CB9 F7D3 1318 9488 EB58 DE20 C529
It is NOT certain that the key belongs to the person named
in the user ID. If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) y
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.10 (FreeBSD)
hQIOA5SI61jeIMUpEAf/Sx7MKWm+e9EpUTSrDaBp4nJfDcBeqbYJulPRbDZz7eVW
2+ol6sG0jWjuirbG1YppZccEr9mgqaQujdSXb/bleD8POS0TEWuf3aPswFQvHf90
NLEzHt6BnfLoeobXXxyCflNaGX8zW+XgJtwZqAc2+jietuz8MOUhrf5m17CsW/wZ
IuEqwaek+K1irJp+w3rhaE08Jzb/S4CCifeW9J3mK57chQoPOu7Nz3rY666YKp/3
9T9StOgmFiNpvtFPNy4N7hHMHvbQwRsKlnkl+a7n0Aq2+OF4d1+/k2EE4uSGgcz0
oHvee8DnuOx3P92mO4Jz5/0O0lwBD7I51iOjzUurTAgAiIM5sHV8/QFCVzH9Ule+
gd8Wo5momcphkU/AXpce5Xgi/Vm4oGQ0x0queii8afUrzkpeN5SuwgQfAdOPiXW5
2bo527jBllxOxjeBasfky82XheTnLzbAQNvQNTEM9zE7zCl1LQJUZEJ1hVzcOevI
s+cm/AaGII9VkrAtSt3aLSRZuRJHFmhGvYd2Hz5WzcV1YFjXXP1eLwfetDBlaeB9
/K5v4hZBkIZPbHX0DcLVrP96mCIT4wCBYSJw+I6n0E6Fz3IfybQG2HMfqWp966/c
00ijx/aRDh42Dr/fTropuzzFzQr7weYDa1JnN3Zoftv6Zb/n+NcrmMiDCH8jJV6E
uMkaeeB5Mv7ssDQ9kPhO989CHFcznrE1lgOxjX8=
=NTLY
-----END PGP MESSAGE-----
%
Okay, as above, try to decrypt that with your private key.
IPGP CERT Records
Also known as: The "little" or "short" CERT record. (These terms are purely my own).
Relevant RFCs: RFC 2538, RFC 4398, specifically sections 2.1 and 3.3
IPGP certs are interesting. It's basically the same pieces of information as are in the PKA record, as above, except that it's supported by an RFC. Despite the RFC compliance, I am not sure if any non-gpg client knows to look for them. However, because it's a DNS CERT, make-dns-cert encodes the information in binary, and your DNS server will serve it as base64. So verifying it visually is harder than verifying either of the above.
Advantages
Disadvantages
Relies on the URI scheme. I haven't yet been able to get a definitive list of which URI schemes are supported, although I've seen http and finger. I've also seen reports that unless gpg is compiled against curl, HTTP/1.1 is not supported (in practice, any host that supports SSL will probably work, because of some of the nuances of SSL).
With PGP certs and IPGP certs, GPG will only parse the first key it gets, so if you publish both, and one doesn't work, there's no failover. I've argued that this should be fixed.
Requires make-dns-cert, which is not built in GPG by default. (But see "A Better Way" below)
Requires publication in your main DNS zone.
Despite being RFC compliant, GPG has additional trust vectors for PKA but not this, despite the fact that they share basically the same information.
Harder to verify with dig.
Howto
Note that some of these steps are redundant. If you're already doing a PKA key, skip to step 5.
Dig:
%gpg --list-keys danm@prime.gushi.org
Warning: using insecure memory!
pub 1024D/624BB249 2000-10-02 <-- I'm going to use this one.
uid Daniel P. Mahoney <danm@prime.gushi.org>
uid Daniel Mahoney (Secondary Email) <gushi@gushi.org>
sub 2048g/DE20C529 2000-10-02
pub 1024R/309C17C5 1997-05-08
uid Daniel P. Mahoney <danm@prime.gushi.org>
Export the key to a file (I use keyid.pub.asc, but it can be anything)
%gpg --export --armor 624BB249 > 624BB249.pub.asc
Warning: using insecure memory!
%
Get the fingerprint for your key:
%gpg --list-keys --fingerprint 624BB249
gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
pub 1024D/624BB249 2000-10-02
Key fingerprint = C206 3054 5492 95F3 3490 37FF FBBE 5A30 624B B249 <-- That bit is your fingerprint.
uid Daniel P. Mahoney <danm@prime.gushi.org>
uid Daniel Mahoney (Secondary Email) <gushi@gushi.org>
sub 2048g/DE20C529 2000-10-02
As above, run make-dns-cert. This time we use the -n, -f, and -u options:
%make-dns-cert -n danm.prime.gushi.org. -f C2063054549295F3349037FFFBBE5A30624BB249 -u http://prime.gushi.org/danm.pubkey.txt
danm.prime.gushi.org. TYPE37 \# 64 0006 0000 00 14 C2063054549295F3349037FFFBBE5A30624BB249
687474703A2F2F7072696D652E67757368692E6F72672F64616E6D2E7075626B65792E747874
%
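For reference, the generic record above is pure hex with a fixed layout: a two-byte certificate type (0006 = IPGP), a two-byte key tag, a one-byte algorithm, then the certificate body, which for IPGP is a one-byte fingerprint length, the fingerprint, and the URL. As a sanity check, here's a sketch in Python (not part of the original toolchain; field layout per RFC 4398) that rebuilds the same 64-byte RDATA from the fingerprint and URL:

```python
# Rebuild the generic (RFC 3597) RDATA for the IPGP CERT record above.
# CERT RDATA layout (RFC 4398): type (2 bytes), key tag (2), algorithm (1),
# then the IPGP certificate body: fingerprint length (1), fingerprint, URL.
fingerprint = "C2063054549295F3349037FFFBBE5A30624BB249"
url = "http://prime.gushi.org/danm.pubkey.txt"

rdata = (
    "0006"                               # certificate type 6 = IPGP
    + "0000"                             # key tag (0; unused for IPGP)
    + "00"                               # algorithm (0 = none)
    + "%02X" % (len(fingerprint) // 2)   # fingerprint length: 0x14 = 20 bytes
    + fingerprint
    + url.encode("ascii").hex().upper()  # the key URL, hex-encoded
)

print(len(rdata) // 2)  # 64, matching the "\# 64" length in the record
print(rdata)
```

The long trailing run of hex is simply the URL in ASCII, which is why it's recognizable in make-dns-cert's output.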
Put the above in DNS. All on one line. Optionally add a TTL.
IMPORTANT: make sure you don't have any other CERT records with the same label (i.e. a "big" cert, as above). While it won't break things, you have no control over which (of multiple) records people will get.
Reload your zone, and test. Testing will probably look VERY MUCH like the above, but here are the steps anyway:
Testing
Dig:
%dig +short danm.prime.gushi.org CERT
6 0 0 FMIGMFRUkpXzNJA3//u+WjBiS7JJaHR0cDovL3ByaW1lLmd1c2hpLm9y Zy9kYW5tLnB1YmtleS50eHQ=
Sadly, I haven't come across an easy way to decipher it yet, but there's always gpg.
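That said, it's not hard to unpack by hand: dig already shows the type (6 = IPGP), key tag, and algorithm, and the base64 blob is just a length-prefixed fingerprint followed by the key URL (layout per RFC 4398). A quick Python sketch:

```python
import base64

# The base64 blob from the dig output above (dig splits it with whitespace).
blob = ("FMIGMFRUkpXzNJA3//u+WjBiS7JJaHR0cDovL3ByaW1lLmd1c2hpLm9y"
        "Zy9kYW5tLnB1YmtleS50eHQ=")
cert = base64.b64decode(blob)

fp_len = cert[0]                         # first byte: fingerprint length (0x14 = 20)
fingerprint = cert[1:1 + fp_len].hex().upper()
url = cert[1 + fp_len:].decode("ascii")  # everything after the fingerprint is the URL

print(fingerprint)
print(url)
```

The fingerprint comes out matching the gpg --fingerprint output from earlier, and the URL is where the key lives.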
GPG:
Since we're fetching the same kind of record, the command is exactly the same as before:
%echo "foo" | gpg --no-default-keyring --keyring /tmp/gpg-$$ --encrypt --armor --auto-key-locate cert -r danm@prime.gushi.org
gpg: WARNING: using insecure memory!
gpg: please see http://www.gnupg.org/faq.html for more information
gpg: keyring `/tmp/gpg-39996' created
gpg: requesting key 624BB249 from http server prime.gushi.org
gpg: key 624BB249: public key "Daniel P. Mahoney <danm@prime.gushi.org>" imported
gpg: public key of ultimately trusted key CF45887D not found
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Total number processed: 1
gpg: imported: 1
gpg: automatically retrieved `danm@prime.gushi.org' via DNS CERT
gpg: DE20C529: There is no assurance this key belongs to the named user
pub 2048g/DE20C529 2000-10-02 Daniel P. Mahoney <danm@prime.gushi.org>
Primary key fingerprint: C206 3054 5492 95F3 3490 37FF FBBE 5A30 624B B249
Subkey fingerprint: CE40 B786 81E2 5CB9 F7D3 1318 9488 EB58 DE20 C529
It is NOT certain that the key belongs to the person named
in the user ID. If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) y
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.10 (FreeBSD)
hQIOA5SI61jeIMUpEAgApZurJi3hZmDaUFjB2j93eX/lTl96xq6T//sz6nT6jcTx
IPnq1RN8IrIQPjDBByHdqOZBT5hhblr9xi7NKIIv3W4q4L0z0fJx7NERPZNvn/H0
DkTwfDgAvCRxcKjenpLSwKZFwLjyfS7wjlDr3HFX7Tila0hbzplHslvgTE0QMcd7
7oNmEyOL3z+yZr/afQGp2wpzDv4YB9zOiNHcHcenqX0yrtiqKozZ9VAldi53rb/q
f38lwInbveyAcEQkE2iFwhRsbMR4VLcsBoxY6D9brsBprt23ey8Rnv+bQ9IAR0VN
/WYzU4zUUqb8HmpNFXQLEgH8A2BENw+bxkVYHjSfWQf/cBSGAzfBQQVJ7qp4tN0Z
FRVe51dokbU4NM9tGBdCzFHWARVkQX/Ulekd4F3sxBR/sum1UOT2xl2THVBz7/Pq
UCrTRPA0uH4dIbL5JpfGZhqsJ079+wmUWUtJIiO2wXi7ePEA/DrBC6p7jlmjyYN/
AeSKcPoTeLX+zryV5bECx4RO6S56EEcy0Ns0pASGMsgUnKL6Adrv3Y6ea3ZAOQMn
H9Uo28BKTKNUvUaBpN8cV8jIbKYPPW9i04kvEQRqs5rdamERCY1vVTqYTrcLsNqz
fF3KopX+V82X1oE2QuGdFfd8mK57ZXJL3VRUrfohQjhfYNKzougiP46rQQv79MYT
j8kazWyJUuufm6NVco1/35Zdp1UhHu8qTgXxrjo=
=zY9G
-----END PGP MESSAGE-----
%
Strangely, the output doesn't say quite what PKA's does (a PKA retrieval has a line about fetching via HTTP); however, by checking my webserver logs, I can see it retrieved the key from there:
%tail -200 /usr/local/apache/logs/prime.gushi.org.log | grep pubkey | tail -1
prime.gushi.org 72.9.101.130 - - [28/Oct/2009:23:50:43 -0400] "GET /danm.pubkey.txt HTTP/1.1" 200 4337 "-" "-"
%
As usual, test decryption, etc. You're done. Figure out which of these are useful to you. When someone asks for your public key, tell them to run the above command instead of mailing it to them.
Look into embracing DNSSEC. With a signed root, there's a good trust-path vector here. Who knows, maybe some day GPG will be dnssec-aware so it will give more credit to a secure DNS transaction.
A better way
In reading over a lot of these commands, I've come across a few problems with the tools involved. They either require you to assemble large records by hand, or manipulate huge files.
DNS has also come a long way since these tools were written, and RFCs have since solidified the "presentation format" (i.e. the "master file format") of what CERT records should look like. On top of everything, the make-dns-cert tool is not built by default, and is not present in most binary distributions (rpm's, apt).
Thus, I took it upon myself to rewrite make-dns-cert as a shell script.
Advantages
Extracts your key for you (takes a keyid as the argument).
Formats all three record types for you, you can pipe it right into your zone file.
Takes email address as an argument, generates record label.
No compiling needed.
Should work on most systems; requires only openssl, sed, and a few other standard utilities.
Generates base64-ified CERT records, split into easy, manageable pieces.
Generates DNS-friendly comments, so repeating tasks are easy to reference.
(Eventually) available as a tarball, or as a paste-and-go script.
Arguments are in logical DNS record order: email-address keyid [url]
Generates a cert record without a URI (this is legal per RFC4398)
You can see sample output here, and you can view it [here](http://www.gushi.org/make-dns-cert/make-dns-cert.sh.txt). Depending on your MIME settings, you can probably get a download link if you go here. If you see the script, you can just save-as.
README, Changelog, TODO coming soon.
Other notes
I'm not 100 percent sure (mainly because I haven't tried), but with IPGP cert, and PKA, I believe I could in theory point at a keyserver directly, for example, specify a uri of http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xB0307039309C17C5. I'm a bit dubious about the question marks and equals-signs, or if I might have to uri-encode things. It's something to be tried.
I'm trying to convince the GPG people that this would be much better adopted if the make-dns-cert tool were built/included by default, or if its function were folded into gpg itself rather than shipped as a third-party tool. This would be analogous to how ssh-keygen can generate SSHFP DNS records directly.
It doesn't do any actual cryptography, just some binary conversion, so in theory it could be rewritten in pure-perl, so there's nothing to compile.
I've made the argument to the GPG developers that if multiple CERT records are available, all should be tried if one fails. So far, if multiple exist, only the first received is parsed, and of course, DNS round-robins the answers by default.
It took me quite a lot of trial and error to realize that there's a difference between "modern" RSA keys, like this:
%gpg --list-keys --fingerprint gushi@prime.gushi.org
pub 2048R/CF45887D 2009-10-29
Key fingerprint = FCB0 485E 050D DDFA 83C6 76E3 E722 3C05 CF45 887D
uid Gushi Test <gushi@prime.gushi.org>
sub 2048R/C9761244 2009-10-29
and ancient RSA keys like this pgp2.6.2 monster:
%gpg --list-keys --fingerprint danm@prime.gushi.org
pub 1024R/309C17C5 1997-05-08
Key fingerprint = 04 4B 1A 2E C4 62 95 73 73 A4 EA D0 08 A4 45 76
uid Daniel P. Mahoney <danm@prime.gushi.org>
Note the lack of a subkey there. Note the weird fingerprint. I have not been able to get this key to properly export with gpg. If someone knows the Deep Magic, let me know.
References
Blog posts and list threads
While researching this I came across little more than a few blog posts, and a few short discussions on the gpg-devel mailing list.
A blog entry that seems to have things mostly right.
GPG Mailing List Discussion which seems to date to when these features were first added.
My own thread on the gnupg-users mailing list that led up to this doc.
A slideshow of a talk given on PKA (really the only doc I could find with regard to PKA). Note that this is a postscript doc, for reasons I cannot fathom.
RFCs
RFC 3597 defines the odd format of the records that make-dns-cert generates, if it confuses you.
RFC 2538, which was superseded by RFC 4398, defines the format for a CERT record.
Todo
At least one GPG enthusiast has suggested to me that any tools I write to handle keys should simply be able to insert them using nsupdate. I don't disagree, but there's a complicated metric there as some of these require manipulation of a site's main zone, or at the very least, many subzones. In doing this I'd also like to find out a bit about how to do nsupdate with sig(0) and KEY records, which with the right policies would mean I could do this without touching named.conf. That may be the subject of a whole other howto.
I need to get the shell script cleaned up a bit more, and generate proper docs, and start tracking it with version control.
I should probably get the gumption up to formally license all this stuff. For right now, I'd declare it under the ISC License.
About the author
Dan Mahoney is a Systems Admin in the Bay Area, California. In his spare time he enjoys thinking for those brief fleeting moments what he would do if he had more free time. Keyid 624BB249, or email address danm@prime.gushi.org.
About this Document
This document was written in gnu nano, and HTML was generated using Markdown.
Originally published on my livejournal at http://gushi.livejournal.com/524199.html, its main home is at http://www.gushi.org/make-dns-cert/HOWTO.html, which is where later versions will be published.
Free to use, comments to the above email address are welcome.
$Id: HOWTO.txt,v 1.2 2009/10/30 07:48:12 danm Exp $
Given a function which produces a random integer in the range 1 to 5, write a function which produces a random integer in the range 1 to 7.
What is a simple solution?
What is an effective solution to reduce memory usage or run on a slower CPU?
This is equivalent to Adam Rosenfield's solution, but may be a bit more clear for some readers. It assumes rand5() is a function that returns a statistically random integer in the range 1 through 5 inclusive.
int rand7()
{
int vals[5][5] = {
{ 1, 2, 3, 4, 5 },
{ 6, 7, 1, 2, 3 },
{ 4, 5, 6, 7, 1 },
{ 2, 3, 4, 5, 6 },
{ 7, 0, 0, 0, 0 }
};
int result = 0;
while (result == 0)
{
int i = rand5();
int j = rand5();
result = vals[i-1][j-1];
}
return result;
}
How does it work? Think of it like this: imagine printing out this double-dimension array on paper, tacking it up to a dart board and randomly throwing darts at it. If you hit a non-zero value, it's a statistically random value between 1 and 7, since there are an equal number of non-zero values to choose from. If you hit a zero, just keep throwing the dart until you hit a non-zero. That's what this code is doing: the i and j indexes randomly select a location on the dart board, and if we don't get a good result, we keep throwing darts.
Like Adam said, this can run forever in the worst case, but statistically the worst case never happens. :)
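The uniformity claim is easy to check exhaustively: each of the values 1 through 7 appears exactly three times among the 25 cells, so conditioned on hitting a non-zero cell, every value is equally likely. A quick check of the table (a Python sketch mirroring the C array above):

```python
# The 5x5 "dart board" from the C code above, flattened row by row.
vals = [
    1, 2, 3, 4, 5,
    6, 7, 1, 2, 3,
    4, 5, 6, 7, 1,
    2, 3, 4, 5, 6,
    7, 0, 0, 0, 0,
]

counts = {v: vals.count(v) for v in range(8)}
print(counts)  # each of 1..7 appears 3 times; the 0 "re-throw" cell appears 4 times
```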
There is no (exactly correct) solution which will run in a constant amount of time, since 1/7 is an infinite decimal in base 5. One simple solution would be to use rejection sampling, e.g.:
int i;
do
{
i = 5 * (rand5() - 1) + rand5(); // i is now uniformly random between 1 and 25
} while(i > 21);
// i is now uniformly random between 1 and 21
return i % 7 + 1; // result is now uniformly random between 1 and 7
This has an expected runtime of 25/21 = 1.19 iterations of the loop, but there is an infinitesimally small probability of looping forever.
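Why 21? It's the largest multiple of 7 not exceeding 25, so the 21 surviving values of i map onto 1 through 7 exactly three times each. That's easy to verify exhaustively (a small Python sketch):

```python
from collections import Counter

# Each surviving i (1..21) maps to i % 7 + 1; count how often each result appears.
images = Counter(i % 7 + 1 for i in range(1, 22))
print(sorted(images.items()))  # every value 1..7 is produced exactly 3 times
```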
I'd like to add another answer, in addition to my first answer. This answer attempts to minimize the number of calls to rand5() per call to rand7().
The entropy of a random variable is a well-defined quantity. For a random variable which takes on N states with equal probabilities (a uniform distribution), the entropy is log2(N) bits. A rand5() output therefore carries log2(5) ≈ 2.32 bits of entropy, while a rand7() output requires log2(7) ≈ 2.81 bits, so in the best case we need log(7)/log(5) ≈ 1.21 calls to rand5() per call to rand7().
So how do we do it? We generate an infinitely precise random real number between 0 and 1, whose base-5 digits a_i are the successive outputs of rand5(). For example, if our RNG chose a_i = 1 for all i, then ignoring the fact that that isn't very random, that would correspond to the real number 1/5 + 1/5^2 + 1/5^3 + ... = 1/4 (sum of a geometric series).
Ok, so we've picked a random real number between 0 and 1. I now claim that such a random number is uniformly distributed. Intuitively, this is easy to understand, since each digit was picked uniformly, and the number is infinitely precise. However, a formal proof of this is somewhat more involved, since now we're dealing with a continuous distribution instead of a discrete distribution, so we need to prove that the probability that our number lies in an interval [a, b] within [0, 1] equals the length of that interval, b - a.
Now that we have a random real number selected uniformly from the range [0, 1], we need to convert it to a series of uniformly random numbers in the range [0, 6] to generate the output of rand7().
Taking the example from earlier, if our rand5() produced an infinite stream of 1's, the real number would be 1/4, which in base 7 is 0.151515..., so the generated outputs would be 1, 5, 1, 5, 1, 5, and so on.
Ok, so we have the main idea, but we have two problems left: we can't actually compute or store an infinitely precise real number, so how do we deal with only a finite portion of it? Secondly, how do we actually convert it to base 7?
One way we can convert a number between 0 and 1 to base 7 is as follows: multiply it by 7; the integer part of the result is the next base-7 digit; repeat with the remaining fractional part.
To deal with the problem of infinite precision, we compute a partial result, and we also store an upper bound on what the result could be. That is, suppose we've called rand5() some number of times: the digits drawn so far confine the real number to a narrow interval, and every number in that interval remains equally likely.
So, keeping track of the current number so far, and the maximum value it could ever take, we convert both to base 7. If they agree on the next digit, that digit of the output is fully determined, and we can emit it; if they disagree, we need to draw more rand5() digits first.
And that's the algorithm -- to generate the next output of rand7(), we draw only as many rand5() digits as are needed to determine the next base-7 digit with certainty. Here is a Python implementation:
import math
import random
rand5_calls = 0
def rand5():
global rand5_calls
rand5_calls += 1
return random.randint(0, 4)
def rand7_gen():
state = 0
pow5 = 1
pow7 = 7
while True:
if state / pow5 == (state + pow7) / pow5:
result = state / pow5
state = (state - result * pow5) * 7
pow7 *= 7
yield result
else:
state = 5 * state + pow7 * rand5()
pow5 *= 5
if __name__ == '__main__':
r7 = rand7_gen()
N = 10000
x = list(next(r7) for i in range(N))
distr = [x.count(i) for i in range(7)]
expmean = N / 7.0
expstddev = math.sqrt(N * (1.0/7.0) * (6.0/7.0))
print '%d TRIALS' % N
print 'Expected mean: %.1f' % expmean
print 'Expected standard deviation: %.1f' % expstddev
print
print 'DISTRIBUTION:'
for i in range(7):
print '%d: %d (%+.3f stddevs)' % (i, distr[i], (distr[i] - expmean) / expstddev)
print
print 'Calls to rand5: %d (average of %f per call to rand7)' % (rand5_calls, float(rand5_calls) / N)
Note that rand7_gen() is a generator, because the algorithm carries state (the partially converted number) across calls.
Also note that the numbers here get big quickly: state, pow5, and pow7 grow without bound over time.
In one run of this, I made 12091 calls to rand5() for 10000 calls to rand7(), an average of 1.2091, essentially the theoretical minimum of log(7)/log(5).
In order to port this code to a language that doesn't have arbitrarily large integers built-in, you'll have to cap the values of state, pow5, and pow7, occasionally throwing away accumulated entropy to keep them in range.
(I have stolen Adam Rosenfeld's answer and made it run about 7% faster.)
Assume that rand5() returns one of {0,1,2,3,4} with equal distribution and the goal is return {0,1,2,3,4,5,6} with equal distribution.
int rand7() {
i = 5 * rand5() + rand5();
max = 25;
//i is uniform among {0 ... max-1}
while(i < max%7) {
//i is uniform among {0 ... (max%7 - 1)}
i *= 5;
i += rand5(); //i is uniform {0 ... (((max%7)*5) - 1)}
max %= 7;
max *= 5; //once again, i is uniform among {0 ... max-1}
}
return(i%7);
}
We're keeping track of the largest value that the loop can make in the variable max.
Edit: The expected number of calls to rand5() is the value x in this equation:
x = 2 * 21/25
+ 3 * 4/25 * 14/20
+ 4 * 4/25 * 6/20 * 28/30
+ 5 * 4/25 * 6/20 * 2/30 * 7/10
+ 6 * 4/25 * 6/20 * 2/30 * 3/10 * 14/15
+ (6+x) * 4/25 * 6/20 * 2/30 * 3/10 * 1/15
x = about 2.21 calls to rand5()
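That equation can be solved numerically by iterating it as a fixed point; the coefficient on the x term is tiny, so it converges almost immediately (a Python sketch of the arithmetic above):

```python
# Solve x = f(x) for the expected number of rand5() calls by fixed-point iteration.
def f(x):
    return (2 * 21/25
            + 3 * 4/25 * 14/20
            + 4 * 4/25 * 6/20 * 28/30
            + 5 * 4/25 * 6/20 * 2/30 * 7/10
            + 6 * 4/25 * 6/20 * 2/30 * 3/10 * 14/15
            + (6 + x) * 4/25 * 6/20 * 2/30 * 3/10 * 1/15)

x = 0.0
for _ in range(20):   # the x coefficient is ~6.4e-5, so this converges immediately
    x = f(x)
print(round(x, 2))    # 2.21
```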
int randbit( void )
{
while( 1 )
{
int r = rand5();
if( r <= 4 ) return(r & 1);
}
}
int randint( int nbits )
{
int result = 0;
while( nbits-- )
{
result = (result<<1) | randbit();
}
return( result );
}
int rand7( void )
{
while( 1 )
{
int r = randint( 3 ) + 1;
if( r <= 7 ) return( r );
}
}
rand7() = (rand5()+rand5()+rand5()+rand5()+rand5()+rand5()+rand5())%7+1
Edit: That doesn't quite work. It's off by about 2 parts in 1000 (assuming a perfect rand5): the seven buckets come out slightly, but measurably, uneven.
By switching to a sum of n calls to rand5(), the error drops off as n grows:
n Error%
10 +/- 1e-3,
12 +/- 1e-4,
14 +/- 1e-5,
16 +/- 1e-6,
...
28 +/- 3e-11
The accuracy seems to gain an order of magnitude for every 2 added to n.
BTW: the table of errors above was not generated via sampling but by the following recurrence relation:
p[1,1] ... p[5,1] = 1
p[6,1] ... p[7,1] = 0
p[1,n] = p[7,n-1] + p[6,n-1] + p[5,n-1] + p[4,n-1] + p[3,n-1]
p[2,n] = p[1,n-1] + p[7,n-1] + p[6,n-1] + p[5,n-1] + p[4,n-1]
p[3,n] = p[2,n-1] + p[1,n-1] + p[7,n-1] + p[6,n-1] + p[5,n-1]
p[4,n] = p[3,n-1] + p[2,n-1] + p[1,n-1] + p[7,n-1] + p[6,n-1]
p[5,n] = p[4,n-1] + p[3,n-1] + p[2,n-1] + p[1,n-1] + p[7,n-1]
p[6,n] = p[5,n-1] + p[4,n-1] + p[3,n-1] + p[2,n-1] + p[1,n-1]
p[7,n] = p[6,n-1] + p[5,n-1] + p[4,n-1] + p[3,n-1] + p[2,n-1]
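The recurrence is easy to run directly; each p[k, n] counts the ways n rand5() draws can sum to residue k mod 7. A Python sketch (0-indexed, so index 6 plays the role of p[7, n], i.e. sums divisible by 7):

```python
# p[k] counts the ways n draws of rand5() (values 1..5) sum to residue k+1 mod 7;
# index 6 corresponds to p[7, n] in the recurrence above.
def bucket_counts(n):
    p = [1, 1, 1, 1, 1, 0, 0]  # n = 1: one way each to reach residues 1..5
    for _ in range(n - 1):
        # p[k, n] is the sum of the five buckets p[k-1, n-1] .. p[k-5, n-1], mod 7
        p = [sum(p[(k - j) % 7] for j in range(1, 6)) for k in range(7)]
    return p

p7 = bucket_counts(7)
print(sum(p7) == 5 ** 7)   # all 5^7 outcomes accounted for
print(max(p7) - min(p7))   # small but non-zero: the sum is not quite uniform
```

Since 5^7 = 78125 is not a multiple of 7, the buckets cannot come out exactly equal, which is the bias the table above quantifies.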
int r, ans = 0;
while (ans == 0)
{
    for (int i = 0; i < 3; i++)
    {
        while ((r = rand5()) == 3) {};  // reroll 3s: r is now a fair coin (1,2 vs 4,5)
        ans += (r < 3) << i;            // set bit i; ans == 0 (all-zero bits) is rerolled
    }
}
The following produces a uniform distribution on {1, 2, 3, 4, 5, 6, 7} using a random number generator producing a uniform distribution on {1, 2, 3, 4, 5}. The code is messy, but the logic is clear.
public static int random_7(Random rg) {
int returnValue = 0;
while (returnValue == 0) {
for (int i = 1; i <= 3; i++) {
returnValue = (returnValue << 1) + SimulateFairCoin(rg);
}
}
return returnValue;
}
private static int SimulateFairCoin(Random rg) {
while (true) {
int flipOne = random_5_mod_2(rg);
int flipTwo = random_5_mod_2(rg);
if (flipOne == 0 && flipTwo == 1) {
return 0;
}
else if (flipOne == 1 && flipTwo == 0) {
return 1;
}
}
}
private static int random_5_mod_2(Random rg) {
return random_5(rg) % 2;
}
private static int random_5(Random rg) {
return rg.Next(5) + 1;
}
If we consider the additional constraint of trying to give the most efficient answer, i.e. one that, given an input stream of rand5() values, yields on average the longest possible output stream of rand7() values, the analysis runs as follows.
The simplest way to analyse this is to treat the streams I and O as numbers written in base 5 and base 7 respectively.
Then if we take a section of the input stream of length m, it selects one of 5^m equally likely values.
So this gives a value for n, the number of output values we can extract, bounded by 7^n <= 5^m, i.e. n <= m·(log5/log7).
The difficulty with the above analysis is that 7^n = 5^m has no exact solutions, so some of the input information must always be wasted.
The question is how close to the best possible value of m·(log5/log7) we can attain. For example, when this number approaches close to an integer, can we find a way to achieve exactly that integral number of output values?
If we greedily write 5^m = 7^n0 + 7^n1 + ... + 7^nr + s (repeatedly peeling off the largest power of 7 that fits, with remainder s < 7), we can use each block of 7^nk outcomes to emit nk output values.
If we let T7(N) be the average number of base-7 output values obtainable from N equally likely inputs,
then T7(5^m) = n0·7^n0/5^m + T7(5^m - 7^n0)·(5^m - 7^n0)/5^m.
If we just keep substituting we obtain:
T7(5^m) = n0·7^n0/5^m + n1·7^n1/5^m + ... + nr·7^nr/5^m = (n0·7^n0 + n1·7^n1 + ... + nr·7^nr)/5^m
Hence
L(m) = T7(5^m) = (n0·7^n0 + n1·7^n1 + ... + nr·7^nr)/(7^n0 + 7^n1 + 7^n2 + ... + 7^nr + s)
Another way of putting this is:
If 5^m has 7-ary representation a0 + a1·7 + a2·7^2 + a3·7^3 + ... + ar·7^r,
then L(m) = (a1·7 + 2·a2·7^2 + 3·a3·7^3 + ... + r·ar·7^r)/(a0 + a1·7 + a2·7^2 + a3·7^3 + ... + ar·7^r)
The best possible case is my original one above, where 5^m = 7^n + s for some small s.
Then T7(5^m) = n·7^n/(7^n + s), which is nearly n.
The worst case is when we can only find k and s such that 5^m = k·7 + s.
Then T7(5^m) = 1·(k·7)/(k·7 + s) = 1 + o(1)
Other cases are somewhere in between. It would be interesting to see how well we can do for very large m, i.e. how good we can get the error term:
T7(5^m) = m·(log5/log7) + e(m)
It seems impossible to achieve e(m) = o(1) in general.
The whole thing then rests on the distribution of the 7-ary digits of 5^m for large m.
I'm sure there is a lot of theory out there that covers this I may have a look and report back at some point.
Any number from 1 to 7 can be represented in a sequence of 3 bits.
Use rand5() to randomly fill each bit with 0 or 1:
if the result is 1 or 2, fill the bit with 0; if it is 4 or 5, fill it with 1; if it is 3, discard and roll again.
This way we can fill 3 bits randomly with 0/1 and thus get a number from 1-7.
public static int random_7() {
int returnValue = 0;
while (returnValue == 0) {
for (int i = 1; i <= 3; i++) {
returnValue = (returnValue << 1) + random_5_output_2();
}
}
return returnValue;
}
private static int random_5_output_2() {
while (true) {
int flip = random_5();
if (flip < 3) {
return 0;
}
else if (flip > 3) {
return 1;
}
}
}
Why not do it simple?
int random7() {
return random5() + (random5() % 3);
}
The chances of getting 1 and 7 in this solution are lower due to the modulo; however, if you just want a quick and readable solution, this is the way to go.
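How much lower is easy to see by exhaustive enumeration over the 25 equally likely (r1, r2) pairs (a quick Python sketch):

```python
from collections import Counter

# Enumerate every outcome of random5() + (random5() % 3) over all 25 pairs.
counts = Counter(r1 + (r2 % 3) for r1 in range(1, 6) for r2 in range(1, 6))
print(sorted(counts.items()))
# 1 comes up once in 25, 7 only twice, while 3, 4, and 5 come up five times each
```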
Are homework problems allowed here?
This function does crude "base 5" math to generate a number between 0 and 6.
function rnd7() {
do {
r1 = rnd5() - 1;
do {
r2=rnd5() - 1;
} while (r2 > 1);
result = r2 * 5 + r1;
} while (result > 6);
return result + 1;
}
Here is a working Python implementation of Adam's answer.
import random
def rand5():
return random.randint(1, 5)
def rand7():
while True:
r = 5 * (rand5() - 1) + rand5()
#r is now uniformly random between 1 and 25
if (r <= 21):
break
#result is now uniformly random between 1 and 7
return r % 7 + 1
I like to throw algorithms I'm looking at into Python so I can play around with them, thought I'd post it here in the hopes that it is useful to someone out there, not that it took long to throw together.
Assuming that randint(0, 4) stands in for rand5, returning 0-4 inclusive (Python's randint includes both endpoints):
from random import randint
sum = 7
while sum >= 7:
    first = randint(0, 4)
    toadd = 9999
    while toadd > 1:
        toadd = randint(0, 4)
    if toadd:
        sum = first + 5
    else:
        sum = first
assert 7 > sum >= 0
print sum
The premise behind Adam Rosenfield's correct answer is: compose n calls to rand5 into one of 5^n equally likely values y, then throw away any y larger than the biggest multiple of 7 that fits.
When n equals 2, you have 4 throw-away possibilities: y = {22, 23, 24, 25}. If you use n equals 6, you only have 1 throw-away: y = {15625}.
5^6 = 15625
You call rand5 more times. However, you have a much lower chance of getting a throw-away value (or an infinite loop). There is no way to get zero throw-away values for y: 5^n is never divisible by 7, since 5 and 7 are coprime.
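The trade-off is easy to quantify: n calls to rand5 give 5^n equally likely values, of which 5^n mod 7 must be thrown away. A small Python sketch:

```python
# For n chained rand5() calls there are 5^n outcomes; 5^n mod 7 must be rejected.
def throwaways(n):
    return (5 ** n) % 7

print(throwaways(2), throwaways(6))   # 4 rejects out of 25, vs. only 1 out of 15625
print((5 ** 6 - throwaways(6)) // 7)  # 2232 complete groups of 7
```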
Here's my answer:
static struct rand_buffer {
unsigned v, count;
} buf2, buf3;
void push (struct rand_buffer *buf, unsigned n, unsigned v)
{
buf->v = buf->v * n + v;
++buf->count;
}
#define PUSH(n, v) push (&buf##n, n, v)
int rand16 (void)
{
int v = buf2.v & 0xf;
buf2.v >>= 4;
buf2.count -= 4;
return v;
}
int rand9 (void)
{
int v = buf3.v % 9;
buf3.v /= 9;
buf3.count -= 2;
return v;
}
int rand7 (void)
{
if (buf3.count >= 2) {
int v = rand9 ();
if (v < 7)
return v % 7 + 1;
PUSH (2, v - 7);
}
for (;;) {
if (buf2.count >= 4) {
int v = rand16 ();
if (v < 14) {
PUSH (2, v / 7);
return v % 7 + 1;
}
PUSH (2, v - 14);
}
// Get a number between 0 and 24
int v = 5 * (rand5 () - 1) + rand5 () - 1;
if (v < 21) {
PUSH (3, v / 7);
return v % 7 + 1;
}
v -= 21;
PUSH (2, v & 1);
PUSH (2, v >> 1);
}
}
It's a little more complicated than others, but I believe it minimises the calls to rand5. As with other solutions, there's a small probability that it could loop for a long time.
As long as there aren't seven possibilities left to choose from, draw another random number, which multiplies the number of possibilities by five. In Perl:
$num = 0;
$possibilities = 1;
sub rand7
{
while( $possibilities < 7 )
{
$num = $num * 5 + int(rand(5));
$possibilities *= 5;
}
my $result = $num % 7;
$num = int( $num / 7 );
$possibilities /= 7;
return $result;
}
I know it has been answered, but this seems to work OK; I can't tell you whether it has a bias, but my 'testing' suggests it is at least reasonable.
My (naive?) idea is this:
Accumulate rand5's until there are enough random bits to make a rand7. This takes at most 2 rand5's. To get the rand7 number I use the accumulated value mod 7.
To avoid the accumulator overflowing, and since the accumulator is mod 7 then I take the mod 7 of the accumulator:
(5a + rand5) % 7 = (k*7 + (5a%7) + rand5) % 7 = ( (5a%7) + rand5) % 7
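The identity holds for any accumulator value, since 5a and 5a % 7 differ by a multiple of 7; a quick exhaustive check (Python sketch):

```python
# Verify (5a + r) % 7 == ((5a % 7) + r) % 7 over a sweep of accumulator values.
ok = all((5 * a + r) % 7 == ((5 * a) % 7 + r) % 7
         for a in range(1000) for r in range(5))
print(ok)  # True: reducing the accumulator mod 7 never changes the next output
```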
The rand7() function follows:
(I let the range of rand5 be 0-4 and rand7 is likewise 0-6.)
int rand7(){
static int a=0;
static int e=0;
int r;
a = a * 5 + rand5();
e = e + 5; // added 5/7ths of a rand7 number
if ( e<7 ){
a = a * 5 + rand5();
e = e + 5; // another 5/7ths
}
r = a % 7;
e = e - 7; // removed a rand7 number
a = a % 7;
return r;
}
Edit: Added results for 100 million trials.
'Real' rand functions mod 5 or 7
rand5 : avg=1.999802 0:20003944 1:19999889 2:20003690 3:19996938 4:19995539
rand7 : avg=3.000111 0:14282851 1:14282879 2:14284554 3:14288546 4:14292388 5:14288736 6:14280046
My rand7
Average looks ok and number distributions look ok too.
randt : avg=3.000080 0:14288793 1:14280135 2:14287848 3:14285277 4:14286341 5:14278663 6:14292943
Simple and efficient:
int rand7 ( void )
{
return 4; // this number has been calculated using
// rand5() and is in the range 1..7
}
I don't like ranges starting from 1, so I'll start from 0 :-)
unsigned rand5()
{
return rand() % 5;
}
unsigned rand7()
{
int r;
do
{
r = rand5();
r = r * 5 + rand5();
r = r * 5 + rand5();
r = r * 5 + rand5();
r = r * 5 + rand5();
r = r * 5 + rand5();
} while (r > 15623);
return r / 2232;
}
in php
function rand1to7() {
do {
$output_value = 0;
for ($i = 0; $i < 28; $i++) {
$output_value += rand1to5();
}
} while ($output_value == 140);
$output_value -= 12;
return floor($output_value / 16);
}
loops to produce a random number between 16 and 127, divides by sixteen to create a float between 1 and 7.9375, then rounds down to get an int between 1 and 7. if I am not mistaken, there is a 16/112 chance of getting any one of the 7 outcomes.
By using a rolling total modulo 7, you can get an even distribution without any rejection loop. Both these problems, skew and unbounded looping, are an issue with the simplistic rand5() + rand5()-style solutions.
import random
x = []
for i in range (0,7):
x.append (0)
t = 0
tt = 0
for i in range (0,700000):
########################################
##### qq.py #####
r = int (random.random () * 5)
t = (t + r) % 7
########################################
##### qq_notsogood.py #####
#r = 20
#while r > 6:
#r = int (random.random () * 5)
#r = r + int (random.random () * 5)
#t = r
########################################
x[t] = x[t] + 1
tt = tt + 1
high = x[0]
low = x[0]
for i in range (0,7):
print "%d: %7d %.5f" % (i, x[i], 100.0 * x[i] / tt)
if x[i] < low:
low = x[i]
if x[i] > high:
high = x[i]
diff = high - low
print "Variation = %d (%.5f%%)" % (diff, 100.0 * diff / tt)
And this output shows the results:
pax$ python qq.py
0: 99908 14.27257
1: 100029 14.28986
2: 100327 14.33243
3: 100395 14.34214
4: 99104 14.15771
5: 99829 14.26129
6: 100408 14.34400
Variation = 1304 (0.18629%)
pax$ python qq.py
0: 99547 14.22100
1: 100229 14.31843
2: 100078 14.29686
3: 99451 14.20729
4: 100284 14.32629
5: 100038 14.29114
6: 100373 14.33900
Variation = 922 (0.13171%)
pax$ python qq.py
0: 100481 14.35443
1: 99188 14.16971
2: 100284 14.32629
3: 100222 14.31743
4: 99960 14.28000
5: 99426 14.20371
6: 100439 14.34843
Variation = 1293 (0.18471%)
A simplistic additive approach (the commented-out qq_notsogood.py variant above) gives a far worse distribution:
pax$ python qq_notsogood.py
0: 31756 4.53657
1: 63304 9.04343
2: 95507 13.64386
3: 127825 18.26071
4: 158851 22.69300
5: 127567 18.22386
6: 95190 13.59857
Variation = 127095 (18.15643%)
pax$ python qq_notsogood.py
0: 31792 4.54171
1: 63637 9.09100
2: 95641 13.66300
3: 127627 18.23243
4: 158751 22.67871
5: 126782 18.11171
6: 95770 13.68143
Variation = 126959 (18.13700%)
pax$ python qq_notsogood.py
0: 31955 4.56500
1: 63485 9.06929
2: 94849 13.54986
3: 127737 18.24814
4: 159687 22.81243
5: 127391 18.19871
6: 94896 13.55657
Variation = 127732 (18.24743%)
And, on the advice of Nixuz, I've cleaned the script up so you can just extract and use the rand7() code:
import random
# rand5() returns 0 through 4 inclusive.
def rand5():
return int (random.random () * 5)
# rand7() generator returns 0 through 6 inclusive (using rand5()).
def rand7():
rand7ret = 0
while True:
rand7ret = (rand7ret + rand5()) % 7
yield rand7ret
# Number of test runs.
count = 700000
# Work out distribution.
distrib = [0,0,0,0,0,0,0]
rgen =rand7()
for i in range (0,count):
r = rgen.next()
distrib[r] = distrib[r] + 1
# Print distributions and calculate variation.
high = distrib[0]
low = distrib[0]
for i in range (0,7):
print "%d: %7d %.5f" % (i, distrib[i], 100.0 * distrib[i] / count)
if distrib[i] < low:
low = distrib[i]
if distrib[i] > high:
high = distrib[i]
diff = high - low
print "Variation = %d (%.5f%%)" % (diff, 100.0 * diff / count)
This answer is more an experiment in obtaining the most entropy possible from the Rand5 function. It is therefore somewhat unclear, and almost certainly a lot slower than other implementations.
Assuming the uniform distribution from 0-4 and resulting uniform distribution from 0-6:
public class SevenFromFive
{
public SevenFromFive()
{
// this outputs a uniform distribution but for some reason including it
// screws up the output distribution
// open question Why?
this.fifth = new ProbabilityCondensor(5, b => {});
this.eigth = new ProbabilityCondensor(8, AddEntropy);
}
private static Random r = new Random();
private static uint Rand5()
{
return (uint)r.Next(0,5);
}
private class ProbabilityCondensor
{
private readonly int samples;
private int counter;
private int store;
private readonly Action<bool> output;
public ProbabilityCondensor(int chanceOfTrueReciprocal,
Action<bool> output)
{
this.output = output;
this.samples = chanceOfTrueReciprocal - 1;
}
public void Add(bool bit)
{
this.counter++;
if (bit)
this.store++;
if (counter == samples)
{
bool? e;
if (store == 0)
e = false;
else if (store == 1)
e = true;
else
e = null;// discard for now
counter = 0;
store = 0;
if (e.HasValue)
output(e.Value);
}
}
}
ulong buffer = 0;
const ulong Mask = 7UL;
int bitsAvail = 0;
private readonly ProbabilityCondensor fifth;
private readonly ProbabilityCondensor eigth;
private void AddEntropy(bool bit)
{
buffer <<= 1;
if (bit)
buffer |= 1;
bitsAvail++;
}
private void AddTwoBitsEntropy(uint u)
{
buffer <<= 2;
buffer |= (u & 3UL);
bitsAvail += 2;
}
public uint Rand7()
{
uint selection;
do
{
while (bitsAvail < 3)
{
var x = Rand5();
if (x < 4)
{
// put the two low order bits straight in
AddTwoBitsEntropy(x);
fifth.Add(false);
}
else
{
fifth.Add(true);
}
}
// read 3 bits
selection = (uint)((buffer & Mask));
bitsAvail -= 3;
buffer >>= 3;
if (selection == 7)
eigth.Add(true);
else
eigth.Add(false);
}
while (selection == 7);
return selection;
}
}
The number of bits added to the buffer per call to Rand5 is currently 4/5 * 2 so 1.6. If the 1/5 probability value is included that increases by 0.05 so 1.65 but see the comment in the code where I have had to disable this.
Bits consumed by call to Rand7 = 3 + 1/8 * (3 + 1/8 * (3 + 1/8 * (...
By extracting information from the sevens I reclaim 1/8*1/7 bits per call so about 0.018
This gives a net consumption 3.4 bits per call which means the ratio is 2.125 calls to Rand5 for every Rand7. The optimum should be 2.1.
I would imagine this approach is
There are elegant algorithms cited above, but here's one way to approach it, although it might be roundabout. I am assuming values are generated starting from 0.
R2 = random number generator giving values less than 2 (sample space = {0, 1})
In order to generate R8 from R2, you will run R2 three times, and use the combined result of all 3 runs as a binary number with 3 digits. Here are the possible values when R2 is run three times:
0 0 0 --> 0
0 0 1 --> 1
0 1 0 --> 2
0 1 1 --> 3
1 0 0 --> 4
1 0 1 --> 5
1 1 0 --> 6
1 1 1 --> 7
Now to generate R7 from R8, we simply run R8 again if it returns 7:
int R7() {
    do {
        x = R8();
    } while (x > 6);
    return x;
}
The roundabout solution is to generate R2 from R5 (just like we generated R7 from R8), then R8 from R2 and then R7 from R8.
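That roundabout chain can be sketched in Python (rand5() here is simulated with the random module purely for illustration; the real exercise would supply it):

```python
import random

def rand5():
    # stand-in for the given generator: uniform over 0..4
    return random.randint(0, 4)

def rand2():
    # R2 from R5 by rejection: keep only 0..3, then take the parity
    while True:
        x = rand5()
        if x < 4:
            return x % 2

def rand8():
    # R8 from R2: three fair bits read as a 3-digit binary number
    return rand2() * 4 + rand2() * 2 + rand2()

def rand7():
    # R7 from R8 by rejection: rerun while we get a 7
    while True:
        x = rand8()
        if x < 7:
            return x
```

Each stage is uniform, so the composition is uniform; the only cost is the expected number of retries in the two rejection loops.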
The function you need is *rand1_7()*, I wrote rand1_5() so that you can test it and plot it.
import numpy
def rand1_5():
    return numpy.random.randint(5) + 1

def rand1_7():
    q = 0
    for i in xrange(7):
        q += rand1_5()
    return q % 7 + 1
There you go, uniform distribution and zero rand5 calls.
def rand7(seed=0):
    while True:
        seed += 1
        if seed >= 7:
            seed = 0
        yield seed
Need to set seed beforehand.
Here's a solution that fits entirely within integers and is within about 4% of optimal (i.e. uses 1.26 random numbers in {0..4} for every one in {0..6}). The code's in Scala, but the math should be reasonably clear in any language: you take advantage of the fact that 7^9 + 7^8 is very close to 5^11. So you pick an 11 digit number in base 5, and then interpret it as a 9 digit number in base 7 if it's in range (giving 9 base 7 numbers), or as an 8 digit number if it's over the 9 digit number, etc.:
abstract class RNG {
def apply(): Int
}
class Random5 extends RNG {
val rng = new scala.util.Random
var count = 0
def apply() = { count += 1 ; rng.nextInt(5) }
}
class FiveSevener(five: RNG) {
val sevens = new Array[Int](9)
var nsevens = 0
val to9 = 40353607;
val to8 = 5764801;
val to7 = 823543;
def loadSevens(value: Int, count: Int) {
nsevens = 0;
var remaining = value;
while (nsevens < count) {
sevens(nsevens) = remaining % 7
remaining /= 7
nsevens += 1
}
}
def loadSevens {
var fivepow11 = 0;
var i=0
while (i<11) { i+=1 ; fivepow11 = five() + fivepow11*5 }
if (fivepow11 < to9) { loadSevens(fivepow11 , 9) ; return }
fivepow11 -= to9
if (fivepow11 < to8) { loadSevens(fivepow11 , 8) ; return }
fivepow11 -= to8
if (fivepow11 < 3*to7) loadSevens(fivepow11 % to7 , 7)
else loadSevens
}
def apply() = {
if (nsevens==0) loadSevens
nsevens -= 1
sevens(nsevens)
}
}
If you paste a test into the interpreter (REPL actually), you get:
scala> val five = new Random5
five: Random5 = Random5@e9c592
scala> val seven = new FiveSevener(five)
seven: FiveSevener = FiveSevener@143c423
scala> val counts = new Array[Int](7)
counts: Array[Int] = Array(0, 0, 0, 0, 0, 0, 0)
scala> var i=0 ; while (i < 100000000) { counts( seven() ) += 1 ; i += 1 }
i: Int = 100000000
scala> counts
res0: Array[Int] = Array(14280662, 14293012, 14281286, 14284836, 14287188,
14289332, 14283684)
scala> five.count
res1: Int = 125902876
The distribution is nice and flat (within about 10k of 1/7 of 10^8 in each bin, as expected from an approximately-Gaussian distribution).
just scale your output from your first function
0) you have a number in range 1-5
1) subtract 1 to make it in range 0-4
2) multiply by (7-1)/(5-1) to make it in range 0-6
3) add 1 to increment the range: Now your result is in between 1-7
int getOneToSeven(){
int added = 0;
for(int i = 1; i<=7; i++){
added += getOneToFive();
}
return (added)%7+1;
}
rand25() = 5 * (rand5() - 1) + rand5()

rand7() {
    while (true) {
        int r = rand25();
        if (r <= 21) return r % 7;
    }
}
Why this works: rand25() is uniform over 1..25, the 21 accepted values map three apiece onto each of the seven residues mod 7, and the probability that the loop runs forever is 0.
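The same rejection scheme can be sketched in Python (rand5() is simulated here, returning 1..5 as the pseudocode assumes; the first 21 of the 25 equally likely rand25() outcomes are accepted):

```python
import random

def rand5():
    # stand-in for the given generator: uniform over 1..5
    return random.randint(1, 5)

def rand25():
    # two independent rand5() calls give a uniform value in 1..25
    return 5 * (rand5() - 1) + rand5()

def rand7():
    # accept 1..21: that is 3 * 7 values, three for each residue mod 7
    while True:
        r = rand25()
        if r <= 21:
            return r % 7
```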
|
The code:
count = 0
oldcount = 0
for char in inwords:
    if char == " ":
        anagramlist.append(inwords[oldcount, count])
        oldcount = count
        count = 0
    else:
        count += 1
the error:
Traceback (most recent call last):
File "C:/Users/Knowhaw/Desktop/Python Programs/Anagram solver/HTS anagram.py", line 14,
in <module>
anagramlist.append(inwords[oldcount, count])
TypeError: string indices must be integers
what the hell is going on? count and oldcount are obviously ints, yet the error says they aren't
I can even write
anagramlist.append(inwords[int(oldcount), int(count)])
and get the same error
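The culprit is inwords[oldcount, count]: the comma builds the tuple (oldcount, count), and a string only accepts an integer index or a slice, hence the TypeError (wrapping the ints in int() changes nothing because the tuple itself is the index). A slice needs a colon, and the start index should track the position after the previous space rather than the word length. A possible fix (the sample inwords value is invented for illustration):

```python
inwords = "cat act tac "   # hypothetical input ending in a space
anagramlist = []
start = 0
for i, char in enumerate(inwords):
    if char == " ":
        anagramlist.append(inwords[start:i])  # slice with a colon, not a tuple
        start = i + 1  # the next word begins right after the space
print(anagramlist)  # -> ['cat', 'act', 'tac']
```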
|
Python 3 OOP Part 3 - Delegation: Composition and Inheritance
Previous post
The Delegation Run
If classes are objects what is the difference between types and instances?
When I talk about “my cat” I am referring to a concrete instance of the “cat” concept, which is a subtype of “animal”. So, although both are objects, types can be specialized, while instances cannot.
Usually an object B is said to be a specialization of an object A when:
B has all the features of A
B can provide new features
B can perform some or all the tasks performed by A in a different way
Those targets are very general and valid for any system, and the key to achieving them with the maximum reuse of already existing components is delegation. Delegation means that an object shall perform only what it knows best, and leave the rest to other objects.
Delegation can be implemented with two different mechanisms: composition and inheritance. Sadly, very often only inheritance is listed among the pillars of OOP techniques, forgetting that it is an implementation of the more generic and fundamental mechanism of delegation; perhaps a better nomenclature for the two techniques could be explicit delegation (composition) and implicit delegation (inheritance).
Please note that, again, when talking about composition and inheritance we are talking about a behavioural or structural form of delegation. Another way to think about the difference between composition and inheritance is to consider whether the object knows who can satisfy the request or whether the object itself is the one that satisfies it.
Please, please, please do not forget composition: in many cases, composition can lead to simpler systems, with benefits on maintainability and changeability.
Usually composition is said to be a very generic technique that needs no special syntax, while inheritance and its rules are strongly dependent on the language of choice. Actually, the strong dynamic nature of Python softens the boundary line between the two techniques.
Inheritance Now
In Python a class can be declared as an extension of one or more different classes, through the class inheritance mechanism. The child class (the one that inherits) has the same internal structure as the parent class (the one that is inherited), and in the case of multiple inheritance the language has very specific rules to manage possible conflicts or redefinitions among the parent classes. A very simple example of inheritance is
class SecurityDoor(Door):
    pass
where we declare a new class SecurityDoor that, at the moment, is a perfect copy of the Door class. Let us investigate what happens when we access attributes and methods. First we instance the class
>>> sdoor = SecurityDoor(1, 'closed')
The first check we can do is that class attributes are still global and shared
>>> SecurityDoor.colour is Door.colour
True
>>> sdoor.colour is Door.colour
True
This shows us that Python tries to resolve instance members not only by looking into the class the instance comes from, but also by investigating the parent classes. In this case sdoor.colour becomes SecurityDoor.colour, which in turn becomes Door.colour. SecurityDoor is a Door.
If we investigate the content of __dict__ we can catch a glimpse of the inheritance mechanism in action
>>> sdoor.__dict__
{'number': 1, 'status': 'closed'}
>>> sdoor.__class__.__dict__
mappingproxy({'__doc__': None, '__module__': '__main__'})
>>> Door.__dict__
mappingproxy({'__dict__': <attribute '__dict__' of 'Door' objects>,
'colour': 'yellow',
'open': <function Door.open at 0xb687e224>,
'__init__': <function Door.__init__ at 0xb687e14c>,
'__doc__': None,
'close': <function Door.close at 0xb687e1dc>,
'knock': <classmethod object at 0xb67ff6ac>,
'__weakref__': <attribute '__weakref__' of 'Door' objects>,
'__module__': '__main__',
'paint': <classmethod object at 0xb67ff6ec>})
As you can see the content of __dict__ for SecurityDoor is very narrow compared to that of Door. The inheritance mechanism takes care of the missing elements by climbing up the class tree. Where does Python get the parent classes? A class always contains a __bases__ tuple that lists them
>>> SecurityDoor.__bases__
(<class '__main__.Door'>,)
So an example of what Python does to resolve a class method call through the inheritance tree is
>>> sdoor.__class__.__bases__[0].__dict__['knock'].__get__(sdoor)
<bound method type.knock of <class '__main__.SecurityDoor'>>
>>> sdoor.knock
<bound method type.knock of <class '__main__.SecurityDoor'>>
Please note that this is just an example that does not consider multiple inheritance.
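For the curious, with multiple inheritance the climb follows the class's method resolution order, which Python stores in the __mro__ attribute; a small invented example:

```python
class A:
    def who(self):
        return "A"

class B(A):
    pass

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

# Attribute lookup walks the MRO from left to right
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())  # 'C': C comes before A in the MRO
```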
Let us try now to override some methods and attributes. In Python you can override (redefine) a parent class member simply by redefining it in the child class.
class SecurityDoor(Door):
    colour = 'gray'
    locked = True

    def open(self):
        if not self.locked:
            self.status = 'open'
As you might expect, the overridden members are now present in the __dict__ of the SecurityDoor class
>>> SecurityDoor.__dict__
mappingproxy({'__doc__': None,
'__module__': '__main__',
'open': <function SecurityDoor.open at 0xb6fcf89c>,
'colour': 'gray',
'locked': True})
So when you override a member, the one you put in the child class is used instead of the one in the parent class simply because the former is found before the latter while climbing the class hierarchy. This also shows you that Python does not implicitly call the parent implementation when you override a method. So, overriding is a way to block implicit delegation.
If we want to call the parent implementation we have to do it explicitly. In the former example we could write
class SecurityDoor(Door):
    colour = 'gray'
    locked = True

    def open(self):
        if self.locked:
            return
        Door.open(self)
You can easily test that this implementation is working correctly.
>>> sdoor = SecurityDoor(1, 'closed')
>>> sdoor.status
'closed'
>>> sdoor.open()
>>> sdoor.status
'closed'
>>> sdoor.locked = False
>>> sdoor.open()
>>> sdoor.status
'open'
This form of explicit parent delegation is heavily discouraged, however.
The first reason is because of the very high coupling that results from explicitly naming the parent class again when calling the method. Coupling, in the computer science lingo, means to link two parts of a system, so that changes in one of them directly affect the other one, and is usually avoided as much as possible. In this case if you decide to use a new parent class you have to manually propagate the change to every method that calls it. Moreover, since in Python the class hierarchy can be dynamically changed (i.e. at runtime), this form of explicit delegation could be not only annoying but also wrong.
The second reason is that in general you need to deal with multiple inheritance, where you do not know a priori which parent class implements the original form of the method you are overriding.
To solve these issues, Python supplies the super() built-in function, that climbs the class hierarchy and returns the correct class that shall be called. The syntax for calling super() is
class SecurityDoor(Door):
    colour = 'gray'
    locked = True

    def open(self):
        if self.locked:
            return
        super().open()
The output of super() is not exactly the Door class. It returns a super object whose representation is <super: <class 'SecurityDoor'>, <SecurityDoor object>>. This object, however, acts like the parent class, so you can safely ignore its custom nature and use it just like you would use the Door class in this case.
Enter the Composition
Composition means that an object knows another object, and explicitly delegates some tasks to it. While inheritance is implicit, composition is explicit: in Python, however, things are far more interesting than this =).
First of all let us implement classic composition, which simply makes an object part of the other as an attribute
class SecurityDoor:
    colour = 'gray'
    locked = True

    def __init__(self, number, status):
        self.door = Door(number, status)

    def open(self):
        if self.locked:
            return
        self.door.open()

    def close(self):
        self.door.close()
The primary goal of composition is to relax the coupling between objects. This little example shows that now SecurityDoor is an object and no longer a Door, which means that the internal structure of Door is not copied. For this very simple example both Door and SecurityDoor are not big classes, but in a real system objects can be very complex; this means that their allocation consumes a lot of memory, and if a system contains thousands or millions of objects that could be an issue.
The composed SecurityDoor has to redefine the colour attribute since the concept of delegation applies only to methods and not to attributes, doesn’t it?
Well, no. Python provides a very high degree of indirection for object manipulation, and attribute access is one of the most useful. As you already discovered, attribute access is ruled by a special method called __getattribute__() that is called whenever an attribute of the object is accessed. Overriding __getattribute__(), however, is overkill; it is a very complex method and, since it is called on every attribute access, any change makes the whole thing slower.
The method we have to leverage to delegate attribute access is __getattr__(), which is a special method that is called whenever the requested attribute is not found in the object. So basically it is the right place to dispatch all attribute and method access our object cannot handle. The previous example becomes
class SecurityDoor:
    locked = True

    def __init__(self, number, status):
        self.door = Door(number, status)

    def open(self):
        if self.locked:
            return
        self.door.open()

    def __getattr__(self, attr):
        return getattr(self.door, attr)
Using __getattr__() blends the separation line between inheritance and composition since after all the former is a form of automatic delegation of every member access.
class ComposedDoor:
    def __init__(self, number, status):
        self.door = Door(number, status)

    def __getattr__(self, attr):
        return getattr(self.door, attr)
As this last example shows, delegating every member access through __getattr__() is very simple. Pay attention to getattr() which is different from __getattr__(). The former is a built-in that is equivalent to the dotted syntax, i.e. getattr(obj, 'someattr') is the same as obj.someattr, but you have to use it since the name of the attribute is contained in a string.
Composition provides a superior way to manage delegation since it can selectively delegate the access, even mask some attributes or methods, while inheritance cannot. In Python you also avoid the memory problems that might arise when you put many objects inside another; Python handles everything through its reference, i.e. through a pointer to the memory position of the thing, so the size of an attribute is constant and very limited.
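As a sketch of that selectivity (with a minimal stand-in for the chapter's Door class), a composed wrapper can delegate every member except the ones it wants to mask:

```python
class Door:
    # minimal stand-in for the chapter's Door class
    def __init__(self, number, status):
        self.number = number
        self.status = status

    def open(self):
        self.status = 'open'

    def close(self):
        self.status = 'closed'

class GuardedDoor:
    def __init__(self, number, status):
        self.door = Door(number, status)

    def __getattr__(self, attr):
        if attr == 'open':
            # selectively mask this method: inheritance could not do this
            raise AttributeError(attr)
        return getattr(self.door, attr)

g = GuardedDoor(1, 'closed')
print(g.status)            # 'closed', delegated to the inner Door
print(hasattr(g, 'open'))  # False: the mask hides the method
```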
Movie Trivia
Section titles come from the following movies: The Cannonball Run (1981), Apocalypse Now (1979), Enter the Dragon (1973).
Sources
You will find a lot of documentation in this Reddit post. Most of the information contained in this series comes from those sources.
|
The Python language
About Python
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes programmer productivity and code readability. It has a minimalist core syntax with very few basic commands and simple semantics, but it also has a large and varied standard library, including an Application Programming Interface (API) to many operating system (OS) level functions. Python code, while minimalist, defines built-in objects such as linked lists (list), tuples (tuple), hash tables (dict), and arbitrarily long integers (long).
Python supports multiple programming paradigms, including object-oriented (class), imperative (def) and functional (lambda) programming. Python has a dynamic type system and automatic memory management using reference counting (similar to Perl, Ruby and Scheme).
Python was first released by Guido van Rossum in 1991. The language has an open, community-based development model managed by the non-profit Python Software Foundation. There are many interpreters and compilers that implement the Python language, including one in Java (Jython), but, in this brief review, we focus on the C implementation created by Guido.
You can find many tutorials, the official documentation and the library references of the language on the official Python website. [python]
You can skip this chapter if you are already familiar with the Python language.
Getting started
The binary distributions of web2py for Microsoft Windows or Apple OS X come packaged with the Python interpreter built into the distribution file itself.
You can start it on Windows with the following command (type at the DOS prompt):
web2py.exe -S welcome
On Apple OS X, enter the following command in a Terminal window (assuming you are in the same folder as web2py.app):
./web2py.app/Contents/MacOS/web2py -S welcome
On a Linux or other Unix machine, chances are that you have Python already installed. If so, at a shell prompt type:
python web2py.py -S welcome
If you do not have Python 2.5 (or a later 2.x version) already installed, you will have to download and install it before running web2py.
The -S welcome command line option instructs web2py to run the interactive shell as if the commands were executed in a controller for welcome, the web2py scaffolding application. This exposes almost all web2py classes, objects and functions to you. This is the only difference between the web2py interactive command line and the normal Python command line.
The administrative interface also provides a web-based shell for each application. You can access the one for the "welcome" application at:
http://127.0.0.1:8000/admin/shell/index/welcome
You can try all the examples in this chapter using either the normal shell or the web-based shell.
help, dir
The Python language provides two commands to obtain documentation about objects defined in the current scope, both built-in and user-defined.
We can ask for help about an object, for example "1":
>>> help(1)
Help on int object:
class int(object)
| int(x[, base]) -> integer
|
| Convert a string or number to an integer, if possible. A floating point
| argument will be truncated towards zero (this does not include a string
| representation of a floating point number!) When converting a string, use
| the optional base. It is an error to supply a base when converting a
| non-string. If the argument is outside the integer range a long object
| will be returned instead.
|
| Methods defined here:
|
| __abs__(...)
| x.__abs__() <==> abs(x)
...
y, como "1" es un entero, obtenemos una descripción de la clase int y de todos sus métodos. Aquí la salida fue truncada porque es realmente larga y detallada.
En forma similar, podemos obtener una lista de métodos del objeto "1" con el comando dir:
>>> dir(1)
['__abs__', ..., '__xor__']
Types
Python is a dynamically typed language, meaning that variables do not have a type and therefore do not have to be declared. Values, on the other hand, do have a type. You can query a variable for the type of value it contains:
>>> a = 3
>>> print type(a)
<type 'int'>
>>> a = 3.14
>>> print type(a)
<type 'float'>
>>> a = 'hola Python'
>>> print type(a)
<type 'str'>
Python also includes, natively, data structures such as lists and dictionaries.
str
Python supports two different types of strings: ASCII strings and Unicode strings. ASCII strings are delimited by '...' or "...", or by '''...''' or """...""". Triple quotes delimit multiline strings. Unicode strings start with a u followed by the string containing Unicode characters. A Unicode string can be converted into an ASCII string by choosing an encoding, for example:
>>> a = 'esta es una cadena ASCII'
>>> b = u'esta es una cadena Unicode'
>>> a = b.encode('utf8')
After executing these three commands, the resulting a is an ASCII string storing UTF8-encoded characters. By design, web2py uses UTF8-encoded strings internally.
It is also possible to write variables into strings in various ways:
>>> print 'el número es ' + str(3)
el número es 3
>>> print 'el número es %s' % (3)
el número es 3
>>> print 'el número es %(numero)s' % dict(numero=3)
el número es 3
The last notation is more explicit and less error prone, and is to be preferred.
Many Python objects, for example numbers, can be serialized into strings using str or repr. These two commands are very similar but produce slightly different output. For example:
>>> for i in [3, 'hola']:
print str(i), repr(i)
3 3
hola 'hola'
For user-defined classes, str and repr can be defined/redefined using the special operators __str__ and __repr__. These are briefly described later on; for more, refer to the official Python documentation [pydocs]. repr always has a default value.
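As a brief illustration of the two operators (the Point class is invented for the example), __str__ is meant to be readable and __repr__ unambiguous:

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __str__(self):
        # human-readable form, used by str() and print
        return "(%s, %s)" % (self.x, self.y)

    def __repr__(self):
        # unambiguous form, used by repr() and the interactive prompt
        return "Point(%r, %r)" % (self.x, self.y)

p = Point(1, 2)
print(str(p))   # (1, 2)
print(repr(p))  # Point(1, 2)
```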
Another important characteristic of a Python string is that, like a list, it is an iterable object:
>>> for i in 'hola':
print i
h
o
l
a
list
The main operations on a Python list are append, insert, and del:
>>> a = [1, 2, 3]
>>> print type(a)
<type 'list'>
>>> a.append(8)
>>> a.insert(2, 7)
>>> del a[0]
>>> print a
[2, 7, 3, 8]
>>> print len(a)
4
Lists can be sliced:
>>> print a[:3]
[2, 7, 3]
>>> print a[1:]
[7, 3, 8]
>>> print a[-2:]
[3, 8]
and concatenated:
>>> a = [2, 3]
>>> b = [5, 6]
>>> print a + b
[2, 3, 5, 6]
A list is iterable; you can loop over it:
>>> a = [1, 2, 3]
>>> for i in a:
print i
1
2
3
List elements do not have to be of the same type; they can be any type of Python object.
There is a very common situation for which a list comprehension can be used. Consider the following code:
>>> a = [1,2,3,4,5]
>>> b = []
>>> for x in a:
        if x % 2 == 0:
            b.append(x * 3)
>>> b
[6, 12]
This code clearly processes a list of items, selects and modifies a subset of the input list, and creates a new result list; it can be entirely replaced with the following list comprehension:
>>> a = [1,2,3,4,5]
>>> b = [x * 3 for x in a if x % 2 == 0]
>>> b
[6, 12]
tuple
A tuple is like a list, but its size and elements are immutable, while in a list they are mutable. If a tuple element is an object, the object's attributes are mutable. A tuple is delimited by round brackets.
>>> a = (1, 2, 3)
So while this works for a list:
>>> a = [1, 2, 3]
>>> a[1] = 5
>>> print a
[1, 5, 3]
assignment to an element does not work for a tuple:
>>> a = (1, 2, 3)
>>> print a[1]
2
>>> a[1] = 5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
A tuple, like a list, is an iterable object. Notice that a tuple consisting of a single element must include a trailing comma, as shown below:
>>> a = (1)
>>> print type(a)
<type 'int'>
>>> a = (1,)
>>> print type(a)
<type 'tuple'>
Tuples are very useful for the efficient packing of objects because of their immutability, and the brackets are often optional:
>>> a = 2, 3, 'hola'
>>> x, y, z = a
>>> print x
2
>>> print z
hola
dict
A Python dict (dictionary) is a hash table that maps a key object to a value object. For example:
>>> a = {'k':'v', 'k2':3}
>>> a['k']
v
>>> a['k2']
3
>>> a.has_key('k')
True
>>> a.has_key('v')
False
Keys can be of any hashable type (int, string, or any object whose class implements the __hash__ method). Values can be of any type. Different keys and values in the same dictionary do not have to be of the same type. If the keys are alphanumeric characters, a dictionary can also be declared with the alternative syntax:
>>> a = dict(k='v', h2=3)
>>> a['k']
v
>>> print a
{'k':'v', 'h2':3}
has_key, keys, values and items are useful methods:
>>> a = dict(k='v', k2=3)
>>> print a.keys()
['k', 'k2']
>>> print a.values()
['v', 3]
>>> print a.items()
[('k', 'v'), ('k2', 3)]
The items method produces a list of tuples, each containing a key and its associated value. The del statement can be used to remove elements from a list (by position) or from a dictionary (by key):
>>> a = [1, 2, 3]
>>> del a[1]
>>> print a
[1, 3]
>>> a = dict(k='v', h2=3)
>>> del a['h2']
>>> print a
{'k':'v'}
Internally, Python uses the hash operator to convert objects into integers, and uses that integer to determine where to store the value.
>>> hash("hola mundo")
-1500746465
About indentation
Python uses indentation to delimit blocks of code. A block starts with a line ending in a colon, and continues for all lines that have the same or higher indentation as the next line. For example:
>>> i = 0
>>> while i < 3:
>>>     print i
>>>     i = i + 1
>>>
0
1
2
It is common to use four spaces for each level of indentation. It is good policy not to mix tabs with spaces, which can result in (invisible) confusion.
for...in
In Python, you can loop over iterable objects:
>>> a = [0, 1, 'hola', 'python']
>>> for i in a:
        print i
0
1
hola
python
A common shortcut is xrange, which generates an iterable range without storing the entire list of elements.
>>> for i in xrange(0, 4):
        print i
0
1
2
3
This is equivalent to the C/C++/C#/Java syntax:
for(int i=0; i<4; i=i+1) { print(i); }
Another useful command is enumerate, which counts while looping:
>>> a = [0, 1, 'hola', 'python']
>>> for i, j in enumerate(a):
        print i, j
0 0
1 1
2 hola
3 python
There is also a function range(a, b, c) that returns a list of integers starting with the value a, incrementing by c, and ending with the last value smaller than b. By default, a is 0 and c is 1. xrange is similar but does not actually generate the list, only an iterator over it; it is therefore better suited for building loops.
You can jump out of a loop using break:
>>> for i in [1, 2, 3]:
        print i
        break
1
You can jump to the next loop iteration without executing the entire code block with continue:
>>> for i in [1, 2, 3]:
        print i
        continue
        print 'test'
1
2
3
while
The while loop in Python works much as it does in many other programming languages, by looping an indefinite number of times and testing a condition before each iteration. If the condition is False, the loop ends.
>>> i = 0
>>> while i < 10:
        i = i + 1
>>> print i
10
There is no loop...until construct in Python.
if...elif...else
>>> for i in range(3):
>>>     if i == 0:
>>>         print 'cero'
>>>     elif i == 1:
>>>         print 'uno'
>>>     else:
>>>         print 'otro'
cero
uno
otro
"elif" significa "else if". Tanto elif como else son partes opcionales. Puede haber más de una elif pero sólo una declaración else. Se pueden crear condicionales complicados utilizando los operadores not, and y or.
>>> for i in range(3):
>>>     if i == 0 or (i == 1 and i + 1 == 2):
>>>         print '0 or 1'
try...except...else...finally
Python can throw - pardon, raise - exceptions:
>>> try:
>>>     a = 1 / 0
>>> except Exception, e:
>>>     print 'epa: %s' % e
>>> else:
>>>     print 'sin problemas aquí'
>>> finally:
>>>     print 'listo'
epa: integer division or modulo by zero
listo
If an exception is raised, it is caught by the except clause, which is executed, while the else clause is not. If no exception is raised, the except clause is not executed, but the else one is. The finally clause is always executed. There can be multiple except clauses for different possible exceptions:
>>> try:
>>>     raise SyntaxError
>>> except ValueError:
>>>     print 'error en el valor'
>>> except SyntaxError:
>>>     print 'error sintáctico'
error sintáctico
The else and finally clauses are optional.
Here is a list of the built-in Python exceptions:
BaseException
 +-- HTTP (defined by web2py)
 +-- SystemExit
 +-- KeyboardInterrupt
 +-- Exception
      +-- GeneratorExit
      +-- StopIteration
      +-- StandardError
      |    +-- ArithmeticError
      |    |    +-- FloatingPointError
      |    |    +-- OverflowError
      |    |    +-- ZeroDivisionError
      |    +-- AssertionError
      |    +-- AttributeError
      |    +-- EnvironmentError
      |    |    +-- IOError
      |    |    +-- OSError
      |    |         +-- WindowsError (Windows)
      |    |         +-- VMSError (VMS)
      |    +-- EOFError
      |    +-- ImportError
      |    +-- LookupError
      |    |    +-- IndexError
      |    |    +-- KeyError
      |    +-- MemoryError
      |    +-- NameError
      |    |    +-- UnboundLocalError
      |    +-- ReferenceError
      |    +-- RuntimeError
      |    |    +-- NotImplementedError
      |    +-- SyntaxError
      |    |    +-- IndentationError
      |    |         +-- TabError
      |    +-- SystemError
      |    +-- TypeError
      |    +-- ValueError
      |         +-- UnicodeError
      |              +-- UnicodeDecodeError
      |              +-- UnicodeEncodeError
      |              +-- UnicodeTranslateError
      +-- Warning
           +-- DeprecationWarning
           +-- PendingDeprecationWarning
           +-- RuntimeWarning
           +-- SyntaxWarning
           +-- UserWarning
           +-- FutureWarning
           +-- ImportWarning
           +-- UnicodeWarning
For a detailed description of each of them, refer to the official Python documentation.
web2py exposes only one new exception, called HTTP. When raised, it causes the program to return an HTTP error page (for more on this, refer to Chapter 4).
Any object can be used to raise an exception, but it is good practice to raise exceptions using objects that extend one of the built-in exception classes.
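As a quick sketch of that advice (the class name and message below are made up for illustration, they are not from web2py), a custom exception can extend Exception and be raised and caught like any built-in one. The `except ... as ...` form used here works in Python 2.6+ as well as Python 3:

```python
# Illustrative sketch: a custom exception extending the built-in Exception.
class ValidationError(Exception):
    pass

def check_positive(n):
    # Raise our custom exception for invalid input.
    if n <= 0:
        raise ValidationError('expected a positive number, got %s' % n)
    return n

try:
    check_positive(-1)
except ValidationError as e:
    caught = str(e)  # the message passed to the constructor
```

Because ValidationError extends Exception, a generic `except Exception` handler would also catch it, which is exactly why extending the built-in classes is considered good practice.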
def...return
Functions are declared using def. Here is a typical Python function:
>>> def f(a, b):
return a + b
>>> print f(4, 2)
6
There is no need (or way) to specify the types of the arguments or of the return type(s). In this example, a function f is defined that takes two arguments.
Functions are the first syntactical element described in this chapter that introduces the concept of "scope", or "namespace". In the example above, the identifiers a and b are undefined outside the scope of function f:
>>> def f(a):
return a + 1
>>> print f(1)
2
>>> print a
Traceback (most recent call last):
File "<pyshell#22>", line 1, in <module>
print a
NameError: name 'a' is not defined
Identifiers defined outside the function scope are accessible within the function; observe how the identifier a is handled in the following code:
>>> a = 1
>>> def f(b):
return a + b
>>> print f(1)
2
>>> a = 2
>>> print f(1) # the new value of a is used
3
>>> a = 1 # redefine a
>>> def g(b):
a = 2 # creates a new local a
return a + b
>>> print g(2)
4
>>> print a # global a is unchanged
1
If a is modified, subsequent function calls will use the new value of the global a, because the function definition binds the storage location of the identifier a, not the value of a itself at the moment of the function declaration; however, if a is assigned inside function g, the global a is unaffected, because the new local a hides the global value. The external-scope reference can be used in the creation of closures:
>>> def f(x):
def g(y):
return x * y
return g
>>> duplicador = f(2) # duplicador is a new function
>>> triplicador = f(3) # triplicador is a new function
>>> cuadruplicador = f(4) # cuadruplicador is a new function
>>> print duplicador(5)
10
>>> print triplicador(5)
15
>>> print cuadruplicador(5)
20
Function f creates new functions; note that the scope of the name g is entirely internal to f. Closures are extremely powerful.
Function arguments can have default values, and functions can return multiple results:
>>> def f(a, b=2):
return a + b, a - b
>>> x, y = f(5)
>>> print x
7
>>> print y
3
Function arguments can be passed explicitly by name, which means that the order of the arguments specified in the call can differ from the order of the arguments with which the function was defined:
>>> def f(a, b=2):
return a + b, a - b
>>> x, y = f(b=5, a=2)
>>> print x
7
>>> print y
-3
Functions can also take a variable number of runtime arguments:
>>> def f(*a, **b):
return a, b
>>> x, y = f(3, 'hola', c=4, test='mundo')
>>> print x
(3, 'hola')
>>> print y
{'c':4, 'test':'mundo'}
Here the arguments not passed by name (3, 'hola') are stored in the tuple a, and the arguments passed by name (c and test) are stored in the dictionary b.
In the opposite case, a list or tuple can be passed to a function that requires an ordered set of positional arguments, to be unpacked:
>>> def f(a, b):
return a + b
>>> c = (1, 2)
>>> print f(*c)
3
and a dictionary can be unpacked to pass keyword arguments:
>>> def f(a, b):
return a + b
>>> c = {'a':1, 'b':2}
>>> print f(**c)
3
lambda
lambda provides a quick and concise way to declare nameless functions:
>>> a = lambda b: b + 2
>>> print a(3)
5
The expression "lambda [a]:[b]" literally reads as "a function with arguments [a] that returns [b]". The lambda expression is itself nameless, but the function acquires a name by being assigned to the identifier a. The scoping rules for def apply equally to lambda, and in fact the code above is, with respect to a, identical to the function declaration using def:
>>> def a(b):
return b + 2
>>> print a(3)
5
The only benefit of lambda is brevity; however, brevity can be very convenient in certain situations. Consider a function called map that applies a function to all items in a list, creating a new list:
>>> a = [1, 7, 2, 5, 4, 8]
>>> map(lambda x: x + 2, a)
[3, 9, 4, 7, 6, 10]
This code would have doubled in size had def been used instead of lambda. The main drawback of lambda is that (in the Python implementation) the syntax allows only a single expression; however, for longer functions, def can be used, and the extra cost of providing a function name decreases as the length of the function grows. Just like def, lambda can be used to curry a function: new functions can be created by wrapping existing functions in such a way that the new function carries a different set of arguments:
>>> def f(a, b): return a + b
>>> g = lambda a: f(a, 3)
>>> g(2)
5
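The same currying can also be done with functools.partial from the standard library (not mentioned in the text above, but available since Python 2.5); a minimal sketch:

```python
from functools import partial

def f(a, b):
    return a + b

# g fixes b=3, exactly like the lambda wrapper above
g = partial(f, b=3)
```

Calling g(2) returns 5, just as with the lambda version.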
There are many situations where currying is useful, but one of them is particularly relevant in web2py: caching. Suppose you have an expensive function that checks whether its argument is a prime number:
def esprimo(numero):
    for p in range(2, numero):
        if (numero % p) == 0:
            return False
    return True
This function is obviously time consuming.
Suppose you have a caching function cache.ram that takes three arguments: a key, a function, and a number of seconds:
valor = cache.ram('clave', f, 60)
The first time it is called, it calls the function f(), stores its output in a dictionary in memory (let's say "d"), and returns it, so that the value is:
valor = d['clave'] = f()
The second time it is called, if the key is in the dictionary and not older than the specified number of seconds (60), it returns the corresponding value without performing the function call again:
valor = d['clave']
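To make that protocol concrete, here is a minimal sketch of a cache.ram-style helper. This is NOT web2py's actual implementation — only an illustration of the key/function/seconds interface described above:

```python
import time

# In-memory store mapping key -> (timestamp, value)
_store = {}

def cache_ram(key, f, seconds):
    """Return the cached value for key if it is younger than `seconds`;
    otherwise call f(), cache its result, and return it."""
    now = time.time()
    if key in _store:
        stamp, value = _store[key]
        if now - stamp < seconds:
            return value  # cache hit: f is not called again
    value = f()
    _store[key] = (now, value)
    return value
```

The real web2py cache also handles thread locking and expiration cleanup; the sketch keeps only the call-skipping behavior that matters for the discussion here.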
How can we cache the output of the function esprimo for any input value? Like this:
>>> numero = 7
>>> segundos = 60
>>> print cache.ram(str(numero), lambda: esprimo(numero), segundos)
True
>>> print cache.ram(str(numero), lambda: esprimo(numero), segundos)
True
The output is always the same, but the first time cache.ram is called, esprimo is called; the second time, it is not.
Python functions, created with either def or lambda, allow re-factoring of existing functions in terms of a different set of arguments. cache.ram and cache.disk are web2py caching functions.
class
Because Python is dynamically typed, Python classes and objects may seem odd. In fact, you do not need to define the member variables (attributes) when declaring a class, and different instances of the same class can have different attributes. Attributes are generally associated with the instance, not the class (except when declared as "class attributes", which are the same as the "static member variables" of C++/Java).
Here is an example:
>>> class MiClase(object): pass
>>> miinstancia = MiClase()
>>> miinstancia.mivariable = 3
>>> print miinstancia.mivariable
3
Note that pass is a do-nothing command. In this case it is used to define a class MiClase that contains nothing. MiClase() calls the constructor of the class (in this case the default constructor) and returns an object, an instance of the class. The (object) in the class definition indicates that our class extends the built-in object class. This is not required, but it is good practice.
Here is a more involved class:
>>> class MiClase(object):
>>> z = 2
>>> def __init__(self, a, b):
>>> self.x = a
>>> self.y = b
>>> def sumar(self):
>>> return self.x + self.y + self.z
>>> miinstancia = MiClase(3, 4)
>>> print miinstancia.sumar()
9
Functions declared inside the class are methods. Some methods have special reserved names. For example, __init__ is the constructor. All variables are local variables of the method, except variables declared outside of methods. For example, z is a "class variable", equivalent to a C++ "static member variable" that holds the same value for all instances of the class.
Notice that __init__ takes 3 arguments and sumar takes one, and yet we call them with 2 and 0 arguments respectively. The first argument represents, by convention, the local name used inside the method to refer to the current object, but we could have used any other name. self plays the same role as *this in C++ or this in Java, but self is not a reserved keyword.
This syntax is necessary to avoid ambiguity when declaring nested classes, such as a class that is local to a method inside another class.
Special attributes, methods and operators
Class attributes, methods, and operators starting with a double underscore (__) are usually intended to be private (i.e. to be used internally but not exposed outside the class), although this is a convention that the interpreter does not enforce.
Some of them are reserved keywords and have a special meaning.
Here, as an example, are three of them:
__len__
__getitem__
__setitem__
They can be used, for example, to create a container object that acts like a list:
>>> class MiLista(object):
>>> def __init__(self, *a): self.a = list(a)
>>> def __len__(self): return len(self.a)
>>> def __getitem__(self, i): return self.a[i]
>>> def __setitem__(self, i, j): self.a[i] = j
>>> b = MiLista(3, 4, 5)
>>> print b[1]
4
>>> b[1] = 7
>>> print b.a
[3, 7, 5]
Other special operators include __getattr__ and __setattr__, which define the get and set attribute behavior for the class, and __add__ and __sub__, which overload the arithmetic operators. For the use of these operators, we refer the reader to more advanced books on the topic. We have already mentioned the special operators __str__ and __repr__.
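As a small sketch of operator overloading (the Vector class below is made up for this example, it is not from web2py), __add__ lets instances respond to the + operator:

```python
class Vector(object):
    # Hypothetical example class used only to illustrate __add__.
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        # Called for v1 + v2; returns a new Vector
        return Vector(self.x + other.x, self.y + other.y)

v = Vector(1, 2) + Vector(3, 4)
```

After this, v.x is 4 and v.y is 6.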
File input/output
In Python you can open and write to a file with:
>>> archivo = open('miarchivo.txt', 'w')
>>> archivo.write('hola mundo')
>>> archivo.close()
Similarly, you can read back what was written to the file with:
>>> archivo = open('miarchivo.txt', 'r')
>>> print archivo.read()
hola mundo
Alternatively, you can read in binary mode with "rb", write in binary mode with "wb", and open the file in append mode with "a", using standard C notation.
The read command takes an optional argument, which is the number of bytes. You can also jump to any location in a file using seek.
You can read back from the file with read:
>>> archivo.seek(5)
>>> print archivo.read()
mundo
and you can close the file with:
>>> archivo.close()
In the standard distribution of Python, known as CPython, variables are reference-counted, including those holding file handles, so CPython knows that when the reference count of an open file handle drops to zero, the file may be closed and the variable disposed of. However, in other implementations of Python such as PyPy, garbage collection is used instead of reference counting, and this means it is possible for too many open file handles to accumulate at the same time, resulting in an error before the garbage collector gets a chance to close and dispose of them all. Therefore it is best to explicitly close file handles when they are no longer needed. web2py provides two helper functions, read_file() and write_file(), in the gluon.fileutils namespace that encapsulate file access and make sure the file handles being used are properly closed.
When using web2py, you do not know where the current directory is, because it depends on how the framework was configured. The variable request.folder contains the path to the current application. Paths can be concatenated with the command os.path.join, discussed below.
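One general Python idiom (not a web2py feature) that guarantees a handle is closed promptly, regardless of exceptions, is the with statement, available since Python 2.5/2.6. A minimal sketch, using a temporary directory so nothing in the current folder is touched:

```python
import os
import tempfile

ruta = os.path.join(tempfile.mkdtemp(), 'miarchivo.txt')

# The file is closed automatically when the with block exits,
# even if an exception is raised inside it.
with open(ruta, 'w') as archivo:
    archivo.write('hola mundo')

with open(ruta) as archivo:
    contenido = archivo.read()
```

This makes the explicit close() calls shown above unnecessary.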
exec, eval
Unlike Java, Python is a truly interpreted language. This means it has the ability to execute Python commands stored in strings. For example:
>>> a = "print 'hola mundo'"
>>> exec(a)
hola mundo
What happened? The function exec tells the interpreter to call itself and execute the content of the string passed as argument. It is also possible to execute the content of a string within a context defined by the symbols in a dictionary:
>>> a = "print b"
>>> c = dict(b=3)
>>> exec(a, {}, c)
3
Here the interpreter, when executing the string a, sees the symbols defined in c (b in the example), but does not see c or a themselves. This is different from a restricted environment, since exec does not impose limits on what the inner code can do; it only defines the set of variables visible to the code.
A related function is eval, which does something very similar to exec, but expects the argument passed to it to evaluate to a value, and it returns that value:
>>> a = "3*4"
>>> b = eval(a)
>>> print b
12
import
For example, if we need to use a random number, we can do:
>>> import random
>>> print random.randint(0, 9)
5
This prints a random integer between 0 and 9 (including 9), 5 in the example. The function randint is defined in the module random. It is also possible to import an object from a module into the current namespace:
>>> from random import randint
>>> print randint(0, 9)
or import all objects from a module into the current namespace:
>>> from random import *
>>> print randint(0, 9)
or import everything into a newly defined namespace:
>>> import random as myrand
>>> print myrand.randint(0, 9)
From now on, we will mainly use objects defined in the modules os, sys, datetime, time and cPickle.
All of the web2py objects are accessible via a module called gluon, and that is the subject of later chapters. Internally, web2py uses many Python modules (for example thread), but you rarely need to access them directly.
In the following subsections we describe the modules that are most useful.
os
This module provides an interface to the operating system API. For example:
>>> import os
>>> os.chdir('..')
>>> os.unlink('archivo_a_borrar')
Some of the os functions, such as chdir, MUST NOT be used in web2py because they are not thread-safe.
os.path.join is very useful; it allows the concatenation of paths to directories and files in an OS-independent way:
>>> import os
>>> a = os.path.join('ruta', 'sub_ruta')
>>> print a
ruta/sub_ruta
The system environment variables can be accessed via:
>>> print os.environ
which is a read-only dictionary.
sys
The sys module contains many variables and functions, but the one we use the most is sys.path. It contains a list of paths where Python searches for modules. When we try to import a module, Python looks for it in all the folders listed in sys.path. If you install additional modules in some location and want Python to find them, you need to append the path to that location to sys.path:
>>> import sys
>>> sys.path.append('ruta/a/mis/módulos')
When running web2py, Python stays resident in memory, and there is a single sys.path, while there are many threads servicing the HTTP requests. To avoid a memory leak, it is best to check whether a path is already present before appending it:
>>> ruta = 'ruta/a/mis/módulos'
>>> if not ruta in sys.path:
sys.path.append(ruta)
datetime
The use of the datetime module is best illustrated by some examples:
>>> import datetime
>>> print datetime.datetime.today()
2008-07-04 14:03:50
>>> print datetime.date.today()
2008-07-04
Occasionally you may need to time-stamp data based on the UTC time as opposed to the local time. In that case you can use the following function:
>>> import datetime
>>> print datetime.datetime.utcnow()
2008-07-04 14:03:50
The datetime module contains various classes: date, datetime, time and timedelta. The difference between two date, two datetime or two time objects is a timedelta:
>>> a = datetime.datetime(2008, 1, 1, 20, 30)
>>> b = datetime.datetime(2008, 1, 2, 20, 30)
>>> c = b - a
>>> print c.days
1
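A timedelta can also be added to a date or datetime, which is handy for computing deadlines and offsets; a small sketch using the same dates as above:

```python
import datetime

inicio = datetime.datetime(2008, 1, 1, 20, 30)
# Adding a timedelta yields a new datetime, 36 hours later
plazo = inicio + datetime.timedelta(days=1, hours=12)
```

Here plazo is 2008-01-03 08:30.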
In web2py, date and datetime are used to store the corresponding SQL types when passed to or returned from the database.
time
The time module differs from date and datetime because it represents time as seconds from the epoch (beginning of 1970):
>>> import time
>>> print time.time()
1215138737.571
Refer to the Python documentation for other functions to convert between time in seconds and time as a datetime.
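As a sketch of such conversions, time.gmtime turns seconds into a struct_time, and calendar.timegm goes the other way (both are in the standard library):

```python
import calendar
import time

# A UTC time tuple: 2008-07-04 14:03:00
# (last three fields: weekday, yearday, dst flag, ignored by timegm)
tupla = (2008, 7, 4, 14, 3, 0, 0, 0, 0)
segundos = calendar.timegm(tupla)  # struct -> seconds since the epoch
de_vuelta = time.gmtime(segundos)  # seconds -> struct, in UTC
```

For local time instead of UTC, the corresponding pair is time.mktime and time.localtime.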
cPickle
This is a really powerful module. It provides functions that can serialize almost any Python object, including self-referential objects. For example, let's build a weird object:
>>> class MiClase(object): pass
>>> miinstancia = MiClase()
>>> miinstancia.x = 'algo'
>>> a = [1, 2, {'hola':'mundo'}, [3, 4, [miinstancia]]]
and now:
>>> import cPickle
>>> b = cPickle.dumps(a)
>>> c = cPickle.loads(b)
In this example, b is a string representation of a, and c is a copy of a generated by de-serializing b.
cPickle can also serialize to and de-serialize from a file:
>>> cPickle.dump(a, open('myfile.pickle', 'wb'))
>>> c = cPickle.load(open('myfile.pickle', 'rb'))
ReEdit:
You know, I really don't like my answer at all. I voted up the other answer, but I liked his original answer because not only was it clean but self-explanatory, without getting "fancy", which is what I fell victim to:
for row in doc.cssselect('tr'):
    for cell in row.cssselect('td'):
        if cell.text_content() != '':
            # do stuff here
there's not a much more elegant solution than that.
Original-ish:
You can transform the second for loop as follows:
[cell for cell in row.cssselect('td') if cell.text_content() != '']
and turn it into a list-comprehension. That way you've got a prescreened list. You can take that even farther by looking at the following example:
a = [[1,2],[2,3],[3,4]]
newList = [y for x in a for y in x]
which transforms it into [1, 2, 2, 3, 3, 4]. Then you can add in the if statement at the end to screen out values. Hence, you'd reduce that into a single line.
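Putting the two pieces together — flattening plus the trailing if — looks like this (illustrative values, not the cssselect objects from above):

```python
a = [[1, 2], [2, 3], [3, 4]]
# Flatten the nested list and filter out unwanted values in one comprehension
flat = [y for x in a for y in x if y != 2]
```

This produces [1, 3, 3, 4]: the flattened list with every 2 screened out.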
Then again, if you were to look at itertools:
from itertools import ifilter
ifilter(lambda x: x.text_content() != '', row.cssselect('td'))
produces an iterator which you can iterate over, skipping all items you don't want.
Edit:
And before I get more downvotes: if you're using Python 3.0, the built-in filter works the same way; there's no need to import ifilter.
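For the record, a minimal sketch of the built-in filter doing the same screening (here on plain strings rather than lxml elements):

```python
cells = ['a', '', 'b', '']
# In Python 3, filter returns a lazy iterator; wrap it in list() to realize it
nonempty = list(filter(lambda x: x != '', cells))
```

This yields ['a', 'b'], skipping the empty items just like the ifilter version.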
You must give a presentation tomorrow and you haven't prepared any figures yet; you must document your last project and you need to plot your most hairy class hierarchies; you are asked to provide ten slightly different variations of the same picture; you are pathologically unable to put your finger on a mouse and draw anything more complex than a square. In all these cases, don't worry! dot can save your day!
What is dot?
dot is a tool to generate nice-looking diagrams with a minimum of effort. It's part of GraphViz, an open source project developed at AT&T and released under an MIT license. It is a high-quality and mature product, with very good documentation and support, available on all major platforms, including Unix/Linux, Windows, and Mac. There is an official home page and a supporting mailing list.
What dot is not
First of all, let me make clear that dot is not just another paint program, nor a vector graphics program. dot is a scriptable, batch-oriented graphing tool; it is to vector drawing programs as LaTeX is to word processors. If you want to control every single pixel in your diagram, or if you are an artistic person who likes to draw free hand, then dot is not for you. dot is a tool for the lazy developer, the one who wants the job done with the minimum effort and without caring too much about the details.
Since dot is not a WYSIWYG tool—even if it comes with a WYSIWYG tool, dotty—it is not primarily an interactive tool. Its strength is the ability to generate diagrams programmatically. To fulfill this aim, dot uses a simple but powerful graph description language. Give dot very high level instructions and it will draw the diagrams for you, taking into account all the low level details. Though you have a large choice of customization options and can control the final output in many ways, it is not at all easy to force dot to produce exactly what you want, down to the pixel.
Expecting that would mean fighting with the tool. You should think of dot as a kind of smart boy, who likes to do things his own way and who is very good at it, but who becomes nervous if his master puts too much pressure on him. The right attitude with dot (just as with LaTeX) is to trust it and let it do the job. At the end, when dot has finished, you can always refine the graph by hand. (dotty, the interactive dot diagram editor, comes with GraphViz and can read and generate dot code.) In most cases, you do not need to do anything manually, since dot works pretty well. The best approach is to customize dot's options, so that you can programmatically generate one or one hundred diagrams with the least effort.
dot is especially useful in repetitive and automatic tasks, since it is easy to generate dot code. For instance, dot comes in handy for automatic documentation of code. UML tools can also do this work, but dot has an advantage over them in terms of ease of use, a flatter learning curve, and greater flexibility. On top of that, dot is very fast and can generate very complicated diagrams in fractions of a second.
Hello, World in dot
dot code has a C-ish syntax and is quite readable even to people who have not read the manual. For instance, this dot script:
graph hello {
// Comment: Hello World from ``dot``
// a graph with a single node Node1
Node1 [label="Hello, World!"]
}
generates the image shown in Figure 1.
Figure 1. "Hello, World!" from GraphViz
Save this code in a file called hello.dot. You can then generate the graph and display it with a simple one-liner:
$ dot hello.dot -Tps | gv -
The -Tps option generates PostScript code, which is then piped to the ghostview utility. I've run my examples on a Linux machine with ghostview installed, but dot works equally well under Windows, so you may trivially adapt the examples.
If you're satisfied with the output, save it to a file:
$ dot hello.dot -Tps -o hello.ps
You'll probably want to tweak the options, for instance adding colors and changing the font size. This is not difficult:
graph hello2 {
// Hello World with nice colors and big fonts
Node1 [label="Hello, World!", color=Blue, fontcolor=Red,
fontsize=24, shape=box]
}
This draws a blue square with a red label, shown in Figure 2.
Figure 2. A stylish greeting
You can use any font or color available to X11.
Editor's note: or presumably to Windows, if you're not running an X server.
dot is quite tolerant: the language is case insensitive and quoting the options color="Blue", shape="box" will work too. Moreover, in order to please C fans, you can use semicolons to terminate statements; dot will ignore them.
Nodes and edges
A generic dot graph is composed of nodes and edges. Our hello.dot example contains a single node and no edges. Edges enter in the game when there are relationships between nodes, for instance hierarchical relationships as in this example, which produced Figure 3:
digraph simple_hierarchy {
B [label="The boss"] // node B
E [label="The employee"] // node E
B->E [label="commands", fontcolor=darkgreen] // edge B->E
}
Figure 3. A hierarchical relationship
dot is especially good at drawing directed graphs, where there is a natural direction. (GraphViz also includes the similar neato tool to produce undirected graphs). In this example the direction is from the boss, who commands, to the employee, who obeys. Of course dot gives you the freedom to revert social hierarchies, as seen in Figure 4:
digraph revolution {
B [label="The boss"] // node B
E [label="The employee"] // node E
B->E [label="commands", dir=back, fontcolor=red]
// revert arrow direction
}
Figure 4. An inverted hierarchy
Sometimes, you want to put things of the same importance on the same level. Use the rank option, as in the following example, which describes a hierarchy with a boss, two employees, John and Jack, of the same rank, and a lower ranked employee Al who works for John. See Figure 5 for the results.
digraph hierarchy {
nodesep=1.0 // increases the separation between nodes
node [color=Red,fontname=Courier]
edge [color=Blue, style=dashed] //setup options
Boss->{ John Jack } // the boss has two employees
{rank=same; John Jack} //they have the same rank
John -> Al // John has a subordinate
John->Jack [dir=both] // but is still on the same level as Jack
}
Figure 5. A multi-level organizational chart
This example shows a nifty feature of dot: if you forget to give explicit labels, it will use the name of the nodes as default labels. You can also set the default colors and style for nodes and edges respectively. It is even possible to control the separation between (all) nodes by tuning the nodesep option. I'll leave it as an exercise for the reader to see what happens without the rank option (hint: you get a very ugly graph).
dot is quite sophisticated, with dozens of options, which you can find in the excellent documentation. In particular, the man page (man dot) is especially useful and well done. The documentation also explains how to draw graphs containing subgraphs. However, those advanced features are outside the scope of this brief article.
We'll discuss another feature instead: the ability to generate output in different formats. Depending on your requirements, different formats can be more or less suitable. For the purpose of generating printed documentation, the PostScript format is quite handy. On the other hand, if you're producing documentation to convert to HTML format and put on a Web page, PNG format can be handy. It is quite trivial to select an output format with the -T output format type flag:
$ dot hello.dot -Tpng -o hello.png
There are many other available formats, including all the common ones such as GIF, JPG, WBMP, and FIG, as well as more exotic ones.
Generating dot Code
dot is not a real programming language, but it is pretty easy to interface dot with a real programming language. Bindings exist for many programming languages—including Java, Perl, and Python. A more lightweight alternative is just to generate the dot code from your preferred language. Doing so will allow you to automate the entire graph generation.
Here is a simple Python example using this technique. This example script shows how to draw Python class hierarchies with the least effort; it may help you in documenting your code.
# dot.py
"Requires Python 2.3 (or 2.2 with from __future__ import generators)"

def dotcode(cls):
    setup = 'node [color=Green,fontcolor=Blue,fontname=Courier]\n'
    name = 'hierarchy_of_%s' % cls.__name__
    code = '\n'.join(codegenerator(cls))
    return "digraph %s{\n\n%s\n%s\n}" % (name, setup, code)

def codegenerator(cls):
    "Returns a line of dot code at each iteration."
    # works for new style classes; see my Cookbook
    # recipe for a more general solution
    for c in cls.__mro__:
        bases = c.__bases__
        if bases:  # generate edges parent -> child
            yield ''.join([' %s -> %s\n' % (b.__name__, c.__name__)
                           for b in bases])
        if len(bases) > 1:  # put all parents on the same level
            yield " {rank=same; %s}\n" % ''.join(
                ['%s ' % b.__name__ for b in bases])

if __name__ == "__main__":
    # prints the dot code generating a simple diamond hierarchy
    class A(object): pass
    class B(A): pass
    class C(A): pass
    class D(B, C): pass
    print dotcode(D)
The function dotcode takes a class and returns the dot source code needed to plot the genealogical tree of that class. codegenerator generates the code, traversing the list of the ancestors of the class (in the Method Resolution Order of the class) and determining the edges and the nodes of the hierarchy. codegenerator is a generator which returns an iterator yielding a line of dot code at each iteration. Generators are a cool recent addition to Python; they come particularly handy for the purpose of generating text or source code.
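The pattern is easy to reuse; here is a stripped-down sketch (a hypothetical helper, not part of the article's script) of a generator yielding one line of dot code per edge:

```python
def edge_lines(pairs):
    # Yield one line of dot code for each (parent, child) pair.
    for parent, child in pairs:
        yield '%s -> %s' % (parent, child)

# Joining the yielded lines gives a fragment of a digraph body
code = '\n'.join(edge_lines([('A', 'B'), ('A', 'C')]))
```

Here code is the two-line fragment "A -> B" / "A -> C", ready to be wrapped in a digraph declaration.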
The output of the script is the following self-explanatory dot code:
digraph hierarchy_of_D {
node [color=Green,fontcolor=Blue,fontname=Courier]
B -> D
C -> D
{rank=same; B C }
A -> B
A -> C
object -> A
}
Now the simple one-liner:
$ python dot.py | dot -Tpng -o x.png
generates Figure 6.
Figure 6. A Python class diagram
You may download dot and the other tools that come with GraphViz at the official GraphViz home page. You will also find plenty of documentation and links to the mailing list there.
Perl bindings (thanks to Leon Brocard) and Python bindings (thanks to Manos Renieris) are available. Also, Ero Carrera has written a professional-looking Python interface to dot.
The script dot.py I presented in this article is rather minimalistic. This is on purpose. My Python Cookbook recipe, Drawing inheritance diagrams with Dot, presents a much more sophisticated version with additional examples.
Michele Simionato is employed by Partecs, an open source company headquartered in Rome. He is actively developing web applications in the Zope/Plone framework.
Return to the LinuxDevCenter.com.
Copyright © 2009 O'Reilly Media, Inc.
Zakhar
Upload to your Freebox Révolution remotely
UPLOADING files to your "remote" Freebox!
Free recently opened up the possibility of accessing the management interface of the Freebox V6 remotely.
You can therefore easily download files from the remote Freebox to your PC through the interface provided by Free; however, doing the opposite, i.e. uploading a file from your remote PC to the Freebox, is trickier...
... some people even think it is impossible...
Well, with this script, you can upload files to your Freebox remotely in a few clicks!..
-1) Select the file to upload + Right-click + Script / upfree
-2) Enter your Freebox password (optional step if the password is configured)
-3) Well, no, there is no 3: the script does the rest!
Of course, text mode offers more possibilities and gives more information (for example the estimated remaining time):
zakhar@zakhar-desktop:~/Images$ upfree * -t 'Disque dur/Photos'
Your FreeBox password:
File          %  Total   Sent    Avg.    Total     Elapsed   Remaining   Current
                                 rate    time      time      time        rate
PIC_0521.JPG 100 5040k 5040k 114k --:--:-- 00:00:44 --:--:-- 114k
PIC_0522.JPG 100 4793k 4793k 114k --:--:-- 00:00:42 --:--:-- 119k
PIC_0523.JPG 100 5048k 5048k 114k --:--:-- 00:00:44 --:--:-- 119k
PIC_0524.JPG 100 5774k 5774k 117k --:--:-- 00:00:49 --:--:-- 120k
PIC_0525.JPG 24 4809k 1170k 106k 00:00:45 00:00:11 00:00:34 97.6k
-------------------------------------------------------------------------------
Total 29 71.6M 21.3M 113k 00:10:48 00:03:13 00:07:35 97.6k
________________________________________________________________
Minimal installation
Obviously, remote access must be enabled on the "remote" Freebox V6.
You must open a port on your PC so that the "remote" Freebox can come and fetch the files. To do so, refer to the manual of your router or your box (the one the PC is connected to, not the "remote" Freebox!).
For example, if the PC is also on the Free network (see my recommendation below), this is done in your online console at Free, under the Internet / Router category.
Copiez le script (clic-droit pour le sauvegarder) : lien vers le script upfree
... and that's all... provided you did not forget to make the script executable!
Note: the script does NOT require root privileges. It only writes to /tmp (for its temporary files).
This first page will always contain the latest version of the script! (my improvements, or your contributions)
History
Version 1.0, 6 November 2011
Version 1.0.1, 20 November 2011
Version 1.0.2, 10 November 2012
Version 1.0.3, 15 December 2012
Version 1.1.0, 26 January 2013
Version 1.2.0, 28 April 2013 (the current script)
Uninstalling
Simply delete the script from your system.
Recommended configuration
With the minimal installation above, the script will ask you every time for a pair of IP addresses/ports and a directory, on top of the Freebox password.
You can set all of this once and for all in a configuration file.
By default, this file is located at: ~/.config/freebox.conf
Example configuration file (it is "sourced" by the main script, so it is code):
# IP address and remote-access port of your Freebox V6
fbxIPPort="78.200.100.50:37373"
# Remote-access password of your Freebox V6
fbxPassword="Mot_2_passe"
# Public IP address and open port of your PC
localIPPort="82.50.100.200:45678"
# Path under which the files to upload are located
localWebRoot="/home/zakhar/upload-fbx"
# If you set up a forward and the local port is not
# the same as the one served on the PC, this variable
# holds the port served on the PC
# opt_port='55555'
opt_s='y' # Ask for a web server for the duration of the upload
opt_d=30 # Display refresh interval set to 30s
if [ "${opt_g}" = 'y' ]; then # Options specific to graphical mode below
opt_f='y' # Resume/restart an interrupted transfer
opt_t='/Disque dur/Vidéos' # Where to upload files on the Freebox V6
else # Options specific to text mode below
: # If there are no options, keep the : to avoid
# a syntax error in the else branch
fi
# Automatically targets the right directories for mkv and mp3 files
autotargets=".*mkv$>/Disque dur/Vidéos|.*mp3$>/Disque dur/Musiques"
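The `autotargets` rule string above maps a filename pattern to a target directory on the Freebox. As a rough sketch of how such a rule string can be resolved in POSIX shell (this is illustrative, not the script's actual code; only the variable format comes from the config example above):

```shell
#!/bin/sh
# Sketch (assumption): resolving an autotargets rule string of the form
# "pattern>target|pattern>target" for a given filename.
autotargets='.*mkv$>/Disque dur/Vidéos|.*mp3$>/Disque dur/Musiques'

resolve_target() {
    # $1 = filename; prints the matching target directory, if any
    old_ifs=$IFS; IFS='|'
    for rule in $autotargets; do
        pattern=${rule%%>*}      # part before the first '>'
        target=${rule#*>}        # part after the first '>'
        if printf '%s\n' "$1" | grep -q "$pattern"; then
            printf '%s\n' "$target"
            IFS=$old_ifs
            return 0
        fi
    done
    IFS=$old_ifs
    return 1
}

resolve_target "film.mkv"    # prints: /Disque dur/Vidéos
```

Splitting on `|` via `IFS` and stripping with `${rule%%>*}` / `${rule#*>}` keeps the sketch pure POSIX, so it runs under dash like the script itself.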
Use in graphical mode: the script is designed to work automatically from Nautilus. The simplest way is to put a symbolic link to the script in the directory Nautilus expects; for example, if you put the script in ~/Scripts:
ln -s ~/Scripts/upfree ~/.gnome2/nautilus-scripts/Upload\ Freebox
... you will then have, in the "Scripts" menu, an entry named "Upload Freebox"
Running on a NAS
... like my other script tuXtremMerge, it should work on a NAS (Synology, Qnap, ...)
EDIT: Indeed, it is a plain script that only uses the GNU utilities.
Even better than tuXtremMerge, it is POSIX-compliant, since it runs with Ubuntu's default script engine: dash. It should therefore not even be necessary to install bash on your NAS.
This way you can send a file from your NAS to the remote Freebox. Quite handy: since you are limited by your upload speed, large files can take a long time, but as a NAS is always on, that is not a problem!
If anyone wants to try it on their NAS, they are welcome.
Since I bought a NAS (Synology DS413j), the script is now fully functional on a NAS as well, as of version 1.2.0. Very handy for saving electricity and sparing wear on your PCs!
It does, however, require installing two packages (ipkg on Synology): coreutils and curl (libcurl).
Other script options
$ upfree -h
Usage: upfree [Options] File [Files]
Uploads files to the freebox
-c, --config Configuration file (default ~/.config/freebox.conf)
-i, --ipfbx IP[:Port] of the freebox (default port 80)
-p, --password Freebox connection password
-l, --local IP[:Port] of the local web server (default port 80)
-r, --root Root of the local web server
-f, --force Force overwrite/resume on the freebox
-d, --display Seconds between two display refreshes (default 2s)
-g, --graphic Graphical mode (automatic when launched from Nautilus)
-q, --quit In text mode, quit on the first error (-g implies -q)
-s, --server Install a temporary server (fails if the port is in use)
-t, --target Target for the copy (default: the downloads directory)
--port Local server port (useful if different from the -l option)
-h, --help This help text
-v, --verbose Verbose mode
-V, --version Script version
Values passed as parameters take precedence over values
present in the configuration files.
Roadmap
... next, after more debugging and cleanup/commenting of the current script, is a "File-System-Freebox". I have a few ideas on how to do this, but nothing definitive yet.
Last edited by Zakhar (28/04/2013, 19:37)
"A computer is like air conditioning: it becomes useless when you open windows." (Linus Torvalds)
Zakhar
Re: Upload to your remote Freebox Révolution
Additional considerations
Security: BEWARE, the way remote administration works on the Freebox Révolution is quite surprising coming from Free, who had accustomed us to much better security. In particular, the password travels entirely in the clear. This means that if you do this through a proxy or a Wi-Fi connection of "dubious" origin (for example), an open hotspot, a FreeWifi, etc., you risk having your password stolen, which would let the offender completely wreck your setup, among other nasty pranks.
Unfortunately there is nothing we can do, with this script or otherwise, to remedy this regrettable situation.
If it is not your own Freebox, I therefore advise you to give your password only to people you trust 100%, and if possible only if they are themselves on Free and over an ethernet connection... that will avoid attempts at snooping on Freebox passwords (which would look bad) by SFR, for example!
How it works
Some lamented that uploading was not possible through the remote interface. And yet it is... the principle is somewhat similar to the idea behind passive FTP: since there is no "active" way to send a file to the Freebox, we ask it to come and download a file from us.
This is possible because the entire remote administration is exposed, not just "the NAS". So we run a web server on our PC and, through the seedbox interface, give it the address of the file we want to upload (which the Freebox therefore treats as a "download").
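The same trick can be tried by hand with any web server. A minimal sketch, where the port, paths, and the use of python3's built-in server are illustrative stand-ins and not part of the script:

```shell
# Manual version of what upfree automates: expose the file over HTTP on
# the PC, then ask the remote Freebox's download manager to fetch it.
mkdir -p /tmp/upload-fbx
printf 'hello\n' > /tmp/upload-fbx/demo.txt

# Any web server will do; python3's built-in one is enough for a test.
python3 -m http.server 45678 --directory /tmp/upload-fbx >/dev/null 2>&1 &
SRV=$!
sleep 1

# In the Freebox remote interface you would now queue a download of:
#   http://<your.public.ip>:45678/demo.txt
# Locally we can at least check that the file is being served:
served=$(curl -s http://127.0.0.1:45678/demo.txt)
echo "$served"

kill $SRV
```

For the remote Freebox to reach this server, the chosen port must of course be forwarded to the PC, as described in the installation section.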
Advanced configuration
Installing and configuring a web server
In the "simple" configuration above, the script takes care of setting up a temporary web server (for the duration of the script).
However, this web server is fairly limited; in particular it lacks one important feature: resuming after an interruption.
If you have large files to upload, having to start all over again can be quite annoying.
Remember that the upload is limited by the speed of your PC's "upstream" line. On a good "standard" ADSL connection you top out at 1 Mbps.
To give you an idea: 1 GiB = 2.5 hours!
Next we will configure our upload directory, well isolated in a VirtualHost, as specified in the configuration file above (see post #1):
gksudo gedit /etc/apache2/sites-available/upload-fbx
and configure it like this:
Content of the file /etc/apache2/sites-available/upload-fbx:
# ===========================
# Definitions for upload fbx
# ===========================
NameVirtualHost *:45678
<VirtualHost *:45678>
ServerName upload-fbx
DocumentRoot "/home/zakhar/upload-fbx"
ErrorLog /var/log/apache2/upload-fbx.log
TransferLog /var/log/apache2/upload-fbx_access.log
<Directory />
Options FollowSymLinks Indexes Multiviews
AllowOverride All
Order deny,allow
Deny from all
Allow from 127.0.0.1
Allow from 192.168.0
Allow from 78.200.100.50
</Directory>
</VirtualHost>
then enable the site like this:
cd /etc/apache2/sites-enabled/
sudo ln -s ../sites-available/upload-fbx
sudo service apache2 restart
If all goes well, you should not get any error message when restarting your Apache2 server (last command above).
Do not forget, if you had set opt_s='y' in the configuration file, to remove it or comment it out; otherwise the script will refuse to start and warn you that it cannot set up the temporary server, since you now have an active server on the port.
If you launch the script in "verbose" mode (option -v), you should now see, during the check phase:
⬕ Using the server already installed on local port 45678
⬕ This server supports resuming an interrupted transfer (Range)
Among other things, you will now be able to:
- Resume an interrupted transfer (option -f)
- Run several transfers in parallel (graphical or text, either way)
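Whether a given local server supports resuming can be checked directly: a server that honours the Range header answers 206 Partial Content, while a minimal server that ignores it answers 200. A sketch (ports and paths are illustrative; python3's built-in server is used here precisely because, like a bare-bones temporary server, it does not implement Range):

```shell
mkdir -p /tmp/range-check
printf '0123456789\n' > /tmp/range-check/f.txt
python3 -m http.server 45679 --directory /tmp/range-check >/dev/null 2>&1 &
SRV=$!
sleep 1

# Ask for the first 4 bytes only; a resume-capable server (e.g. Apache)
# would answer 206, while a server ignoring Range answers 200.
code=$(curl -s -o /dev/null -w '%{http_code}' \
       -H 'Range: bytes=0-3' http://127.0.0.1:45679/f.txt)
echo "$code"

kill $SRV
```

Run the same `curl` against the Apache VirtualHost configured above to confirm it answers 206 before relying on the -f option for large files.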
iptables configuration for your PC
To improve your own PC's security, it is advisable to add filtering rules, since we have just opened a port. The temporary server and the script provide some protections (see the comments), but they are very far from foolproof!
And of course, if you install a "real" web server (section 1 above) to work around the limitations of the temporary server, this becomes "mandatory".
gksudo gedit /etc/init.d/myiptables.sh
and configure it like this:
Content of the file /etc/init.d/myiptables.sh:
#! /bin/sh
# Accept HTTP for Fbx Upload from local network, localhost, and our Distant Freebox
UPFBX_PORT=45678
iptables -A INPUT -p tcp --dport ${UPFBX_PORT} -s 78.200.100.50 -j ACCEPT
iptables -A INPUT -p tcp --dport ${UPFBX_PORT} -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport ${UPFBX_PORT} -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport ${UPFBX_PORT} -j LOG -m limit --limit 5/m --limit-burst 7 --log-prefix '** HACKERS **' --log-level 4
iptables -A INPUT -p tcp --dport ${UPFBX_PORT} -j DROP
Test that the script works by making it executable and running it:
sudo chmod +x /etc/init.d/myiptables.sh
sudo /etc/init.d/myiptables.sh
Check that you still have access to your server locally (127.0.0.1 rule above).
If everything is OK, all that remains is to register the script at startup, like this:
sudo update-rc.d -n myiptables.sh start 80 1 2 3 4 5 .
(note that -n only simulates; run the command again without it once the dry-run output looks right)
To check that you are not under attack, you can then run:
grep '** HACKERS **' /var/log/messages
On Precise:
grep '** HACKERS **' /var/log/kern.log
... and rest assured: even if some do come to try their luck, they will "hit a wall" with our well-tuned iptables!
Last edited by Zakhar (26/01/2013, 12:37)
Zakhar
Post reserved for project documentation. (2)
Zakhar
Post reserved for project documentation. (3)
... now it's your turn!
Zakhar
20 November 2011
Version 1.0.1
Now also works with an external USB disk connected. You can also copy to the external disk if you wish.
Various bug fixes.
Improved comments.
P.S.: the Freebox-FS is progressing in parallel. It will be in C for now, after all. I am not too far from having something that already works in "read-only".
Last edited by Zakhar (20/11/2011, 15:45)
oMerr
Cool, it works!!
benoitseize
Hello,
Great tutorial, but it does not work for me.
I want to transfer files from my kimsufi to my freebox v6.
I copied the script and set up freebox.conf, but when I run the script I get err 7.
Is it a port problem? I have no idea which port my kim uses. When installing apache I kept the default settings, so I imagine the port used is 8080, but I am not sure.
Thanks for your help!
Zakhar
Hello,
you would need to tell me a bit more, in particular how you launch it (list of parameters, without showing your IPs and password in the clear, of course!).
Also, to get a bit more detail on where it fails, run it with tracing, something like:
dash -xu upfree fichier_local
This will produce a lot of chatter on the screen and helps to see where the problem comes from!
That said, for the local port of your kimsufi, before diving into operations with the script, you can simply try with the standard interface of your Freebox.
Just go to the seedbox page and type:
http://ip.ta.kim.sufi:port/fichier
If that already works, you have a "manual" solution before considering the script, which will let you send files in bulk.
Last edited by Zakhar (12/02/2012, 16:49)
benoitseize
Hello, and thanks for your reply,
First of all, let me say I am an occasional and not very experienced Ubuntu user!
Here is my freebox.conf file:
fbxIPPort="82.xxx.xx.xxx:21"
fbxPassword="xxxxxxx"
localIPPort="xx.xx.xx.xx:7500"
localWebRoot="/home/xxxxx/downloads"
# opt_port='55555'
opt_s='y'
if [ "${opt_g}" = 'y' ]; then
opt_f='y'
opt_t='/Disque dur/xxxx'
else
:
fi
My PC runs Seven, so (in this case) I manage my kimsufi through NX.
The script runs fine, but after a few minutes I get the message "erreur curl:7".
I have the feeling the problem comes from opt_port, which should be filled in, but I do not know with which value.
Zakhar
Yes indeed, here is what the curl manual says (which you can read via NX... an excellent choice by the way, NX works really well!):
$ man curl
7 Failed to connect to host.
So the script failed to connect to your Freebox.
Another remark: it is fundamentally a very bad idea to assign port 21 for remote web access to your Freebox!..
Indeed, that port is generally reserved for FTP, and probably already used internally by the Freebox... it is not certain this is fully tested, and the Freebox may well trip over it.
Moreover, it is a port that is inevitably scanned by "pirates", so you increase the risk of problems.
I therefore suggest using a "user" port instead, picking a "big" number, for example between 40000 and 50000.
Then, before even thinking about running the script, you can test that your Freebox responds.
Simply request the login screen, like this:
$ curl 'http://78.78.78.78:45654/login.php'
(with the right IP address and the right port, of course: the one you chose for remote admin).
In principle this should dump HTML code into the console; that is your login page. If it does not, you have a problem with your port settings.
For now, opt_port is not involved; at this point the script cannot even reach your Freebox, so it has not even gotten to starting the upload!..
benoitseize
@Zakhar
A huge thank-you for your patience and the relevance of your help.
I indeed had a double port problem.
My freebox is on port 80 (implicit); I do not know why I put 21, probably out of distraction!
Apache communicates on port 7500.
With that fixed, it works perfectly!
Thanks for this great script.
I hope this conversation can help people with my profile.
Regards.
Zakhar
Thanks for your feedback!
And same as above... I suggest not using port 80 for remote access, because there the "pirates" can definitely have a field day trying passwords endlessly!.. Your Freebox is very vulnerable with this default port; it is certainly the first one anyone trying to break in will test.
The risk, if the password is discovered, is that some joker "breaks all your settings", destroys the files hosted on your Freebox, or fills it with junk so that the recording functions crash... in short, better to pick an "exotic" port. It takes you a few seconds, and even if it is not absolute protection, it is still something gained!
Last edited by Zakhar (14/02/2012, 20:07)
benoitseize
I will put your excellent advice into practice!
Have a good day.
patxixi
Hello Zakhar,
I read 2 of your topics carefully (tuXtremMerge and this one), and I have a question about the POSIX commands built into the Freebox NAS, if there are any.
My problem is the following: I would like to transfer content stored on my Freebox v6 (from an external HDD connected via e-sata, to be exact) to an external server (accessible through a webdav mount). Basically the opposite of what is covered in this forum.
Given ADSL upload speeds, that would let me leave the fbx to upload my files on its own.
What do you think?
Also, I gathered that you are developing a freebox-fs. Could you open a thread about it to explain what you have in mind?
Thanks for your insight.
Patxixi
Zakhar
No, unfortunately, what you want to do does not seem possible to me.
The Freebox does run free software; the sources were eventually published (cf. the lawsuit), along with the modifications made by Free. But alas, all of this is "tivoized", something that makes Richard Stallman, champion of free software, scream, but that is how it is.
So basically, even though everything is free, since you have no access to the Freebox through the equivalent of a terminal... in reality the user's "Freedom" is heavily restricted. Roughly, you are limited to the published available interfaces: Samba, FTP, local http, and remote http, plus the download client (http + torrent).
If the WebDAV server is something of yours that you control, then you just need to put an http client on it (a good curl script is enough) and you can implement the desired function. But if it is "public" storage on which you cannot intervene, there is alas no hope in the copy direction you want.
As for freebox-fs, no miracle either. I am not a developer at Free, so it is alas not something running inside the Freebox; it is something to put on your PC to access the Freebox as if it were a USB key (just slower, like anemic USB 1, given ADSL upstream speed!).
The read-only part works.
Search the forum, you should find it... but it is of no use for what you want to do, alas.
Last edited by Zakhar (17/09/2012, 18:50)
patxixi
Thanks for your reply.
Indeed, I later found a few posts about your freebox-fs.
As for webdav, no, I do not control it (it is O^H, not to name them).
Maybe one day Free will open an SDK for the fbx v6 beyond the remote-control one.
Thanks, and keep up the good work.
Zakhar
Bah... it is probably already too late; they are certainly busy preparing the 7!..
And who knows, it may come closer to a "real" NAS on which you get a shell to run things.
Last edited by Zakhar (17/09/2012, 23:29)
cabrette
hello,
can this script be used with Ubuntu booted from the CD?
thanks for your answers
Zakhar
So the answer is: yes, you can run the script with an Ubuntu booted from a Live-CD (see the 'Edit').
I am testing right now; there is a small bug when there is no configuration file (I am fixing that), and also, on the latest versions, curl is not installed by default.
So I am adding a check and a message explaining how to install curl.
Once all this is tested, I will put it back online, and that will be version 1.0.2.
[Edit]: there we go, the script is up to date; it has been tested on a live-CD and works (12.04).
As said above, the first time you will be asked to install curl, since it is no longer installed by default on the latest Ubuntu releases.
The command for that is:
sudo apt-get install curl
... it is shown by the script in case you do not remember it!
Oh, and one more thing... to avoid having to install Apache or another server, I advise launching the script with the -s option (otherwise it will suggest it anyway). The script then takes care of starting a web server by itself (without resume support, but resuming is not very useful on a Live-CD!)
... also remember that if the PC on which you run the script is behind a box (a Freebox, for example) in router mode, you must set up a port forward for this to work. And on the side of the Freebox you want to upload to, remote administration must obviously have been enabled, and you must have its password!
Last edited by Zakhar (11/11/2012, 15:22)
cabrette
thanks for the clarification
Zakhar
15 December 2012
Version 1.0.3
Added the -d (--display) option, which lets you set the refresh rate of the upload progress display.
The default is 2 seconds (like the standard Free interface); the idea of the option is to set a higher number, e.g. 30 for 30 s. The progress display then updates less often, but you save bandwidth that is useful for the upload.
Indeed, suppose the refresh request takes about 500 bytes of upload (mostly headers); we then consume an average of 2 Kbps of our upload bandwidth just to get a frequent display. So if the upload cap is 120 K, we will top out at around 118 K instead.
This option is thus a "compromise" between a frequent display and better use of the bandwidth. It is of course all the more useful the slower your line is.
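The 2 Kbps overhead figure above can be checked with simple arithmetic: about 500 bytes of request headers every 2 seconds, converted to bits per second.

```shell
bytes_per_poll=500   # approximate size of one progress request (headers)
poll_interval=2      # seconds between two refreshes (the default)
overhead=$(( bytes_per_poll * 8 / poll_interval ))
echo "${overhead} bits/s"   # prints: 2000 bits/s, i.e. ~2 Kbps
```

Raising -d to 30 divides this overhead by 15, which is why the option matters most on slow upstream lines.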
Last edited by Zakhar (15/12/2012, 13:53)
root
Hello Zakhar,
I have been using your script for a week, and it is perfect (exactly what I needed).
But for the past 2 days, crash: no way to make it work anymore. Apparently the latest freebox update causes some problem.
Are you still working on this script? Do you intend to maintain it, or should I actively start looking for another solution?
Thanks again for your investment, and congratulations on your work (because for me it worked flawlessly)
Zakhar
Hello Root, and thanks for your feedback.
Yes, I am still working on these scripts (upfree and dlfree), and indeed, firmware 1.1.9 requires inserting a CSRF token that did not exist before. Without it, you get a 403 error (Forbidden) and the script no longer works.
I will make the appropriate change, but for now the Freebox I use for testing is my mother's, and I dare not cut off her internet "in the evening" like that... it could ruin a phone conversation or her TV.
... so you will have to be a bit patient (or modify the script yourself if you feel up to it). In the meantime, the best option is to use the standard Free web interface... unless, of course, you are using the web server built into the script... then it is a tad more complicated.
Tell me how urgent it is for you, and I will see about bothering my mother with a reboot. Without that, I cannot test (since she is on 1.1.8), and making changes "blind"... is not easy!
root
Thanks for the quick reply.
There is no urgency at all, and I do not want to put pressure on you (given the price, that would be the last straw). I just wanted to know whether you still use this script and whether an update could be expected soon.
If you plan to update it, great; otherwise I would have dug into your script myself. I do some scripting, but I freely admit that anything involving XML/JSON parsing and the like makes my skin crawl. So I will wait for an update; it will save me from tearing my hair out
Thanks again!
Zakhar
Yes, I use it once or twice a week to put things on my mother's Freebox.
So when it moves to 1.1.9, I will make the change.
For information, the change to make was explored last weekend; the forum exchange is here: http://forum.ubuntu-fr.org/viewtopic.php?id=448343&p=2
So, as we say... it just needs doing!
Last edited by Zakhar (23/01/2013, 09:26)
|
bertrand47
[Solved] Duplicate sources list
For a few days I have been getting a "duplicate sources list" error, apparently due to an i386/amd64 duplicate. My installation is amd64.
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/main amd64 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_main_binary-amd64_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/restricted amd64 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-amd64_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/universe amd64 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_universe_binary-amd64_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/multiverse amd64 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_multiverse_binary-amd64_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/main i386 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_main_binary-i386_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/restricted i386 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-i386_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/universe i386 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_universe_binary-i386_Packages)
W: Duplicate sources.list entry http://security.ubuntu.com/ubuntu/ precise-security/multiverse i386 Packages (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_multiverse_binary-i386_Packages)
W: Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main amd64 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages)
W: Duplicate sources.list entry http://extras.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages)
With:
cat /etc/apt/sources.list
I get:
bertrand@bertrand-HP-Compaq-dc5750:~$ cat /etc/apt/sources.list
# deb cdrom:[LuninuX OS 12.00 _Purple Possum_ - amd64 Beta 2]/ dists/precise/main/binary-i386/
# deb cdrom:[LuninuX OS 12.00 _Purple Possum_ - amd64 Beta 2]/ dists/precise/restricted/binary-i386/
# deb cdrom:[LuninuX OS 12.00 _Purple Possum_ - amd64 Beta 2]/ precise main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://fr.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise universe
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise multiverse
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates multiverse
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-updates multiverse
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-security main restricted
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-security main restricted
deb http://fr.archive.ubuntu.com/ubuntu/ precise-security universe
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-security universe
deb http://fr.archive.ubuntu.com/ubuntu/ precise-security multiverse
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu precise partner
# deb-src http://archive.canonical.com/ubuntu precise partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu precise main
deb-src http://extras.ubuntu.com/ubuntu precise main
Can anyone help me?
Last edited by bertrand47 (24/05/2012 at 14:59)
Offline
xabilon
Re: [Solved] Duplicate sources list
Hi
Do you have any *.list files in the /etc/apt/sources.list.d/ directory?
What does this give:
cat /etc/apt/sources.list.d/*.list
To mark a topic as solved: edit the first post and add [Résolu] to the title.
Offline
bertrand47
Re: [Solved] Duplicate sources list
It gives this:
bertrand@bertrand-HP-Compaq-dc5750:~$ cat /etc/apt/sources.list.d/*.list
deb http://ppa.launchpad.net/atareao/atareao/ubuntu precise main
deb-src http://ppa.launchpad.net/atareao/atareao/ubuntu precise main
deb http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu precise main
deb-src http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu precise main
# deb http://ppa.launchpad.net/jan-hoffmann/gnome-shell/ubuntu precise main
# deb-src http://ppa.launchpad.net/jan-hoffmann/gnome-shell/ubuntu precise main
# deb http://ppa.launchpad.net/docky-core/stable/ubuntu precise main
# deb-src http://ppa.launchpad.net/docky-core/stable/ubuntu precise main
deb http://ppa.launchpad.net/yannubuntu/boot-repair/ubuntu precise main
deb-src http://ppa.launchpad.net/yannubuntu/boot-repair/ubuntu precise main
deb http://ppa.launchpad.net/gwendal-lebihan-dev/cinnamon-stable/ubuntu precise main
deb-src http://ppa.launchpad.net/gwendal-lebihan-dev/cinnamon-stable/ubuntu precise main
deb http://ftp.free.org/mirrors/archive.ubuntu.com/ubuntu/ precise-backports multiverse
deb-src http://ftp.free.org/mirrors/archive.ubuntu.com/ubuntu/ precise-backports multiverse
# deb http://archive.canonical.com/ubuntu precise partner
# deb-src http://archive.canonical.com/ubuntu precise partner
# deb http://extras.ubuntu.com/ubuntu precise main
# deb-src http://extras.ubuntu.com/ubuntu precise main
deb http://ppa.launchpad.net/satyajit-happy/themes/ubuntu precise main
deb-src http://ppa.launchpad.net/satyajit-happy/themes/ubuntu precise main
deb http://ppa.launchpad.net/tiheum/equinox/ubuntu precise main
deb-src http://ppa.launchpad.net/tiheum/equinox/ubuntu precise main
deb http://ppa.launchpad.net/tualatrix/ppa/ubuntu precise main
deb-src http://ppa.launchpad.net/tualatrix/ppa/ubuntu precise main
deb http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu precise main
deb-src http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu precise main
deb http://ppa.launchpad.net/webupd8team/gnome3/ubuntu precise main
deb-src http://ppa.launchpad.net/webupd8team/gnome3/ubuntu precise main
Offline
crasyo
Re: [Solved] Duplicate sources list
Hello!
I have exactly the same problem after upgrading from Oneiric to Precise!
Here are sources.list and the *.list files:
deb http://ppa.launchpad.net/osmoma/audio-recorder/ubuntu precise main
deb-src http://ppa.launchpad.net/osmoma/audio-recorder/ubuntu precise main
deb-src http://archive.ubuntu.com/ubuntu precise main restricted #Added by software-properties
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://fr.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise restricted main multiverse universe #Added by software-properties
## Major bug fix updates produced after the final release of the
## distribution.
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-updates restricted main multiverse universe #Added by software-properties
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise universe
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates multiverse
## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb-src http://fr.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu precise-security main restricted
deb-src http://security.ubuntu.com/ubuntu precise-security restricted main multiverse universe #Added by software-properties
deb http://security.ubuntu.com/ubuntu precise-security universe
deb http://security.ubuntu.com/ubuntu precise-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu precise partner
deb-src http://archive.canonical.com/ubuntu precise partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu precise main
deb-src http://extras.ubuntu.com/ubuntu precise main
deb http://fr.archive.ubuntu.com/ubuntu/ precise restricted main multiverse universe
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
## Major bug fix updates produced after the final release of the
## distribution.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://fr.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb-src http://fr.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse #Added by software-properties
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
## This software is not part of Ubuntu, but is offered by third-party
deb http://fr.archive.ubuntu.com/ubuntu/ precise-proposed restricted main multiverse universe
## developers who want to ship their latest software.
I'm quite puzzled; this has never happened to me before! I should add that, as a result, I can't run updates from the console: it keeps telling me to fix the problem by running apt-get update...
Thanks in advance if you can help
Offline
xabilon
Re: [Solved] Duplicate sources list
First, replace the entire contents of the /etc/apt/sources.list file with this:
## OFFICIAL REPOSITORIES ##
deb http://fr.archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-security main restricted universe multiverse
deb http://fr.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
## ADDITIONAL REPOSITORIES ##
# deb http://archive.canonical.com/ubuntu precise partner
deb http://extras.ubuntu.com/ubuntu precise main
That will be much clearer.
This file is a system file, so you will need administrator (root) rights to modify it. A quick search will show you how.
Offline
bertrand47
Re: [Solved] Duplicate sources list
Thanks.
Offline
xabilon
Re: [Solved] Duplicate sources list
All good?
Offline
bertrand47
Re: [Solved] Duplicate sources list
Perfect, thanks.
Offline
crasyo
Re: [Solved] Duplicate sources list
Thanks,
I just made this change and ran an update; it works!
It's a pleasure to read competent people, and so quick to help.
Any idea what might have happened??? So I don't do it again...
Offline
analogfaz
Re: [Solved] Duplicate sources list
Hello,
In addition to the Ubuntu repositories, I have a bunch of Canonical ones:
deb http://archive.canonical.com/ubuntu precise partner
deb-src http://archive.canonical.com/ubuntu precise partner
deb http://archive.canonical.com/ubuntu precise-updates partner
deb-src http://archive.canonical.com/ubuntu precise-updates partner
deb http://archive.canonical.com/ubuntu precise-backports partner
deb-src http://archive.canonical.com/ubuntu precise-backports partner
deb http://archive.canonical.com/ubuntu precise-security partner
deb-src http://archive.canonical.com/ubuntu precise-security partner
# deb http://archive.canonical.com/ubuntu precise-proposed partner
# deb-src http://archive.canonical.com/ubuntu precise-proposed partner
Do these duplicate the Ubuntu repositories themselves?
Would it be worth removing them from my sources.list?
Offline
bowmore
Re: [Solved] Duplicate sources list
Same problem, solved by xabilon's method.
"Well then, I'll try not to slam the door on my way out"
Buzz Aldrin, 21 July 1969, on the Sea of Tranquility
Offline
xabilon
Re: [Solved] Duplicate sources list
@analogfaz: those are "partner" repositories (packages offered by companies partnered with Canonical), so no, they don't duplicate anything.
Offline
analogfaz
Re: [Solved] Duplicate sources list
Thanks xabilon!
Offline
emraude90
Re: [Solved] Duplicate sources list
I had the same problem as bertrand47 and tried xabilon's method; I got the following results:
W: Failed to fetch gzip:/var/lib/apt/lists/partial/fr.archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages  Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
Could you tell me how to fix this problem? I should mention that I'm a beginner on Ubuntu.
Thanks,
Offline
xabilon
Re: [Solved] Duplicate sources list
Try this in a terminal:
sudo rm /var/lib/apt/lists/partial/* -vf
sudo apt-get update
This deletes the package lists and downloads them again.
That said, it's possible this is an error in the repository itself, in which case the problem wouldn't be on your end.
Offline
cdjklm
Re: [Solved] Duplicate sources list
If I use your technique, will I lose all my sources? I have quite a few sources that keep my installed software up to date, and I wouldn't want to lose that.
Asus N55SF, dual boot Win7 / Ubuntu (14.04) 64-bit, Intel® Core™ i7-2670QM CPU @ 2.20GHz × 8, GeForce GT555M 2GB DDR3, 6GB RAM, 750GB HDD
Benq Joybook Lite, Xubuntu 12.04 32-bit, Intel N270 1.66GHz × 2, integrated 945 GMA 256MB, 2GB RAM, 250GB HDD, WiFi, Bluetooth, integrated MC8775 3G modem
Offline
xabilon
Re: [Solved] Duplicate sources list
Hi cdjklm (didn't strain yourself too hard finding that username, did you?)
No: it deletes from your hard drive the lists of packages contained in the repositories, not the repositories or the packages themselves; a sudo apt-get update downloads them again.
Your list of sources is in the file /etc/apt/sources.list (and possibly in the files in the /etc/apt/sources.list.d directory); these commands don't touch it.
If you don't trust it, you can make a backup of your source list(s).
Offline
cdjklm
Re: [Solved] Duplicate sources list
Ah yes, thanks. It's true that I never think of making a backup, even though it would have saved me a lot of reformatting. I'll try it.
(As for the username: it comes from arcade machines, where you needed 3 letters; being from North America that gave "cidje", but on the internet there's a minimum length, so why overthink it, just follow the keyboard ;)
Offline
cdjklm
Re: [Solved] Duplicate sources list
Actually, I'll show you my file first to see what you think of it, since it contains more than what this forum recommends adding.
# deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/
# deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/
# deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ dists/maverick/main/binary-i386/
deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ dists/maverick/restricted/binary-i386/
deb http://ftp.crihan.fr/ubuntu/ precise main restricted
deb-src http://ftp.crihan.fr/ubuntu/ precise main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://ftp.crihan.fr/ubuntu/ precise-updates main restricted
deb-src http://ftp.crihan.fr/ubuntu/ precise-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://ftp.crihan.fr/ubuntu/ precise universe
deb-src http://ftp.crihan.fr/ubuntu/ precise universe
deb http://ftp.crihan.fr/ubuntu/ precise-updates universe
deb-src http://ftp.crihan.fr/ubuntu/ precise-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://ftp.crihan.fr/ubuntu/ precise multiverse
deb-src http://ftp.crihan.fr/ubuntu/ precise multiverse
deb http://ftp.crihan.fr/ubuntu/ precise-updates multiverse
deb-src http://ftp.crihan.fr/ubuntu/ precise-updates multiverse
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://ftp.crihan.fr/ubuntu/ precise-backports main restricted universe multiverse
deb-src http://ftp.crihan.fr/ubuntu/ precise-backports main restricted universe multiverse
deb http://ftp.crihan.fr/ubuntu/ precise-security main restricted
deb-src http://ftp.crihan.fr/ubuntu/ precise-security main restricted
deb http://ftp.crihan.fr/ubuntu/ precise-security universe
deb-src http://ftp.crihan.fr/ubuntu/ precise-security universe
deb http://ftp.crihan.fr/ubuntu/ precise-security multiverse
deb-src http://ftp.crihan.fr/ubuntu/ precise-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu precise partner
deb-src http://archive.canonical.com/ubuntu precise partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu precise main
deb-src http://extras.ubuntu.com/ubuntu precise main
deb http://archive.ubuntugames.org ubuntugames main
deb-src http://archive.ubuntugames.org ubuntugames main
## MultiSystem repository
deb http://liveusb.info/multisystem/depot all main
deb http://dswd.github.com/Swine/repository/deb stable main
deb-src http://dswd.github.com/Swine/repository/deb stable main
deb http://ftp.crihan.fr/ubuntu/ precise-proposed restricted main multiverse universe
deb http://repository.mein-neues-blog.de:9000/ /
Offline
xabilon
Re: [Solved] Duplicate sources list
The ftp.crihan.fr repositories look like perfectly normal mirrors.
The other repositories I don't know; it's up to you to remember why you added them.
Offline
|
I am a bit frustrated. I developed a Django project locally and it worked just fine. The problems started to emerge when I moved to production. I am trying to host my website on "a2hosting", which allows running Django on shared hosting. The server runs an application named "Passenger".
My problem is that I can't upload images using the Django admin web interface. I am aware of Django's file size upload limit. I don't even get an error page; I just get this error when I use Chrome (I get similar errors in other browsers):
No data received. Unable to load the webpage because the server sent no data. Reload this webpage. Press the reload button to resubmit the data needed to load the page. Error code: ERR_EMPTY_RESPONSE
I also tried setting the permissions to 777 and tried changing my CDN to Dropbox; nothing helped. If I add content through the Django admin web interface that does not include a file, it works fine. I searched the internet for hours and didn't find the answer...
I created a super simple, short application to demonstrate the problem to the support team. I tried to contact them, but they said the problem is my app and they can't help me.
You may review my app at [github.com/yaronsamuel/test_project][1]. I also include some relevant code from the app (the same code as on GitHub):
models.py
from django.db import models

class Image(models.Model):
    item_picture = models.ImageField(upload_to='Images/')
    title = models.CharField(max_length=30, blank=True)
settings.py
import os

# Django settings for gallery project.

LOCAL_DIR = r"c:\gallery"
IS_LOCAL = os.path.isdir(LOCAL_DIR)

if IS_LOCAL:
    PROJECT_DIR = LOCAL_DIR
else:
    PROJECT_DIR = r"/home/ordercak/public_html/test.ordercakeinhaifa.com/"

def relToAbs(path):
    return os.path.join(PROJECT_DIR, path).replace('\\', '/')

DEBUG = True
TEMPLATE_DEBUG = DEBUG

ADMINS = [
    ('Yaron', 'samuel.yaron@gmail.com'),
    # ('Your Name', 'your_email@example.com'),
]
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',  # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'gallery.db',  # Or path to database file if using sqlite3.
        # The following settings are not used with sqlite3:
        'USER': '',
        'PASSWORD': '',
        'HOST': '',  # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
        'PORT': '',  # Set to empty string for default.
    }
}
ALLOWED_HOSTS = ['*']
TIME_ZONE = 'Asia/Tel_Aviv'
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
USE_I18N = True
USE_L10N = True
USE_TZ = True
MEDIA_ROOT = relToAbs('media')
MEDIA_URL = '/media/'
STATIC_ROOT = relToAbs('static')
STATIC_URL = '/static/'
MY_STATIC_ROOT = relToAbs('static_files')
STATICFILES_DIRS = (
    MY_STATIC_ROOT,
)
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
)
SECRET_KEY = 'f3oda#81rs%yu+*-bc%_5@*nmmf0!yiyw23d(!34awfexfc+j-'
TEMPLATE_LOADERS = (
    ('django.template.loaders.cached.Loader', (
        'django.template.loaders.filesystem.Loader',
        'django.template.loaders.app_directories.Loader',
    )),
)
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'gallery.urls'
WSGI_APPLICATION = 'gallery.wsgi.application'
TEMPLATE_DIRS = (
    relToAbs('templates'),
)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'gallery',
)
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.contrib.auth.context_processors.auth',
    'django.core.context_processors.i18n',
    'django.core.context_processors.request',
    'django.core.context_processors.media',
    'django.core.context_processors.static',
    'django.core.context_processors.debug',
    'django.contrib.messages.context_processors.messages',
)  # Optional
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.JSONSerializer'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}
urls.py
from django.conf.urls import patterns, include, url
from django.contrib import admin
import settings
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
admin.autodiscover()
urlpatterns = patterns('',
    url(r'^$', 'gallery.views.homepage', name='index'),
    url(r'^admin/', include(admin.site.urls)),
    (r'^static/(?P<path>.*)$', 'django.views.static.serve',
     {'document_root': settings.MY_STATIC_ROOT}),
    (r'^media/(?P<path>.*)$', 'django.views.static.serve',
     {'document_root': settings.MEDIA_ROOT}),
)
I hope you can help me :)
|
duration between time in ms and rate
Hi
Maybe an easy question!
I'm trying to find a ratio that gives the maximum duration between two one-shots.
A clock in ms (made with a phasor and a rate) sends a step going from 1 to 0 (with a delay).
I'm looking for an equation giving the maximum duration for each time/rate ratio.
A quick example: copy all of the following text, then, in Max, select
New From Clipboard.
----------begin_max5_patcher----------
1736.3ocyZtsbihCD.8YmuBEdNICR.FXps1T62wTSkBCx1ZWLxEH64VM4ae0
Ev.1fQHSXbdfXK.Q2G0cqta7udXg0J52wEVfOC9BXwhe8vhExgDCrn76Kr1E
883znB4kYES2sCmwrdRcNF96L43qo4.7wi3bv1niXfMXO+NHYa.qv7SgA3n3
sfL72.P.Y29CoEDZF3wGerZhRIY3X5gL4rgJGbMMiUP9IVLFD8hc4v6ywEbY
Hhwmi2xwwLkBfPg7KA3hbE+yt7.3qk2U1gcjrTLSpGvpoJhEukKl8NMHj34B
bbaOSzCrpopRpHIRPPW8uOibrZnAYQ6jZf0+jShRsDm32O7f3vS2Hze80WAL
ph3rsXPTZJH4PtDL.FYGFHWVDnWNnbrOw+HFvu0WmR16H4sSf8sw9VSiYrGM
OrmrtF0jB9x.EbXO34me9uAYT1Iye9xiskglyKUr.daH0UgTGmFHEYOJjBmV
jxCCvm1KH59bRFCD0EsBOAKcU6.OoSruTqcjjD5OJk1dFU5USiRCUtOt91lp
0P+4w6QDCxP2Bu.oZ45eatEslFmfw6U.WNOnRFGmjA1UXFvfPoOfmyMwqRSK
eoaEDZa.u7l0MEWSxRpiPyuylYaLRStkRiDjS3sYxAUSSokmWnALzcdXXE2F
fWZq4gRyFTfxRboAJ9DmQ05TJ+YZl4fCTkYn2MlmoaKnLjGEpCnbs8kKOiZF
X+XOV8TEZdDeAGXsJJaiE3qF.O7OShhe2pK8EcM8UxqkxzYT5t33HrArMYkl
Q2rIEaY3NopkZUJTt8u3.uTZC6cEfHRz1Dtu5.iU6SNxLgfx+oBgINpulrrW
MwbSHs7+z1SxVsqjafodRn+LNR8jSXAYy6.3KSSNgUvITBGG+AxIri0eeCfC
WEx3mdJoxiepGl.gu3E5AC7Ga.oRx3K8w4N777l8BBupsSG3w0XaGinyNbQQ
zF7kFM+HKFjRi+uoDQdpsnTsBoLukfQAGS1hZRMa3jNQ5gNVr.GBKvP+ZrLR
alkFfkIONinHrQFnAM3VlnpcNGefFu6i.M6I6w70S6IEKN99048LNr.C+S6C
w.1.HnfQ2Oo8ovoLupfAXhSGLIv.lHR75oS+SpNSIkvIavuOs.RU5t+PMxoq
ZCBrMMDyGQJMI3TVzzBGzRU2gMwiJ.deDp4udmW53zF9MPkmGzDpftSB.KdO
Ejd1xFYp8BTwDUevF210dvaHGulBFNujFk3fOgjT7Qbt30g03guvpYmJkkMK
KbCZqxeWzBkpYlC4Mh79vIMpadA2ia8XlC5dbV8025s3zp8Fpqjj0r2Hm+bi
Njxdq6Es1mecTLt2atSLuvZSNIglIDhV2oX3pGGeIWETvqobKuhrn8cbyLJM
cUT9QRAYUJt05.2.NJirimojnOrReP6S2GYmr88slKbVDeN1VDmSSSaMUpyb
riyjfORhweijv1JmqZZ058y1vtokqUqwOyESYVVQu9ba500orcukMtr53In1
aea5q.a43UccTbRqpgkgNTe3IC0UoZ0qpZqkptTaME1slBaJ.CDT3CkFsBtd
d9bPaaav5lmrG+UyrX7f0GaFps+bW58Une8vtWMQulUYOQPscCyunbJB8y8w
zPiQpuLOGUwlp2hay80GgOn+.HcB4Tq96YJKVsIllRyq1RJXI+O+m57Svy.B
KmTDGohka+RnN1rgRWeOeM.bO1rN15aytUX1zV8DceJ3oN+z4pmVMi7iONxm
jwQdwfkYjNqHp5dT8ycfUjdBF6ftgnHWI7QcVEhe6L8rAsTuDmuaHWPOjGW8
rJ24.zVGSvELR1obr9Rcnwytvsjjj14zHWFII6o7XgkxmlVEiUrkhCTC4VrX
b1E9GWts0Utue3sHJid1I2WhMRWbaOgxc4fbEPUEVQzQbxa7GCOTyaQL91Dq
NvTNvspJqqhS9npn4JmZnho3E9sJJsrzxSyg0C0TXhpHeaTAMex6ULT8Jz8b
FeyJLp8V2RuJZ+FK2VV12P+5bBF7GSfaXccGdpebdMAAmmjc7mqrvyq25iUz
7DbdqbJZUEeijH5N+nqrRox9T0.AzvMh77tCz0aUViWqnwIIMbBR0bf+8RNz
vpPdAs2bWYjbdXuRSkKB2E1LHVOQ5D+X6ZbQWDkq6HbWX2pqHsTCQBYOqhDD
pClBal2vGuLEz5w0sLENuXRGqoYVjBr0PlVdGJSx024SnPZHRtypDoEk7l2P
S95HRvYUl70jRyWjIOsrtmWJ4pytJtyrLAu6Lvc0Ph7m4fkHsLml4DBBzJif
yj8YIk.sRSYlkIz8kL4oyh2MYQw+xue3+A3Ky6N.
-----------end_max5_patcher-----------
Thanks and best,
f
|
I get this error when running pip install -U selenium. Midway through, the script hits the following SyntaxError:
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "C:\Python32\Scripts\build\rdflib\setup.py", line 6, in <module>
from rdflib import __version__
File "rdflib\__init__.py", line 64, in <module>
from rdflib.term import URIRef, BNode, Literal, Variable
File "rdflib\term.py", line 367
except TypeError, te:
^
SyntaxError: invalid syntax
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "C:\Python32\Scripts\build\rdflib\setup.py", line 6, in <module>
from rdflib import __version__
File "rdflib\__init__.py", line 64, in <module>
from rdflib.term import URIRef, BNode, Literal, Variable
File "rdflib\term.py", line 367
except TypeError, te:
^
SyntaxError: invalid syntax
----------------------------------------
Command python setup.py egg_info failed with error code 1
Since it is a SyntaxError, I assume it is a Python version problem; I'm running 3.2.2. Pip did come with a pip-3.2.exe file, which I tried to run, but I got the same error. I'm pretty new to Python, so it might be something simple.
And how can it be a SyntaxError? pip is an already-made program.
Running Win7, Python 3.2.2
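For context: `except TypeError, te:` is Python 2's exception syntax, and Python 3 rejects the comma form at parse time, which is why a Python-2-only package (here rdflib, pulled in during the install) fails before it even runs. A minimal illustration of the two spellings:

```python
# Python 2 wrote:   except TypeError, te:   (a SyntaxError when parsed by Python 3)
# Python 3 writes:  except TypeError as te:
try:
    int("not a number")
except ValueError as err:
    caught = type(err).__name__
print(caught)  # -> ValueError
```

So the failure is not a pip bug: pip downloads the package's setup.py and runs it with your interpreter, and your Python 3.2 interpreter cannot parse the Python 2 source.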
|
I am attempting to use South to create a migration to convert my data from using the 4326 SRID to 900913. After the migration, the coordinates remain in their 4326 format. (It's easy to tell the difference between the 4326 and 900913 projections, since the numbers are much larger in 900913)
Here are the forwards() and backwards() methods from that migration:
class Migration(SchemaMigration):

    def forwards(self, orm):
        # Changing field 'ZipCoords.point'
        zips = orm.ZipCoords.objects.all()
        db.alter_column('itinerary_generator_zipcoords', 'point', self.gf('django.contrib.gis.db.models.fields.PointField')(srid=900913, null=True))
        for zip in zips:
            zip.point.transform(900913)
            zip.save()

    def backwards(self, orm):
        # Changing field 'ZipCoords.point'
        zips = orm.ZipCoords.objects.all()
        db.alter_column('itinerary_generator_zipcoords', 'point', self.gf('django.contrib.gis.db.models.fields.PointField')(null=True))
        for zip in zips:
            zip.point.transform(4326)
            zip.save()
I am checking the values using the Django admin. Also, interestingly, this migration "works" in reverse: it turns my coordinates into much smaller (but incorrect) numbers.
|
With inspiration from http://stackoverflow.com/a/1526245/287923, but simplifying it, I've implemented a request cache as follows:
from threading import currentThread

caches = {}

class RequestCache(object):

    def set(self, key, value):
        cache_id = hash(currentThread())
        if caches.get(cache_id):
            caches[cache_id][key] = value
        else:
            caches[cache_id] = {key: value}

    def get(self, key):
        cache_id = hash(currentThread())
        cache = caches.get(cache_id)
        if cache:
            return cache.get(key)
        return None

class RequestCacheMiddleware(object):

    def process_response(self, request, response):
        cache_id = hash(currentThread())
        if caches.get(cache_id):
            del(caches[cache_id])
        return response
caches is a dictionary of per-thread cache dictionaries, accessed via the get and set methods. The middleware clears the cache for the current thread in its process_response method, after the response is rendered.
It is used like this:
from request_cache import RequestCache
cache = RequestCache()
cache.get(key)
cache.set(key, value)
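For comparison (not from the original post), the same per-thread idea can be sketched with threading.local, which gives every thread its own storage automatically and avoids keying a shared module-level dict by hash(currentThread()):

```python
import threading

# each thread sees its own attributes on this object
_store = threading.local()

class LocalRequestCache(object):
    """Per-thread key/value cache; threads never see each other's data."""

    def set(self, key, value):
        if not hasattr(_store, "data"):
            _store.data = {}
        _store.data[key] = value

    def get(self, key):
        # a thread that never wrote anything simply gets None back
        return getattr(_store, "data", {}).get(key)

    def clear(self):
        # the middleware equivalent: drop this thread's cache entirely
        _store.data = {}

cache = LocalRequestCache()
cache.set("user_id", 42)
print(cache.get("user_id"))  # -> 42
```

A middleware's process_response could then simply call cache.clear(), with no bookkeeping of thread ids at all.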
|
How do you remove all elements from the dictionary whose key is a element of lst?
Further help:
A for loop works on any sequence, and a list is a sequence.
for key in sequence: print key
Use the del statement:
for key in list_:
    if key in dict_:
        del dict_[key]
map(dictionary.__delitem__, lst)
I know nothing about Python, but I guess you can traverse a list and remove entries by key from the dictionary?
newdict = dict(
    (key, value)
    for key, value in olddict.iteritems()
    if key not in set(list_of_keys)
)
Later (like in late 2012):
keys = set(list_of_keys)
newdict = dict(
    (key, value)
    for key, value in olddict.iteritems()
    if key not in keys
)
Or, on Python 2.7+, a dictionary comprehension:
keys = set(list_of_keys)
newdict = {
    key: value
    for key, value in olddict.iteritems()
    if key not in keys
}
Or maybe even a Python 2.7 dictionary comprehension plus a set difference on the keys:
required_keys = set(olddict.keys()) - set(list_of_keys)
return {key: olddict[key] for key in required_keys}
Oh yeah, the problem might well have been that I had the condition reversed for calculating the keys required.
d = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
l = ['zero', 'two', 'four', 'five']
for k in frozenset(l) & frozenset(d):
    del d[k]
for i in lst:
    if i in d:
        del d[i]
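For an end-to-end check of the comprehension approach, here is the same idea in Python 3 syntax (items() instead of iteritems(); the sample data is illustrative):

```python
olddict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
list_of_keys = ['zero', 'two', 'four', 'five']

# Build the set once, then keep only the entries whose key is not in it.
keys = set(list_of_keys)
newdict = {key: value for key, value in olddict.items() if key not in keys}

print(newdict)  # {'one': 1, 'three': 3}
```

Building `keys` once outside the comprehension matters: putting `set(list_of_keys)` inside the condition rebuilds the set for every entry of the dictionary.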
|
I've been using OpenCV methods to get images from my camera. I'd like to decode QR codes from those images using the zbar library, but after I convert the images to PIL to be processed by zbar, it doesn't seem like the decoding is working.
import cv2.cv as cv
import zbar
from PIL import Image

cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)

while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:
        break

# create a reader
scanner = zbar.ImageScanner()
# configure the reader
scanner.parse_config('enable')
# obtain image data
pil = Image.fromstring("L", cv.GetSize(img), img.tostring())
width, height = pil.size
raw = pil.tostring()
# wrap image data
image = zbar.Image(width, height, 'Y800', raw)
# scan the image for barcodes
scanner.scan(image)
# extract results
for symbol in image:
    # do something useful with results
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

cv.DestroyAllWindows()
|
I'm learning wxPython and ran into the following glitch in a tutorial example.
When the application starts, it draws the shapes sized to the application window, and at first everything looks as it should. But when I resize the window, the drawing gets broken. Here is the video: http://screencast.com/t/0XOetqJ2W5x
And here is the code:
# Chapter 8: Drawing to the Screen, Using Device Contexts
# Recipe 1: Screen Drawing
#
import os
import wx

#---- Recipe Code ----#

class Smiley(wx.PyControl):
    def __init__(self, parent, size=(50,50)):
        super(Smiley, self).__init__(parent,
                                     size=size,
                                     style=wx.NO_BORDER)

        # Event Handlers
        self.Bind(wx.EVT_PAINT, self.OnPaint)

    def OnPaint(self, event):
        """Draw the image on to the panel"""
        dc = wx.PaintDC(self)  # Must create a PaintDC
        # Get the working rectangle we can draw in
        rect = self.GetClientRect()
        # Setup the DC
        dc.SetPen(wx.BLACK_PEN)  # for drawing lines / borders
        yellowbrush = wx.Brush(wx.Colour(255, 255, 0))
        dc.SetBrush(yellowbrush)  # Yellow fill
        # Find the center and draw the circle
        cx = (rect.width / 2) + rect.x
        cy = (rect.width / 2) + rect.y
        radius = min(rect.width, rect.height) / 2
        dc.DrawCircle(cx, cy, radius)
        # Give it some square blue eyes
        # Calc the size of the eyes 1/8th total
        eyesz = (rect.width / 8, rect.height / 8)
        eyepos = (cx / 2, cy / 2)
        dc.SetBrush(wx.BLUE_BRUSH)
        dc.DrawRectangle(eyepos[0], eyepos[1],
                         eyesz[0], eyesz[1])
        eyepos = (eyepos[0] + (cx - eyesz[0]), eyepos[1])
        dc.DrawRectangle(eyepos[0], eyepos[1],
                         eyesz[0], eyesz[1])
        # Draw the smile
        dc.SetBrush(yellowbrush)
        startpos = (cx / 2, (cy / 2) + cy)
        endpos = (cx + startpos[0], startpos[1])
        dc.DrawArc(startpos[0], startpos[1],
                   endpos[0], endpos[1], cx, cy)
        # Draw a yellow rectangle to cover up the
        # unwanted black lines from the wedge part of
        # our arc
        dc.SetPen(wx.TRANSPARENT_PEN)
        dc.DrawRectangle(startpos[0], cy,
                         endpos[0] - startpos[0],
                         startpos[1] - cy)

#---- End Recipe Code ----#

class SmileyApp(wx.App):
    def OnInit(self):
        self.frame = SmileyFrame(None,
                                 title="Drawing Shapes",
                                 size=(300,400))
        self.frame.Show()
        return True

class SmileyFrame(wx.Frame):
    def __init__(self, parent, *args, **kwargs):
        wx.Frame.__init__(self, parent, *args, **kwargs)

        # Attributes
        self.panel = SmileyPanel(self)

        # Layout
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.panel, 1, wx.EXPAND)
        self.SetSizer(sizer)

class SmileyPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)

        # Layout
        self.__DoLayout()

    def __DoLayout(self):
        # Layout a grid of 4 smileys
        msizer = wx.GridSizer(2, 2, 0, 0)
        for x in range(4):
            smile = Smiley(self)
            msizer.Add(smile, 0, wx.EXPAND)
        self.SetSizer(msizer)

if __name__ == '__main__':
    app = SmileyApp(False)
    app.MainLoop()
As you can see, the drawing functions are placed in the OnPaint method, which is bound to the wx.EVT_PAINT event. So I thought it would redraw the image on the panel every time the system repaints the window.
I'm using Win7, Python2.7 and wxPython 2.8.12.1
This is important for me, as I'm going to write an application with scalable diagrams in its window.
|
HKH
Re: Cerise 0.8 - small businesses, freelancers, artisans
Indeed, it doesn't work with those credentials...!
I created a company named: Ubuntu
login ubuntu
pass ubuntu
Happy testing
Offline
j1100
Subscribing to this thread.
This will really interest me in 2-3 years when I'm my own boss
Thanks a lot, this is really great.
Offline
krichtof
Hello
I'm a freelancer, and I'm interested in using Cerise.
But the demo site doesn't seem to be fully working at the moment. When you try to create an invoice (http://demo.cerise-pgi.com/invoices/new), a 500 error occurs:
500 Internal error
The server encountered an unexpected condition which prevented it from fulfilling the request.
Page handler: <bound method Invoices.new of <cerisepgi.controllers.invoices.Invoices instance at 0x8d7958c>>
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/CherryPy-2.3.0-py2.5.egg/cherrypy/_cphttptools.py", line 121, in _run
self.main()
File "/usr/lib/python2.5/site-packages/CherryPy-2.3.0-py2.5.egg/cherrypy/_cphttptools.py", line 264, in main
body = page_handler(*virtual_path, **self.params)
File "<string>", line 3, in new
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/controllers.py", line 360, in expose
*args, **kw)
File "<string>", line 5, in run_with_transaction
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/database.py", line 359, in so_rwt
retval = func(*args, **kw)
File "<string>", line 5, in _expose
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/controllers.py", line 373, in <lambda>
mapping, fragment, args, kw)))
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/controllers.py", line 410, in _execute_func
output = errorhandling.try_call(func, *args, **kw)
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/errorhandling.py", line 77, in try_call
return func(self, *args, **kw)
File "<string>", line 3, in new
File "/usr/lib/python2.5/site-packages/TurboGears-1.0.7-py2.5.egg/turbogears/identity/conditions.py", line 207, in require
return fn(self, *args, **kwargs)
File "/home/gml/turbogears/cerisepgi-chaton/CerisePGI/cerisepgi/controllers/invoices.py", line 320, in new
int(datetime.today().strftime("%d")))).count())
File "/home/gml/turbogears/cerisepgi-chaton/CerisePGI/cerisepgi/controllers/invoices.py", line 304, in byThisDay
column < datetime(year, month, day+1))
ValueError: day is out of range for month
I assume it's a temporary problem ;-)
In any case, thanks for this project, it looks pretty good (as long as invoices can be created, that is ;-) )
Happy holidays
Christophe
HKH
Hi
As stated on the Cerise site:
<<Keep in mind that this demo server runs the development version of Cerise, which changes very, very often. Parts of the application may therefore not yet be stable, may be broken, or may be incomplete.>>
Looking forward to the next stable release so we can test it..
Offline
milou38
A quick word about krichtof's error. I've been using CerisePGI for several months and have humbly helped Guillaume Ludwig, who develops this software, by sending him bug reports and fixes whenever I found any. Regarding the error above, I believe I found that it only occurs on the last day of each month. If we look at the error
File "/home/gml/turbogears/cerisepgi-chaton/CerisePGI/cerisepgi/controllers/invoices.py", line 304, in byThisDay
column < datetime(year, month, day+1))
ValueError: day is out of range for month
In fact there is a small bug in invoices.py: at line 304, remove the "+1" and it works:
column < datetime(year, month, day))
That's it. On the other hand, I'm a bit worried: I haven't heard from the software's author, he hasn't replied to my last emails (that was a few months ago), and none of his sites have been updated. If he could let us know he's still alive! (and that Cerise is therefore still progressing ...).
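The failure mode reported above, and a way to get "the next day" that never goes out of range, can be seen in a few lines of Python (the dates here are illustrative; adding a timedelta is the standard safe approach, whereas dropping the "+1" changes which day the comparison includes):

```python
from datetime import datetime, timedelta

year, month, day = 2009, 12, 31  # the last day of a month

# Naive "next day" arithmetic blows up on the last day of any month:
try:
    datetime(year, month, day + 1)
except ValueError as e:
    print(e)  # day is out of range for month

# Adding a timedelta rolls over month and year boundaries correctly:
next_day = datetime(year, month, day) + timedelta(days=1)
print(next_day)  # 2010-01-01 00:00:00
```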
thx_84
Hello,
I'm a freelancer in Switzerland; is there any way to change the currency in Cerise?
well, how do you sign on the computer??? darn, I've just ruined my screen with my pen...
Offline
thx_84
Hello, it's me again
While trying to test the installation, here's what Cerise throws at me:
Traceback (most recent call last):
File "/usr/bin/tg-admin", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2566, in <module>
parse_requirements(__requires__), Environment()
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 524, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: TurboGears==1.0.8
Offline
gmli
Hello, it's me again
While trying to test the installation, here's what Cerise throws at me:
Traceback (most recent call last):
File "/usr/bin/tg-admin", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2566, in <module>
parse_requirements(__requires__), Environment()
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 524, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: TurboGears==1.0.8
Did you follow the mini installation tutorial:
http://trac.cfait.fr/CerisePGI-Trac/wiki/InstallationDeCerisePGI ?
Offline
gmli
That's it. On the other hand, I'm a bit worried: I haven't heard from the software's author, he hasn't replied to my last emails (that was a few months ago), and none of his sites have been updated. If he could let us know he's still alive! (and that Cerise is therefore still progressing ...).
Hi there!
Hmm, I don't remember receiving any emails about Cerise; or maybe they got lost during the December/January rush.
Recently the project got a boost thanks to the free-software company Entr'ouvert. So a 0.9 release should definitely be coming.
Actually, my main problem is that I have almost no information about "who" uses Cerise, apart from a bit of activity on le-petit-cerisier.eu and my direct clients. And since Cerise meets my clients' needs and mine (as well as those of the people who contact me), I don't have much reason to move on it at the moment.
Anyone interested here? The mailing list didn't survive the upgrade from one Ubuntu LTS to the next; I'll reinstall it, and those who are interested can come and sign up.
Offline
Xarkam
[OT]
Hi, gmli. I see you're using a Trac instance to manage issues.
Can I contact you about that?
[/OT]
Offline
gmli
[OT]
Hi, gmli. I see you're using a Trac instance to manage issues.
Can I contact you about that?
[/OT]
No problem
Offline
thx_84
@gmli:
Yes, I followed the tutorial on the page you mentioned... could it be that the program doesn't recognize the version I'm using (I'm on Jaunty)?
Otherwise, about the currency: do you have a solution to suggest? If it's just a matter of editing some HTML, I should be able to handle it...
Offline
phi00611
Hello,
I run a server based on Ubuntu 9.04 (Jaunty). I installed CerisePGI following the tutorial http://trac.cfait.fr/CerisePGI-Trac/wik … eCerisePGI.
Unfortunately, I get 2 errors:
When running the command
tg-admin sql create
I get the following error:
Traceback (most recent call last):
File "/usr/bin/tg-admin", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2566, in <module>
parse_requirements(__requires__), Environment()
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 524, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: TurboGears==1.0.8
And when I start the CerisePGI engine with the command
python start-cerisepgi.py
I get the following error:
Traceback (most recent call last):
File "start-cerisepgi.py", line 3, in <module>
pkg_resources.require("TurboGears")
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 626, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 528, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (TurboJson 1.2 (/var/lib/python-support/python2.6), Requirement.parse('TurboJson>=1.1.4,<1.2'))
Thanks in advance for your help.
Offline
|
How do you use Selenium in Django to choose and select an option in a <select> tag of a form?
This is how far I got:
def setUp(self):
    self.browser = webdriver.Firefox()

def tearDown(self):
    self.browser.quit()

def test_project_info_form(self):
    # set url
    self.browser.get(self.live_server_url + '/tool/project_info/')
    # get module select
    my_select = self.browser.find_element_by_name('my_select')
    #! select an option, say the first option !#
    ...
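For the missing step, Selenium ships a Select helper (selenium.webdriver.support.ui.Select) that wraps a <select> WebElement. A sketch of how the test body might continue, as a fragment of the method above (it needs the live browser from setUp, so it is not a standalone script):

```python
from selenium.webdriver.support.ui import Select

# Wrap the <select> WebElement found above in Selenium's Select helper.
select = Select(my_select)

# Choose the first option...
select.select_by_index(0)

# ...or select by the option's visible text or value attribute instead:
# select.select_by_visible_text('Some option label')
# select.select_by_value('some-value')
```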
|
From PEP 328 (http://www.python.org/dev/peps/pep-0328/#rationale-for-relative-imports), you should actually avoid naming a Python module starting with a "dot", because a leading dot means a relative import in Python.
If you really insist on doing so, you can, but you will have to use the imp module.
Example usage:
import imp

with open('.secret/__init__.py', 'rb') as fp:
    secret = imp.load_module('.secret', fp, '.secret/__init__.py',
                             ('.py', 'rb', imp.PY_SOURCE))
So for your use case, where you want to load values from db.py, it would look something like this:
import imp

with open('.secret/db.py', 'rb') as fp:
    db = imp.load_module('.secret', fp, '.secret/db.py',
                         ('.py', 'rb', imp.PY_SOURCE))

print db.DB_PASSWORD  # This will print out your DB_PASSWORD's value. Or use it whichever way you want.
I wouldn't advise it, though.
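As an aside, on Python 3 the imp module is deprecated (and removed in 3.12); the same trick works with importlib, which loads a module straight from a file path regardless of what its directory is named. This sketch creates a throwaway file standing in for .secret/db.py (the path, module name, and password value are all illustrative):

```python
import importlib.util
import os
import tempfile

# Create a stand-in for .secret/db.py (illustrative only).
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'db.py')
with open(path, 'w') as fp:
    fp.write("DB_PASSWORD = 'hunter2'\n")

# Load a module directly from a file path; the directory name is irrelevant.
spec = importlib.util.spec_from_file_location('secret_db', path)
db = importlib.util.module_from_spec(spec)
spec.loader.exec_module(db)

print(db.DB_PASSWORD)  # hunter2
```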
|
What part of the question? a, b?
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
Offline
a, for starters.
Here lies the reader who will never open this book. He is forever dead.
Taking a new step, uttering a new word, is what people fear most. ― Fyodor Dostoyevsky, Crime and Punishment
Offline
Know how to simulate or enumerate it?
Offline
I would try a simulation, but I'm on my phone. I do not have access to a CAS.
Offline
What do you want me to do then?
Offline
I was just asking if you've seen it and if you have any ideas for it...
Offline
I have not looked at it but first I would run a simulation. As soon as I write it of course.
Offline
Hm, what name did he use for you in that pdf?
Offline
That is the name he took from the email I sent him!
Offline
How many email accounts do you have?
Offline
2 for everyday use. 1 for sites that I do not trust.
Offline
Which one do you email me from? The last one?
Offline
One of them has no storage capability, so I have to clean out the spam frequently. I use the Lycos one for you.
Simulation checks out his answer.
Offline
Know how to simulate or enumerate it?
How do I simulate it?
Offline
That quote was written over a thousand years ago, please refresh my memory.
Offline
Drop dead, the game from the dice PDF. How do I simulate it?
Offline
s=0;
fubar[L_]:=Module[{s=0,ans=L},
While[ans!={},
ans = RandomChoice[{1,2,3,4,5,6},Length[ans]];
If[Count[ans,2|5]==0,s+=Total[ans],ans=DeleteCases[ans,2|5]];
];
s]
Table[fubar[ RandomChoice[{1,2,3,4,5,6},5]],{100000}]//Mean//N
16.08264
All indentations for readability taken out.
This is a really poor effort and you should avoid programming in this manner.
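The same Monte Carlo estimate is easy to reproduce in Python. This is a sketch of the Drop Dead game as described (5 starting dice; a roll containing any 2s or 5s scores nothing and loses those dice, any other roll scores its total), not a line-by-line translation of the Mathematica:

```python
import random

def drop_dead(ndice=5, rng=random):
    """Play one game of Drop Dead and return the final score."""
    score = 0
    dice = ndice
    while dice > 0:
        roll = [rng.randint(1, 6) for _ in range(dice)]
        dead = sum(1 for d in roll if d in (2, 5))
        if dead == 0:
            score += sum(roll)  # no 2s or 5s: the whole roll scores
        else:
            dice -= dead        # 2s and 5s are removed, nothing scores
    return score

random.seed(42)  # fixed seed so the estimate is reproducible
trials = 100000
mean = sum(drop_dead() for _ in range(trials)) / float(trials)
print(round(mean, 2))  # close to the ~16.06 expected value
```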
Offline
Hi,
Using the technique from http://mathisfunforum.com/viewtopic.php … 63#p265663, we can utilize a table instead, since recursion in this case is prohibitive.
k = 3  # a face appears k times
n = 10
throws = [[[[[[0]*n for _ in xrange(n)] for _ in xrange(n)] for _ in xrange(n)] for _ in xrange(n)] for _ in xrange(n)]  # a 6-dimensional list

for a in range(k+1):
    for b in range(k+1):
        for c in range(k+1):
            for d in range(k+1):
                for e in range(k+1):
                    for f in range(k+1):
                        if (a == 0 or b == 0 or c == 0 or d == 0 or e == 0 or f == 0):
                            throws[a][b][c][d][e][f] = 6*k - (a+b+c+d+e+f)
                        else:
                            throws[a][b][c][d][e][f] = float(throws[a-1][b][c][d][e][f]+throws[a][b-1][c][d][e][f]+throws[a][b][c-1][d][e][f]+throws[a][b][c][d-1][e][f]+throws[a][b][c][d][e-1][f]+throws[a][b][c][d][e][f-1])/6

print throws[k][k][k][k][k][k]
Perhaps we can solve more problems involving trees this way.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Offline
|